HCI Talks by SIGCHI Researchers (co-hosted by ユビキタス情報社会基盤研究センター)

After the SIGCHI conference in Seoul, two HCI researchers will visit the University of Tokyo and give talks that are open to everyone.

Date: April 27 (Mon), 13:30-15:00 (doors open at 13:15)
Venue: Ishibashi Hall, 3F, Daiwa Ubiquitous Computing Research Building
http://www.u-tokyo.ac.jp/content/400020145.pdf (building #125)

TALK 1

Alex Olwal
Ph.D., Google, USA
http://www.olwal.com

TITLE: Augmented Realism through Relevant Minimalism

ABSTRACT
Augmented Reality (AR) merges virtual information with the real environment for intuitive and direct user interfaces. I will give an overview of our research projects that emphasize a seamless blend of the digital and the physical. We leverage exotic displays, sensing, and context to register digital content with the environment and to support rich interactions. Our goal is to render less, with every pixel being relevant, dynamic, and grounded in the space. This is one of our key philosophies for enhancing human senses and capabilities through minimal, yet highly relevant, augmentations that embrace the realism of our physical world. Our projects include novel interaction techniques, see-through displays, sensing technologies, immaterial user interfaces, and dynamic shape displays.

BIO
Alex Olwal (Ph.D., M.Sc.) is an Interaction Researcher at Google, Affiliate Faculty at KTH, and Research Affiliate at the MIT Media Lab.

Alex designs and develops interactions and technologies that embrace digital and physical experiences. He is interested in tools, techniques and devices that enable new interaction concepts for the augmentation and empowerment of the human senses.

Alex’s research (olwal.com) includes augmented reality, spatially aware mobile devices, medical user interfaces, ubiquitous computing, touch-screens, as well as novel interaction devices and displays.

He has previously worked on the development of new technologies for Human-Computer Interaction at MIT – Massachusetts Institute of Technology (Cambridge, MA), KTH – Royal Institute of Technology (Stockholm), Columbia University (New York, NY), the University of California (Santa Barbara, CA), and Microsoft Research (Redmond, WA).

At Google, Alex conducts applied research and development of novel and exotic input and output mechanisms for wearables, and explores the associated interaction techniques. Alex’s goal is to expand expressiveness while avoiding interference with the user’s experience of physical reality.

TALK 2

Pedro Lopes
Ph.D. Candidate, Human Computer Interaction Lab, Hasso Plattner Institute, Germany
http://plopes.org/

TITLE: Affordance++ and Proprioceptive Interaction

ABSTRACT
We propose extending the affordance of objects by allowing them to communicate dynamic use, such as (1) motion (e.g., a spray can shakes when touched), (2) multi-step processes (e.g., a spray can sprays only after shaking), and (3) behaviors that change over time (e.g., an empty spray can no longer allows spraying). Rather than enhancing objects directly, however, we implement this concept by enhancing the user. We call this affordance++. By stimulating the user’s arms using electrical muscle stimulation, our prototype allows objects not only to make the user actuate them, but also to perform required movements while the user is merely approaching the object, such as not touching objects that do not “want” to be touched. In our user study, affordance++ helped participants successfully operate devices with poor natural affordance, such as a multi-functional slicer tool or a magnetic nail sweeper, and to stay away from cups filled with hot liquids.

This concept of creating object behavior by controlling user behavior is what we call affordance++. Conceptually, there are many ways of implementing it, generally by applying sensors and actuators to the user’s body, such as the arm. We actuate users by controlling their arm poses using electrical muscle stimulation: users wear a device on the arm that talks to their muscles through electrodes attached to the skin. This allows for a particularly compact form factor and is arguably even more “direct” than an indirection through a mechanical system. However, affordance++ need not be tied to a particular means of actuating the user; the core idea is to actuate the user instead of the objects the user interacts with.

BIO
http://plopes.org/bio/

CHI 2015 Presentations & Activities

At CHI 2015, the following papers from our group will be presented.

  • ChameleonMask: Embodied Physical and Social Telepresence using human surrogates
    Kana Misawa and Jun Rekimoto (Monday, April 20, 11:30 – 12:50, alt.chi “Augmentation”, Room 308)

  • ImmerseBoard: Immersive Telepresence Experience using a Digital Whiteboard
    Keita Higuchi, Yinpeng Chen, Philip A. Chou, Zhengyou Zhang, and Zicheng Liu (Wednesday, April 22, 9:30 – 10:50, Papers “Telepresence Video, Robots, and Walls”, Room E6)

Outstanding Master’s Thesis Awards (Department Chair’s Award)

Two master’s theses from our lab received the FY2014 Outstanding Master’s Thesis Award (Department Chair’s Award) of the Graduate School of Interdisciplinary Information Studies, and were honored at the degree conferral ceremony on March 24.

永井翔平 (Shohei Nagai): A Study of an Experience-Sharing System Using Omnidirectional Video

新田慧 (Kei Nitta): Development of an Autonomously Moving Ball that Adapts to the Player’s Ability

Final Examination of Yoichi Ochiai’s Doctoral Thesis

The final examination (public defense) of the doctoral thesis of Yoichi Ochiai (Graduate School of Interdisciplinary Information Studies, the University of Tokyo; Rekimoto Laboratory) will be held as follows:

Thesis title: Graphics by Computational Acoustic Field
Date and time: Thursday, March 26, 2015, from 15:00
Venue: Daiwa House Ishibashi Nobuo Memorial Hall (3F, Daiwa Ubiquitous Computing Research Building, the University of Tokyo)

Thesis committee:
Jun Rekimoto, Professor, Interfaculty Initiative in Information Studies, the University of Tokyo (Chair)
Ken Sakamura, Professor, Interfaculty Initiative in Information Studies, the University of Tokyo
Noboru Koshizuka, Professor, Interfaculty Initiative in Information Studies, the University of Tokyo
Takeo Igarashi, Professor, Graduate School of Information Science and Technology, the University of Tokyo
Hiroyuki Shinoda, Professor, Graduate School of Frontier Sciences, the University of Tokyo

Admissions Information Sessions for the Graduate School of Interdisciplinary Information Studies, the University of Tokyo

Admissions information sessions for the Interfaculty Initiative in Information Studies / Graduate School of Interdisciplinary Information Studies, the University of Tokyo, to which the Rekimoto Laboratory belongs, and for the Applied Computer Science Course (総合分析情報学コース) will be held on May 31.

Admissions information session for the Applied Computer Science Course, Interfaculty Initiative in Information Studies, the University of Tokyo
Date and time: Saturday, May 31, 2014, 10:00-11:30
Venue: Daiwa Ubiquitous Computing Research Building, Hongo Campus, the University of Tokyo
Advance registration: not required
Details here

Admissions information session for the Interfaculty Initiative in Information Studies / Graduate School of Interdisciplinary Information Studies, the University of Tokyo
Date and time: Saturday, May 31, 2014, 13:00-17:00
Venue: Fukutake Hall B2F, Hongo Campus, the University of Tokyo
Advance registration: not required
Details here

Augmented Human 2014

We will present the following papers at Augmented Human 2014 (March 7-9, 2014, Kobe, Japan):

  • Around Me: A System for Providing Sports Player’s Self-Images with an Escort Robot
    Junya Tominaga, Kensaku Kawauchi, and Jun Rekimoto [AH2014 Best Paper Award]

  • HoverBall: Augmented Sports with a Flying Ball
    Kei Nitta, Keita Higuchi, and Jun Rekimoto

  • JackIn: Integrating the First-Person View with Out-of-Body Vision Generation for Human-Human Augmentation
    Shunichi Kasahara and Jun Rekimoto [AH2014 Best Presentation Award]