Laboratoire Révolutionnaire et Romantique, (Human Augmentation Lab.), Interfaculty Initiative in Information Studies, The University of Tokyo – 東京大学大学院情報学環 暦本研究室
At Human Augmentation 2026 (2026.3.15-17), we will present the following papers/talks:
Yuto Nakamura, Keigo Minamida, Masanobu Kanazawa, Koya Dendo and Jun Rekimoto. Augmented Leap: Human Jump Augmentation through Apparent Reduced Gravity
Qing Zhang, Zixiong Su, Yoshihito Kondoh, Kazunori Asada, Thad Starner, Kai Kunze, Yuta Itoh and Jun Rekimoto. OpticalAging: Real-time Presbyopia Simulation for Inclusive Design via Tunable Lenses
The paper titled “NoseKnowsNorth: Directional Cueing using Nose Vibration Stimulation for Smart Glasses” received the Best Paper Award at the 1st International Workshop on Virtual Reality for Human and Spatial Augmentation (VR-HSA), held as part of IEEE VR 2025 from March 8 to 12, 2025.
Best Paper Award:
NoseKnowsNorth: Directional Cueing using Nose Vibration Stimulation for Smart Glasses Yuto Nakamura, Kazuki Nishimoto, Akira Yui, Takuji Narumi, Jun Rekimoto
NoseKnowsNorth is a device that provides directional cues by applying vibration stimuli to the sides of the nose. By leveraging the nose’s high tactile sensitivity, it enables users to sense directions without relying on visual or auditory cues. The device can be embedded in the nose pads of eyeglasses or smart glasses, making it suitable for everyday use. For example, when riding a bicycle or driving a car, users can check directions without shifting their gaze to a smartphone, enabling safer navigation.
Conductive Fabric Diaphragm for Noise-Suppressive Headset Microphone (Demos) Hirotaka Hiraki, Shusuke Kanazawa, Takahiro Miura, Manabu Yoshida, Masaaki Mochimaru, Jun Rekimoto
Mapping Gaze and Head Movement via Salience Modulation and Hanger Reflex (Posters), Wanhui Li, Qing Zhang, Takuto Nakamura, Sinyu Lai, Jun Rekimoto
Piezoelectric Sensing of Mask Surface Waves for Noise-Suppressive Speech Input (Posters), Hirotaka Hiraki, Jun Rekimoto
The following papers received awards at the Augmented Humans 2024 International Conference, held in Melbourne, Australia, from April 4 to 6, 2024. The conference featured research on augmenting human capabilities through advanced technologies:
Serendipity Wall: A Discussion Support System Using Real-time Speech Recognition and Large Language Model Shota Imamura, Hirotaka Hiraki and Jun Rekimoto.
This research uses AI to stimulate human discussions by dynamically searching for context-relevant information based on participants’ free speech, captured through real-time speech recognition. It presents a user interface that displays discussion-relevant information, summarized by a large language model, on a large display without interrupting the discussion.
Aged Eyes: Optically Simulating Presbyopia Using Tunable Lenses Qing Zhang, Yoshihito Kondoh, Yuta Itoh and Jun Rekimoto.
This work reproduces and simulates visual phenomena such as presbyopia using liquid lenses that can dynamically adjust their optical power, contributing to universal design.
WhisperMask: a noise suppressive mask-type microphone for whisper speech Hirotaka Hiraki, Shusuke Kanazawa, Takahiro Miura, Manabu Yoshida, Masaaki Mochimaru and Jun Rekimoto
FastPerson: Enhancing Video Learning through Effective Video Summarization that Preserves Linguistic and Visual Contexts Kazuki Kawamura and Jun Rekimoto
SkillsInterpreter: A Case Study of Automatic Annotation of Flowcharts to Support Browsing Instructional Videos in Modern Martial Arts using Large Language Models Kotaro Oomori, Yoshio Ishiguro and Jun Rekimoto
Serendipity Wall: A Discussion Support System Using Real-time Speech Recognition and Large Language Model Shota Imamura, Hirotaka Hiraki and Jun Rekimoto
Exploring the Kuroko Paradigm: The Effect of Enhancing Virtual Humans with Reality Actuators in Augmented Reality Émilie Fabre, Jun Rekimoto and Yuta Itoh
Posters:
QA-FastPerson: Extending Video Platform Search Capabilities by Creating Summary Videos in Response to User Queries Kazuki Kawamura and Jun Rekimoto
Aged Eyes: Optically Simulating Presbyopia Using Tunable Lenses Qing Zhang, Yoshihito Kondoh, Yuta Itoh and Jun Rekimoto
At the IEEE VR 2024 workshop ‘Seamless Reality’, the following papers were presented:
HoloArm: A Face-Following 3D Display Using Autostereoscopic Display and Robot Arm, Koya Dendo, Yuta Itoh, Émilie Fabre, Jun Rekimoto
Pinching Tactile Display: A Remote Haptic Feedback System for Fabric Texture, Takekazu Kitagishi, Hirotaka Hiraki, Yoshio Ishiguro, Jun Rekimoto
Serendipity Wall: A Discussion Support System Using Real-time Speech Recognition and Natural Language Processing Technology, Shota Imamura, Hirotaka Hiraki, Jun Rekimoto
SUMART: SUMmARizing Translation from Wordy to Concise Expression, Naoto Nishida, Jun Rekimoto
The Kuroko Paradigm: Augmenting Avatars in Augmented Reality with Reality Actuators, Émilie Fabre, Yuta Itoh