CHI2024 Presentations

At CHI2024, the following papers will be presented:

Watch Your Mouth: Silent Speech Recognition with Depth Sensing (full paper)
Xue Wang, Zixiong Su, Jun Rekimoto, Yang Zhang

FoodSkin: Fabricating Edible Gold Leaf Circuits on Food Surfaces (full paper)
Kunihiro Kato, Kaori Ikematsu, Hiromi Nakamura, Hinako Suzaki, Yuki Igarashi

Multiple paper awards at the Augmented Humans 2024 International Conference

The following paper awards were received at Augmented Humans 2024, an international conference on augmenting human capabilities through advanced technologies, held in Melbourne, Australia, April 4-6, 2024:



Best Paper Honorable Mention:

Serendipity Wall: A Discussion Support System Using Real-time Speech Recognition and Large Language Model
Shota Imamura, Hirotaka Hiraki and Jun Rekimoto.

This research explores how AI can stimulate human discussion. Participants' free-form speech is transcribed with speech recognition and used to dynamically retrieve papers related to the current context of the discussion; a large language model then summarizes the retrieved papers, and the summaries are shown on a large display in the discussion space, so that related information is presented without interrupting the conversation.

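Since the description above outlines the system's pipeline (speech recognition, context-based retrieval, LLM summarization, display), a minimal self-contained sketch of the retrieval step is given below. Everything in it is an illustrative stand-in, not the authors' implementation: the paper entries are made up, the bag-of-words embedding replaces whatever sentence encoder the real system uses, and the LLM summarization and wall display are only noted in comments.

```python
# Toy sketch of "retrieve papers related to what is being discussed".
# All names and data here are hypothetical; the actual Serendipity Wall
# presumably uses a real speech recognizer, a neural text embedding,
# and an LLM to summarize each retrieved paper for the wall display.
import math
from collections import Counter

PAPERS = {
    "Paper A": "silent speech recognition with depth sensing on mobile devices",
    "Paper B": "edible circuits fabricated from gold leaf on food surfaces",
    "Paper C": "optimizing playback speed using speech recognition models",
}

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words term frequencies."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def related_papers(transcript_chunk: str, k: int = 2) -> list[str]:
    """Rank papers by similarity to the latest chunk of recognized speech.
    In the real system each hit would then be summarized by an LLM and
    shown on the shared display without interrupting the discussion."""
    query = embed(transcript_chunk)
    ranked = sorted(PAPERS, key=lambda t: cosine(query, embed(PAPERS[t])), reverse=True)
    return ranked[:k]

print(related_papers("we were just discussing silent speech input for phones"))
```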

Best Poster Award:

Aged Eyes: Optically Simulating Presbyopia Using Tunable Lenses
Qing Zhang, Yoshihito Kondoh, Yuta Itoh and Jun Rekimoto.

Using liquid lenses whose optical power (eyeglass prescription strength) can be adjusted dynamically, this work reproduces visual phenomena such as presbyopia so that they can be experienced first-hand, contributing to the realization of universal design.

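For a rough sense of the quantities involved, the toy calculation below (not from the paper; the function and numbers are purely illustrative) shows how much defocus a presbyopic eye leaves uncorrected at a given viewing distance, which is the kind of optical power offset a dynamically tunable lens would need to work with.

```python
# Toy diopter arithmetic; illustrative only, not the authors' method.
# Focusing at a distance d (in metres) demands 1/d diopters of accommodation;
# an eye whose accommodation amplitude is A diopters can supply at most A,
# so the residual defocus when looking at distance d is max(0, 1/d - A).
def presbyopic_defocus(d_metres: float, amplitude_diopters: float) -> float:
    demand = 1.0 / d_metres
    return max(0.0, demand - amplitude_diopters)

# Example: reading at 40 cm demands 2.5 D; with only 1 D of remaining
# accommodation (advanced presbyopia), about 1.5 D of blur is left.
print(presbyopic_defocus(0.4, 1.0))  # 1.5
```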

Augmented Humans 2024

At Augmented Humans 2024, we present the following papers and posters:

Papers:

WhisperMask: A Noise Suppressive Mask-Type Microphone for Whisper Speech
Hirotaka Hiraki, Shusuke Kanazawa, Takahiro Miura, Manabu Yoshida, Masaaki Mochimaru and Jun Rekimoto

FastPerson: Enhancing Video Learning through Effective Video Summarization that Preserves Linguistic and Visual Contexts
Kazuki Kawamura and Jun Rekimoto

SkillsInterpreter: A Case Study of Automatic Annotation of Flowcharts to Support Browsing Instructional Videos in Modern Martial Arts using Large Language Models
Kotaro Oomori, Yoshio Ishiguro and Jun Rekimoto

Serendipity Wall: A Discussion Support System Using Real-time Speech Recognition and Large Language Model
Shota Imamura, Hirotaka Hiraki and Jun Rekimoto

Exploring the Kuroko Paradigm: The Effect of Enhancing Virtual Humans with Reality Actuators in Augmented Reality
Émilie Fabre, Jun Rekimoto and Yuta Itoh

Posters:

QA-FastPerson: Extending Video Platform Search Capabilities by Creating Summary Videos in Response to User Queries
Kazuki Kawamura and Jun Rekimoto

Aged Eyes: Optically Simulating Presbyopia Using Tunable Lenses
Qing Zhang, Yoshihito Kondoh, Yuta Itoh and Jun Rekimoto

IEEE VR 2024 Workshop ‘Seamless Reality’

At the IEEE VR 2024 workshop ‘Seamless Reality’, the following papers are presented:

HoloArm: A Face-Following 3D Display Using Autostereoscopic Display and Robot Arm, Koya Dendo, Yuta Itoh, Émilie Fabre, Jun Rekimoto

Pinching Tactile Display: A Remote Haptic Feedback System for Fabric Texture, Takekazu Kitagishi, Hirotaka Hiraki, Yoshio Ishiguro, Jun Rekimoto

Serendipity Wall: A Discussion Support System Using Real-time Speech Recognition and Natural Language Processing Technology, Shota Imamura, Hirotaka Hiraki, Jun Rekimoto

SUMART: SUMmARizing Translation from Wordy to Concise Expression, Naoto Nishida, Jun Rekimoto

The Kuroko Paradigm: Augmenting Avatars in Augmented Reality with Reality Actuators, Émilie Fabre, Yuta Itoh

INTERACTION 2024

At INTERACTION 2024 (held March 6-8, 2024), we will present the following papers:

FastPerson: Lecture Video Summarization Based on Visual and Audio Information for a User-Centered Learning Experience
Kazuki Kawamura, Jun Rekimoto (The University of Tokyo / Sony CSL)

WhisperMask: A Mask-Type Microphone That Enables Voice Input in Noisy Environments
Hirotaka Hiraki (The University of Tokyo / AIST), Shusuke Kanazawa, Takahiro Miura, Manabu Yoshida, Masaaki Mochimaru (AIST), Jun Rekimoto (The University of Tokyo / Sony CSL)

Serendipity Wall: A Discussion Support System Using Vector Search of Conversation Transcripts and a Large Language Model
Shota Imamura (The University of Tokyo), Hirotaka Hiraki (The University of Tokyo / AIST), Jun Rekimoto (The University of Tokyo / Sony CSL)

UIST 2023


At ACM UIST 2023 (Oct 29 – Nov 1, 2023, San Francisco, CA, USA), we will present the following work:

Takekazu Kitagishi, Yuichi Hiroi, Yuna Watanabe, Yuta Itoh, Jun Rekimoto, Telextiles: End-to-end Remote Transmission of Fabric Tactile Sensation (paper and demo)

CHI2023 Presentations

At ACM CHI2023, we present the following papers and Interactivity demonstrations:

Papers:

  1. Zixiong Su, Shitao Fang, and Jun Rekimoto, LipLearner: Customizable Silent Speech Interactions on Mobile Devices, [Best Paper Award] [Link] [Project Page]
  2. Jun Rekimoto, WESPER: Zero-shot and Realtime Whisper to Normal Voice Conversion for Whisper-based Speech interactions, [Link] [Project Page]

Interactivity:

  1. Yuna Watanabe, Xi Laura Cang, Rúbia Reis Guerra, Devyani McLaren, Preeti Vyas, Jun Rekimoto, Karon E MacLean, Demonstrating Virtual Teamwork with Synchrobots: A Robot-Mediated Approach to Improving Connectedness, [Link]
  2. Hirotaka Hiraki, Shusuke Kanazawa, Takahiro Miura, Manabu Yoshida, Masaaki Mochimaru, Jun Rekimoto, External noise reduction using WhisperMask, a mask-type wearable microphone, [Link]
  3. Kei Asano, Naoki Kimura, Jun Rekimoto, HMDspeller: Fast and Hands-free Text Entry System for Head Mount Displays using Silent Spelling Recognition, [Link]
  4. Kunihiro Kato, Ami Motomura, Kaori Ikematsu, Hiromi Nakamura, Yuki Igarashi, Demonstrating FoodSkin: A Method for Creating Electronic Circuits on Food Surfaces by Using Edible Gold Leaf for Enhancement of Eating Experience, [Link]

Augmented Humans 2023


At Augmented Humans 2023 (March 12-14, 2023, Glasgow, UK), we present the following research:

  • Kazuki Kawamura and Jun Rekimoto, AIx speed: Playback Speed Optimization using Listening Comprehension of Speech Recognition Models (paper)
  • Yuto Koike, Yuichi Hiroi, Yuta Itoh, and Jun Rekimoto, Brain-Computer Interface using Directional Auditory Perception (poster) [Best Poster Honorable Mention]

IEEE ICMLA 2022

At IEEE ICMLA 2022 (21st IEEE International Conference on Machine Learning and Applications), the following paper is presented:

DDSupport: Language Learning Support System that Displays Differences and Distances from Model Speech
Kazuki Kawamura, Jun Rekimoto