Interactions based on automatic speech recognition (ASR) have become widely used, and speech input is increasingly used to create documents. However, because speech offers no easy way to distinguish commands from text to be entered, misrecognitions are difficult to identify and correct, so documents must be edited and corrected manually. Entering symbols and commands is also challenging because they may be misrecognized as ordinary text. To address these problems, this study proposes a speech interaction method called DualVoice, in which commands are spoken in a whispered voice and text in a normal voice. The method requires no specialized hardware beyond a regular microphone, enabling completely hands-free interaction, and it can be used in a wide range of situations where speech recognition is already available, from text input to mobile and wearable computing. Two neural networks were designed in this study: one that discriminates normal speech from whispered speech, and one that recognizes whispered speech.
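The core interaction can be pictured as a dispatch loop: each utterance is first classified as normal or whispered, then routed to text entry or command handling accordingly. The sketch below illustrates this idea only; the classifier and ASR networks are replaced by stand-ins (a precomputed `is_whisper` flag and pre-transcribed text), and the command set ("delete") is hypothetical, not the paper's actual command vocabulary.

```python
# Minimal sketch of a DualVoice-style dispatch loop. Whispered
# utterances are interpreted as editing commands; normal-voice
# utterances become document text. In the real system, two neural
# networks supply the is_whisper decision and the transcript.

def dispatch(events):
    """events: list of (transcript, is_whisper) pairs -> document text."""
    words = []
    for transcript, is_whisper in events:
        if is_whisper:
            # Whispered speech is treated as a command, never as text.
            if transcript == "delete" and words:
                words.pop()  # hypothetical command: remove last word
            # Further commands (punctuation, undo, ...) would go here.
        else:
            # Normal speech is inserted literally into the document.
            words.extend(transcript.split())
    return " ".join(words)

doc = dispatch([
    ("hello world", False),
    ("delete", True),       # whispered: executed, not inserted
    ("everyone", False),
])
# doc == "hello everyone"
```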


For whispered speech recognition, we use HuBERT. Its feature extractor is also reused for the normal/whisper classification network.
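One way to reuse extracted features for classification is to pool the frame-level representations and feed them to a small binary head. The sketch below assumes HuBERT-Base-style features (768-dimensional frames) and uses an untrained logistic-regression head; the class name, pooling choice, and dimensions are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

FEATURE_DIM = 768  # hidden size of HuBERT-Base features (assumption)

def mean_pool(frames: np.ndarray) -> np.ndarray:
    """Pool variable-length frame features (T, D) into one vector (D,)."""
    return frames.mean(axis=0)

class WhisperClassifierHead:
    """Hypothetical logistic-regression head over pooled features.

    In practice the weights would be trained on labeled normal vs.
    whispered utterances; here they are random for illustration.
    """

    def __init__(self, dim: int = FEATURE_DIM, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.01, size=dim)
        self.b = 0.0

    def predict_proba(self, frames: np.ndarray) -> float:
        """Return P(whisper) for one utterance's frame features."""
        z = float(self.w @ mean_pool(frames) + self.b)
        return 1.0 / (1.0 + np.exp(-z))

# Dummy features standing in for the feature extractor's output
# (50 frames of a single utterance).
frames = np.random.default_rng(1).normal(size=(50, FEATURE_DIM))
head = WhisperClassifierHead()
p = head.predict_proba(frames)  # probability the utterance is whispered
```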



  1. Jun Rekimoto. DualVoice: A Speech Interaction Method Using Whisper Voice as Commands. ACM CHI 2022 Extended Abstracts. [ACMDL]
  2. Jun Rekimoto. DualVoice: Speech Interaction that Discriminates between Normal and Whispered Voice Input. ACM UIST 2022. [ACMDL] [arXiv]