10.1184/R1/6622268.v1
Brad Myers
Robert Malkin
Michael Bett
Alexander Waibel
Ben Bostwick
Robert Miller
Jie Yang
Matthias Denecke
Edgar Seemann
Jie Zhu
Choon Hong Peck
Dave Kong
Jeffrey Nichols
Bill Scherlis
Flexi-modal and Multi-Machine User Interfaces
Carnegie Mellon University
2002
Multi-modal interfaces
speech recognition
gesture recognition
handwriting recognition
gaze tracking
handhelds
personal digital assistants (PDAs)
laser pointers
computer supported collaborative work (CSCW)
Journal contribution
https://kilthub.cmu.edu/articles/journal_contribution/Flexi-modal_and_Multi-Machine_User_Interfaces/6622268
<p>We describe a system that facilitates collaboration using multiple modalities, including speech, handwriting, gestures, gaze tracking, direct manipulation, large projected touch-sensitive displays, laser pointer tracking, regular monitors with a mouse and keyboard, and wirelessly networked handhelds. Our system allows multiple, geographically dispersed participants to simultaneously and flexibly mix different modalities, using the right interface at the right time on one or more machines. This paper discusses each of the modalities provided, how they were integrated in the system architecture, and how the user interface enables one or more people to flexibly use one or more devices.</p>