We look at relevant theory and neuroscience foundations for guiding the development of high-performance systems. We discuss approaches to user modeling, interface design that supports user choice, synergistic combination of modalities with sensors, and blending of multimodal input and output. We also take an in-depth look at the most common multimodal-multisensor combinations, for example touch and pen input, haptic and non-speech audio output, and speech co-processed with visible lip movements, gaze, gestures, or pen input. A common theme throughout is support for mobility and individual differences among users, including the world's rapidly growing population of seniors.
This seminar is most appropriate for graduate students and of primary interest to students of computer science and information technology, human–computer interfaces, mobile and ubiquitous interfaces, and related multidisciplinary majors.
The central element of the seminar is the reference book "The Handbook of Multimodal-Multisensor Interfaces: Signal Processing, Architectures, and Detection of Emotion and Cognition - Volume 1" (https://dl.acm.org/doi/book/10.1145/3015783). The seminar begins with an introduction to the subject. Each participant will be assigned a chapter, for which a presentation (30 min + 30 min discussion) and a written report (5-10 pages) are to be prepared.
Contact: Ilira Troshani, firstname.lastname@example.org