inf5456 - Applied AI - Multimodal-Multisensor Interfaces I: Foundations, User Modeling, and Common Modality Combination (Complete module description)
Module label | Applied AI - Multimodal-Multisensor Interfaces I: Foundations, User Modeling, and Common Modality Combination |
Module code | inf5456 |
Credit points | 3.0 KP |
Workload | 90 h |
Institute directory | Department of Computing Science |
Applicability of the module |
|
Responsible persons |
|
Prerequisites | basic concepts of Artificial Intelligence, Human-Computer Interfaces |
Skills to be acquired in this module | Learning methods of multimodal interaction and concepts of Human-Computer Interaction.
Methodological competences
The students
Self competences
|
Module contents | We look at relevant theory and neuroscience foundations for guiding the development of high-performance systems. We discuss approaches to user modeling, interface design that supports user choice, synergistic combination of modalities with sensors, and blending of multimodal input and output. We also take an in-depth look at the most common multimodal-multisensor combinations, for example touch and pen input, haptic and non-speech audio output, and speech co-processed with visible lip movements, gaze, gestures, or pen input. A common theme throughout is support for mobility and individual differences among users, including the world's rapidly growing population of seniors. |
Recommended reading | The Handbook of Multimodal-Multisensor Interfaces: Signal Processing, Architectures, and Detection of Emotion and Cognition - Volume 1 (https://dl.acm.org/doi/book/10.1145/3015783) |
Links | |
Language of instruction | English |
Duration (semesters) | 1 Semester |
Module frequency | regular |
Module capacity | 12 |
Teaching/Learning method | S |
Examination | Examination times | Type of examination |
---|---|---|
Final exam of module | at the end of the lecture period | oral exam or portfolio or presentation |
Type of course | Seminar |
SWS | 2 |
Frequency | summer and winter semester |