inf5456 - Applied AI - Multimodal-Multisensor Interfaces I: Foundations, User Modeling, and Common Modality Combination (Complete module description)

Module label Applied AI - Multimodal-Multisensor Interfaces I: Foundations, User Modeling, and Common Modality Combination
Module code inf5456
Credit points 3.0 KP
Workload 90 h
Institute directory Department of Computing Science
Applicability of the module
  • Master's Programme Computing Science (Master) > Angewandte Informatik
Responsible persons
  • Sonntag, Daniel (module responsibility)
  • Module lecturers (authorised to take exams)
Prerequisites

basic concepts of Artificial Intelligence, Human-Computer Interfaces

Skills to be acquired in this module

Learning methods of multimodal interaction and core concepts of Human-Computer Interaction.

Professional competences
The students

  • work their way into the topic of multimodality,
  • develop an intuition for multimodal approaches (competences: basic concepts of multimodality, multimodal fusion techniques).

Methodological competences
The students

  • prepare a term paper on a special topic in the field of multimodality (competences: quick comprehension, structured literature review, precise expression).

Social competences
The students

  • choose a topic and interact with each other and with the supervising person (competences: communication skills, enthusiasm, initiative).

Self competences
The students

  • work independently in a supervised setting (competences: personal responsibility, analytical thinking, organization, time management).

Module contents

We look at relevant theory and neuroscience foundations for guiding the development of high-performance systems. We discuss approaches to user modeling, interface design that supports user choice, synergistic combination of modalities with sensors, and blending of multimodal input and output. We also take an in-depth look at the most common multimodal-multisensor combinations, for example touch and pen input, haptic and non-speech audio output, and speech co-processed with visible lip movements, gaze, gestures, or pen input. A common theme throughout is support for mobility and for individual differences among users, including the world's rapidly growing population of seniors.
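To give a flavour of the multimodal fusion techniques covered in the seminar, the following is a minimal late-fusion sketch (an illustrative example, not part of the module materials): two hypothetical recognizers, one for speech and one for pen gestures, each produce a confidence distribution over the same set of user intents, and a weighted geometric mean combines them into one joint decision. All names and numbers here are invented for illustration.

```python
# Minimal late-fusion sketch (illustrative only): each modality's recognizer
# returns a confidence score per candidate user intent; the fusion step
# combines them with a weighted geometric mean, a common late-fusion rule.

def late_fusion(modality_scores, weights):
    """Combine per-modality intent scores and return a normalized distribution."""
    intents = modality_scores[0].keys()
    fused = {}
    for intent in intents:
        score = 1.0
        for scores, w in zip(modality_scores, weights):
            score *= scores[intent] ** w  # weighted geometric mean
        fused[intent] = score
    total = sum(fused.values())
    return {intent: s / total for intent, s in fused.items()}

# Hypothetical outputs of a speech and a pen-gesture recognizer
speech = {"zoom_in": 0.7, "zoom_out": 0.2, "select": 0.1}
pen    = {"zoom_in": 0.5, "zoom_out": 0.1, "select": 0.4}

# Weight speech slightly higher than pen input
fused = late_fusion([speech, pen], weights=[0.6, 0.4])
best = max(fused, key=fused.get)  # both modalities agree on "zoom_in"
```

The geometric mean rewards intents on which the modalities agree, which is one simple way the "synergistic combination of modalities" mentioned above can be operationalized; real systems add temporal alignment and uncertainty handling on top.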

Recommended reading

The Handbook of Multimodal-Multisensor Interfaces: Signal Processing, Architectures, and Detection of Emotion and Cognition - Volume 1 (https://dl.acm.org/doi/book/10.1145/3015783)

Language of instruction English
Duration (semesters) 1 Semester
Module frequency regular
Module capacity 12
Teaching/Learning method S
Examination Final exam of module
Examination times at the end of the lecture period
Type of examination oral exam or portfolio or presentation

Type of course Seminar
SWS 2
Frequency summer and winter semester