Topic: Uncertainty Quantification for Semantic Segmentation in Deep Learning

Personal details

Title Uncertainty Quantification for Semantic Segmentation in Deep Learning
Description

Semantic segmentation assigns a class label to each pixel of an image and is widely applied in domains such as autonomous driving, robotics, and medical imaging. While modern deep learning models achieve high segmentation accuracy, they often produce overconfident predictions, even for uncertain or unseen inputs. This poses a major risk in safety-critical applications.

This thesis aims to investigate and evaluate uncertainty quantification (UQ) techniques for semantic segmentation. The student will implement and compare existing UQ methods on state-of-the-art segmentation networks, perform systematic evaluations, and analyze the reliability of the resulting uncertainty estimates.

Beyond benchmarking, motivated students are encouraged to explore extensions and novel ideas, such as:

  • Developing boundary-aware UQ metrics that separate natural aleatoric uncertainty (e.g., at object edges) from epistemic model uncertainty (see the decomposition sketch below).
  • Proposing lightweight UQ techniques that reduce computational overhead compared to ensembles.
  • Designing uncertainty-driven improvements for downstream tasks such as active learning, post-processing, or failure detection.

The project thus offers a balance between solid empirical study and the opportunity to make research-level contributions.
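
For orientation, the following is a minimal sketch (in PyTorch; not part of the official topic description) of the standard entropy-based decomposition behind the first extension idea: total predictive uncertainty splits into an aleatoric term (the expected entropy of individual predictions) and an epistemic term (the mutual information between prediction and model parameters). The tensor shapes and the function name are illustrative assumptions.

    import torch

    def decompose_uncertainty(probs: torch.Tensor, eps: float = 1e-8):
        """probs: (T, C, H, W) softmax outputs from T stochastic forward passes.

        Returns per-pixel (H, W) uncertainty maps:
          total     = H[ mean_t p_t ]    (predictive entropy)
          aleatoric = mean_t H[ p_t ]    (expected entropy)
          epistemic = total - aleatoric  (mutual information)
        """
        mean_p = probs.mean(dim=0)                                    # (C, H, W)
        total = -(mean_p * (mean_p + eps).log()).sum(dim=0)           # (H, W)
        aleatoric = -(probs * (probs + eps).log()).sum(dim=1).mean(dim=0)
        epistemic = total - aleatoric
        return total, aleatoric, epistemic

High epistemic values flag pixels where the model itself is unsure (e.g., on unfamiliar objects), while high aleatoric values concentrate on inherently ambiguous regions such as object boundaries.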

Home institution Department of Computing Science
Type of work practical / application-focused
Type of thesis Bachelor's or Master's thesis
Author Prof. Dr. Chih-Hong Cheng
Status available
Problem statement

Deep neural networks are powerful but often unreliable at expressing their confidence. Standard semantic segmentation models output only class probabilities, which are frequently poorly calibrated and fail to indicate when the model is uncertain. Without reliable uncertainty estimates, it is difficult to decide whether a model's prediction can be trusted in safety-critical scenarios.

The thesis addresses the following challenges:

  • How can different UQ methods (e.g., Monte Carlo dropout, deep ensembles, and evidential learning) be adapted for semantic segmentation (see the sketch after this list)?
  • How reliable are the uncertainty estimates in terms of calibration, robustness, and interpretability?
  • Can uncertainty information improve downstream tasks, such as failure detection or active learning?
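
As a starting point for the first question, here is a minimal sketch of Monte Carlo dropout applied to a segmentation network, again in PyTorch and purely illustrative: model is assumed to be any nn.Module with dropout layers that returns logits of shape (1, C, H, W), and the function name is a placeholder.

    import torch
    import torch.nn as nn

    @torch.no_grad()
    def mc_dropout_predict(model: nn.Module, image: torch.Tensor, T: int = 20):
        """image: (1, 3, H, W). Returns (T, C, H, W) softmax samples."""
        model.eval()                           # freeze batch-norm statistics...
        for m in model.modules():              # ...but keep dropout sampling active
            if isinstance(m, (nn.Dropout, nn.Dropout2d)):
                m.train()
        samples = [torch.softmax(model(image), dim=1) for _ in range(T)]
        return torch.cat(samples, dim=0)       # batch size 1, so (T, C, H, W)

Averaging the samples and taking the per-pixel argmax gives the prediction, and feeding the stacked samples into the decomposition sketched in the description above yields the corresponding uncertainty maps.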
Requirements
  • Programming skills in Python.
  • Familiarity with deep learning frameworks (e.g., PyTorch or TensorFlow).
  • Background knowledge in machine learning and ideally computer vision.
  • Ability to work independently and critically evaluate experimental results.
Created 24/09/25

Study data

Degree programmes
  • Master Data Science and Machine Learning
  • Master's Programme Computing Science
  • Dual-Subject Bachelor's Programme Computing Science
  • Bachelor's Programme Computing Science