Thesis Topics

This list includes topics for potential bachelor or master theses, guided research, projects, seminars, and other activities. Use Ctrl+F to search for keywords of interest, e.g. ‘machine learning’.

PLEASE NOTE: If you are interested in any of these topics, click the respective supervisor link and send a message with a short CV, your grade sheet, and topic ideas (if any). We will reply shortly.

Of course, your own ideas are always welcome!


RNA Sequence Design using Deep Learning

Type of Work:

  • Master

Keywords:

  • bioinformatics
  • deep learning
  • neural networks
  • RNA design
  • sequence optimization

Description:

The goal of this project is to use deep learning methods to design RNA sequences with specific desired functions. Traditional RNA design relies on complex rules and manual optimization, which can be slow and limited. This thesis will explore how neural networks can learn patterns from existing RNA data to automatically generate new sequences that fold into target structures or perform specific biological tasks.

The project will focus on training deep learning models on RNA sequence-structure datasets and developing methods to generate functional RNA molecules. The approach will combine sequence generation techniques with structure prediction to ensure the designed RNAs can actually fold correctly and work as intended.
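As an illustration of the sequence–structure consistency check mentioned above, the following minimal sketch tests whether a candidate RNA sequence is even compatible with a target secondary structure given in dot-bracket notation (every paired position must hold complementary bases); the sequence, structure, and function name are illustrative assumptions, not part of the project specification:

```python
# Illustrative sketch: a designed sequence that fails this complementarity
# check cannot fold into the target secondary structure.

COMPLEMENTS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
               ("G", "U"), ("U", "G")}  # Watson-Crick plus wobble pairs

def is_compatible(sequence: str, structure: str) -> bool:
    """Return True if all base pairs implied by `structure` are complementary."""
    if len(sequence) != len(structure):
        raise ValueError("sequence and structure must have equal length")
    stack = []
    for i, symbol in enumerate(structure):
        if symbol == "(":
            stack.append(i)
        elif symbol == ")":
            j = stack.pop()
            if (sequence[j], sequence[i]) not in COMPLEMENTS:
                return False
    return not stack  # unbalanced structure is also incompatible

# Example: a 4-nt loop closed by a 4-bp stem
print(is_compatible("GACGAAAACGUC", "((((....))))"))  # True
```

In the actual project such a hard constraint would be replaced or complemented by a learned structure-prediction model scoring the generated sequences.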

References:

  • RNA design rules from a massive open laboratory
  • Improved RNA secondary structure prediction by maximizing expected accuracy

Importance-Sampled Coresets via Neural Image Compression

Type of Work:

  • Guided Research
  • Master

Keywords:

  • coreset selection
  • deep learning
  • neural image compression

Description:

The goal of this project is to explore the intersection of coreset selection [1] and neural image compression [2] for data-efficient training in deep learning. Specifically, the thesis will investigate importance-sampled coresets based on the compressibility of input samples. The core idea is that the ease with which an image can be compressed by a neural compression model may reflect its redundancy or informativeness. By analyzing the latent representations and compression performance (e.g., reconstruction error, bitrate) of a neural compressor, the project will aim to define an importance metric. This metric will then be used to select a subset of the training data, the coreset, that is representative yet compact.
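The pipeline above can be sketched end to end. In this toy version a rank-8 PCA reconstruction stands in for the neural compressor, per-sample reconstruction error is the assumed importance metric, and the coreset is drawn by importance sampling; all of these concrete choices are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))  # 200 samples, 64-dim features

# Fit a low-rank "compressor" (PCA surrogate for a trained neural codec).
mean = X.mean(axis=0)
Xc = X - mean
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
V = Vt[:8].T                               # 8 principal directions

recon = (Xc @ V) @ V.T + mean              # "decompressed" samples
errors = np.square(X - recon).sum(axis=1)  # per-sample distortion

# Importance sampling: harder-to-compress samples are selected more often.
probs = errors / errors.sum()
coreset_idx = rng.choice(len(X), size=50, replace=False, p=probs)
print(coreset_idx.shape)  # (50,)
```

With a real neural compressor, the bitrate of the entropy-coded latent could be used instead of (or together with) the reconstruction error.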


Efficient Optimization with Multi-Level Gradient Accumulation

Type of Work:

  • Master

Keywords:

  • machine learning
  • optimization

Description:

Multi-level methods are widely used in numerical analysis to solve problems efficiently by combining solutions across coarse and fine resolutions (levels). This project explores how a similar idea can be applied to gradient-based optimization in deep learning: gradients are first computed on coarse levels (e.g. low resolution or small size) using a large batch size, then refined using residual gradients from finer levels. The goal is to improve the quality of gradient estimates while reducing the computational cost of high-resolution training. The student will implement this approach in JAX and test it on models for classification or generative tasks. A background in deep learning and an interest in optimization techniques are important; familiarity with Python, JAX/PyTorch, and NumPy is a plus but not strictly required.
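A minimal two-level version of this scheme can be sketched for a least-squares model: a cheap coarse gradient (pooled inputs, large batch) is corrected by a residual gradient (fine minus coarse) estimated on a small batch. The 2x average-pooling coarsening, batch sizes, and function names are illustrative assumptions, not the project's prescribed design:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 4096, 32
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def grad(w, Xb, yb):
    """Gradient of 0.5 * mean squared error for a linear model."""
    return Xb.T @ (Xb @ w - yb) / len(yb)

def coarsen(Xb):
    """2x average pooling along the feature axis, upsampled back to size d."""
    pooled = Xb.reshape(len(Xb), -1, 2).mean(axis=2)
    return np.repeat(pooled, 2, axis=1)

w = np.zeros(d)
big = rng.choice(n, size=2048, replace=False)   # large batch, coarse level
small = rng.choice(n, size=128, replace=False)  # small batch, fine level

g_coarse = grad(w, coarsen(X[big]), y[big])
residual = grad(w, X[small], y[small]) - grad(w, coarsen(X[small]), y[small])
g = g_coarse + residual  # two-level gradient estimate
print(g.shape)  # (32,)
```

The estimate stays unbiased for the fine-level gradient in expectation because the coarse term cancels: E[g] = E[coarse] + E[fine] - E[coarse].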


Pruning Image Super-Resolution Models by Removing Unnecessary ReLU Activations

Type of Work:

  • Guided Research
  • Master

Keywords:

  • Deep Learning
  • Image Processing
  • Image Super-Resolution

Description:

This work investigates the optimization of image super-resolution neural network architectures by removing ReLU and other noise-canceling activation layers. The resulting method should merge the convolution layers surrounding each removed activation layer into a single convolution layer, reducing redundancy and improving computational efficiency. As a starting point, the selection of ReLU layers for removal will be based on an analysis of their activation distributions (non-noise-canceled vs. noise-canceled) on a representative dataset.
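The merging step rests on a standard linear-algebra fact: once the activation between two convolutions is removed, the pair is linear and collapses into a single convolution whose kernel is the convolution of the two original kernels. A minimal 1-D NumPy sketch (kernel sizes and signal are arbitrary illustrative choices; 2-D CNN layers with biases need the analogous 2-D kernel fusion plus bias propagation):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=100)   # input signal (stand-in for image rows)
k1 = rng.normal(size=5)    # first convolution kernel
k2 = rng.normal(size=3)    # second convolution kernel

two_layers = np.convolve(np.convolve(x, k1), k2)  # conv -> (no ReLU) -> conv
merged_kernel = np.convolve(k1, k2)               # fused kernel, size 5+3-1
one_layer = np.convolve(x, merged_kernel)         # single merged convolution

print(np.allclose(two_layers, one_layer))  # True
```

Note that the fused kernel is larger (here 7 taps instead of 5 and 3), so the efficiency gain must be evaluated per architecture; for 1x1 convolutions the fusion is a plain matrix product and always shrinks the network.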


Combining Dynamic Attention-Guided Diffusion and Wavelet-Based Diffusion for Image Super-Resolution

Type of Work:

  • Guided Research
  • Master

Keywords:

  • deep learning
  • single image super-resolution
  • vision transformer

Description:

This thesis focuses on merging two techniques developed in our group [1, 2]. The first component, Dynamic Attention-Guided Diffusion, allows selective diffusion across regions of interest in the image, driven by time-dependent attention mechanisms. This method ensures that only certain parts of the image are diffused at specific time steps, enhancing focus on critical image regions. The second component, Wavelet-Based Diffusion, introduces image processing in the frequency domain via the discrete wavelet transform (DWT). Instead of working in the pixel domain, this method applies diffusion in the frequency domain, effectively capturing and enhancing multiscale image details. By combining these approaches, this work will explore the synergy of frequency-domain wavelet transforms with dynamic, time-based attention in diffusion models. The research aims to produce sharper, high-resolution images by diffusing across relevant areas in both the spatial and frequency domains, leading to more efficient and accurate super-resolution (SR) results.
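To make the frequency-domain component concrete, the sketch below implements a one-level 2-D Haar wavelet transform, splitting an image into the approximation (LL) and detail (LH, HL, HH) subbands in which wavelet-based diffusion would operate, with exact reconstruction. In practice a library such as PyWavelets would be used; this hand-rolled version only illustrates the subband structure:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar DWT: returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2  # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2  # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d: exact reconstruction from the four subbands."""
    a = np.empty((LL.shape[0], 2 * LL.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.empty((2 * a.shape[0], a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

rng = np.random.default_rng(3)
img = rng.normal(size=(8, 8))
subbands = haar2d(img)
print(np.allclose(ihaar2d(*subbands), img))  # True
```

The combined method would then apply the attention-gated diffusion steps per subband rather than per pixel block.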
