Seminar Applied Artificial Intelligence
Content
Selected topics from the field of socio-technical knowledge (see the topics of the lecture Collaborative Intelligence). The seminar teaches students how to write and present a scientific paper on a specific topic. Students are also introduced to conducting a literature review of scientific papers. The final presentation takes place in the form of a block event. More details will be given in the obligatory introductory meeting.
Requirements
This seminar is offered to both Bachelor and Master students. Registration via OLAT is required; the access code will be given in the introductory meeting. As we provide each student with a topic and a tutor, the number of seminar places is limited by the number of available topics.
Materials
You can find all course materials, news and information in OLAT. The access code for the OLAT course will be given in the introductory meeting.
Organisation
The final presentations of all students take place as a block event at the end of the semester.
There will be a mandatory introductory online meeting via BigBlueButton (BBB), where the course organization and the seminar topics will be presented and the OLAT access code will be published. The deadline for registration on OLAT is 23.10.2024.
- Introductory meeting: 22.10.2024, 10:00-11:00
- Meeting link: https://bbb.rlp.net/b/nat-g1u-mfm-mdv
- Participant Access code: 391026
During the lecture period, students will work on their topics; discussions with their supervisor will take place individually. We offer a mandatory one-hour course on scientific writing and working with LaTeX.
The paper is to be written in English and should be at least 10 pages long (Bachelor: 8 pages). The presentations, which are also given in English, take place at the end of the semester and last about 25 minutes each (Bachelor: 20 minutes), including questions. The final presentations will take place as an in-person event at the DFKI building. Students must use the provided mandatory LaTeX template for their seminar paper; an optional template is available for the presentation slides.
Topics
The format of the list is [Student Level Preference] Seminar Topic (Supervisor Name). The student level preference can be Bachelor, Master, or Any; Bachelor students are eligible for topics marked as Bachelor or Any. Each topic is followed by a short description.
After the introductory meeting, the Topic Assignment will take place via OLAT based on a topic preference survey.
[Master] Fairness in Computer Human Interaction (Ko Watanabe)
In recent years, trustworthy AI has become increasingly significant, and fairness is one of its key factors. Your task is to survey and map the role that "fairness" plays in the domain of computer-human interaction: explain how existing work tackles fairness and which metrics are used to demonstrate the fairness of AI models. How fair humans perceive AI models/systems to be is also in scope. Sample papers to read are available here ( https://cloud.dfki.de/owncloud/index.php/s/6y3xqe4zHkCnZj6 ).
[Any] Automated indicator recognition to assess local climate using the Matrix-Method (Julia Mayer)
Many different indicators are used to assess the local climate, including, for example, the degree of sealing, shading, and topography. The aim of this study is to investigate which of these indicators can be recorded (partially) automatically and under which exact conditions.
[Any] Foundation models for weather and climate forecasting (Dinesh Natarajan)
Due to climate change, there have been increasing occurrences of extreme weather events across the globe, such as flash floods, extreme heat waves, wildfires, and hurricanes. The development of early warning systems for such extreme weather events requires fast forecasting tools that can support preventive measures and emergency response. Conventional weather forecasting tools are computationally expensive and slow; therefore, modern deep learning approaches are being used to replace them for fast, real-time predictions. Foundation models are AI models trained on large datasets that can later be adapted to a variety of task-specific applications. In this seminar, you will systematically review the newly emerging foundation models trained on large amounts of spatio-temporal data for applications in weather and climate forecasting. You will compare the various deep learning architectures used in these models and their applications across different spatial and temporal scales.
[Master] Multi-modal similarity enhancement in remote sensing data (Francisco Mena)
Multi-source remote sensing data has been crucial for understanding and analyzing our Earth. Different sources (or sensors) can be used to improve different applications, such as crop-type classification and flood mapping. However, the redundant and complementary information in these sources is unclear, given their heterogeneous nature with different spatio-temporal-spectral resolutions. Some methods in the literature have focused on learning shared features between different sources to increase the extraction of similar information. In this seminar, the student will explore different machine (deep) learning models that focus on increasing the similarity in multi-source remote sensing data.
[Master] Concept-Based Explanations in Medical Imaging (Payal Varshney)
Deep Neural Networks have shown great potential in medical imaging, yet their "black-box" nature poses challenges for clinical acceptance and trust. Concept-based explanation methods interpret the decisions made by these models in terms of clinically relevant, human-understandable concepts. These explanations also help to identify biases within the model and to detect potential novel biomarkers, improving transparency in critical diagnostic processes. This seminar will provide a comprehensive review of recent advancements in concept-based explainability within medical imaging, focusing on methods such as Testing with Concept Activation Vectors (TCAV) and other interpretable frameworks. You will summarize the SOTA methods, highlight their limitations, and identify gaps in the existing methodology to improve interpretability.
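For orientation, the core of TCAV can be sketched in a few lines: a linear probe separates concept activations from random activations, its weight vector is the Concept Activation Vector (CAV), and the TCAV score is the fraction of examples whose class score increases in the concept direction. The sketch below assumes the layer activations and class-logit gradients have already been extracted; all names are illustrative placeholders, not a definitive implementation.

```python
# Minimal TCAV sketch, assuming pre-extracted activations and gradients.
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts, random_acts):
    """Train a linear probe separating concept from random activations;
    its (normalized) weight vector is the CAV."""
    X = np.vstack([concept_acts, random_acts])
    y = np.array([1] * len(concept_acts) + [0] * len(random_acts))
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_.ravel()
    return cav / np.linalg.norm(cav)

def tcav_score(class_gradients, cav):
    """Fraction of test examples whose class logit increases when the
    activation moves in the concept direction (directional derivative > 0)."""
    directional_derivatives = class_gradients @ cav
    return float((directional_derivatives > 0).mean())
```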
[Master] Disentangled Explainable AI Methods (Payal Varshney)
Explainable AI (XAI) methods are crucial for gaining trust and transparency in the decisions made by machine learning models. Various XAI approaches such as concept-based and counterfactual explanations offer valuable insights, but many tend to entangle important model features. This seminar will provide a comprehensive review of recent research on disentangled XAI methods, emphasizing approaches that isolate key features to provide clearer explanations. You will summarize the SOTA methods, the losses, and the evaluation metrics used to assess the quality of these disentangled explanations.
[Master] AI Regulatory Landscape in the United States vs the European Union (Jayanth Siddamsetty)
Understanding the AI regulatory landscapes in both the United States and the European Union is important for several reasons. These two regions are global leaders in AI development, and their regulations significantly influence international standards and practices. Knowing the differences helps researchers, developers, and policymakers navigate legal compliance, cross-border collaborations, and potential market entry strategies. It also allows for better preparation to adapt AI technologies to meet diverse regulatory requirements.
"Data augmentation is a technique used to artificially increase the size and diversity of a dataset by applying various transformations to existing data points. This helps to improve the generalization performance of deep learning models by reducing overfitting and making them more robust to variations in unseen data. Diffusion models, a type of generative model, can be used to generate high-quality, diverse synthetic data that can augment existing datasets. This seminar will explore the application of diffusion models for high quality data augmentation, which in turn enhances the capability of deep learning models. The student will investigate different approaches of using diffusion models for data augmentation. They will then compare and contrast these approaches qualitatively and quantitatively based on model performance and efficiency."[Master] Synthetic Ground-Truth Datasets in XAI (David Dembinsky)
[Master] Synthetic Ground-Truth Datasets in XAI (David Dembinsky)
Explainable AI (XAI) aims to make AI model decisions transparent and understandable. Synthetic datasets, in which the relationship between features or concepts and labels is carefully controlled, are commonly used to evaluate XAI methods. Assuming that the model (e.g. a neural network) performs at approximately 100% accuracy, it is safe to assume that it internally represents the rationale of the dataset. Therefore, these datasets allow researchers to test how well explanations capture predefined dependencies between input features and model predictions. Your task is to explore and describe different synthetic datasets used in XAI research, starting with resources like *, and discuss how they are constructed and how they help to assess the effectiveness of XAI techniques. *{https://arxiv.org/pdf/2206.11104v5}, {https://link.springer.com/chapter/10.1007/978-3-031-44064-9_25}, {https://arxiv.org/pdf/2005.01399}
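As a hedged illustration of the idea (not taken from the referenced papers), the sketch below constructs a toy synthetic dataset whose ground-truth rationale is known by design: only two features determine the label, so an attribution method evaluated on a near-perfect model can be checked against that ground truth.

```python
# Toy synthetic dataset with a known rationale: label = XOR of features 0 and 1.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 10_000, 10
X = rng.normal(size=(n_samples, n_features))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)   # only features 0 and 1 matter
ground_truth_features = {0, 1}

def attribution_hit_rate(attributions, k=2):
    """Share of samples whose top-k attributed features match the ground truth.
    'attributions' has shape (n_samples, n_features) and comes from any XAI method."""
    top_k = np.argsort(-np.abs(attributions), axis=1)[:, :k]
    return np.mean([set(row) == ground_truth_features for row in top_k])
```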
[Bachelor] Overview of XAI Methods for Images (David Dembinsky)
Explainable AI (XAI) seeks to make AI decisions understandable, especially in image-based models, which are among the most commonly studied. Various local, post-hoc methods generate explanations for individual decisions rather than for the entire model, using techniques like feature attributions, concept importance, and counterfactuals. Your task is to identify the most prevalent of these methods and explain how they work. A good starting point for your research would be {https://www.mdpi.com/2673-2688/4/3/33} and {https://christophm.github.io/interpretable-ml-book/}.
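To make the notion of a feature attribution concrete, here is a minimal sketch of a vanilla gradient saliency map in PyTorch, one of the simplest local, post-hoc methods; the model and tensor names are placeholders.

```python
# Vanilla gradient saliency: |d(score of target class) / d(input pixels)|.
import torch

def saliency_map(model, image, target_class):
    model.eval()
    image = image.clone().unsqueeze(0).requires_grad_(True)  # (1, C, H, W)
    score = model(image)[0, target_class]
    score.backward()
    return image.grad.abs().squeeze(0).max(dim=0).values     # (H, W) heatmap
```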
[Any] How to increase the Robustness of Neural Networks? (David Dembinsky)
Deep learning techniques, particularly Neural Networks, are renowned for their high performance on complex tasks. However, their intricate decision boundaries often lead to low robustness, making them vulnerable to adversarial attacks and out-of-distribution data. Your task is to explore and describe common techniques used to improve the robustness of neural networks in these scenarios. You can begin your research with the following references: {https://arxiv.org/pdf/2112.00639}, {https://www.sciencedirect.com/science/article/pii/S294985542300045X#sec4}, {https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8294186}.
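As a concrete reference point, a minimal sketch of the Fast Gradient Sign Method (FGSM), which is both a standard adversarial attack and, via adversarial training, a common way to improve robustness; the model and data tensors are placeholders.

```python
# FGSM: perturb the input one step in the direction of the loss gradient's sign.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()          # one step in the gradient-sign direction
    return x_adv.clamp(0, 1).detach()        # keep pixels in a valid range

# Adversarial training simply mixes such x_adv back into the training batches.
```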
[Master] Improving Object Detection in Crowded Scenes: Methods and Metrics (Nabeel Khalid)
This report will address the challenges of detecting objects in densely populated environments, where traditional methods often struggle due to factors such as occlusion, overlapping objects, and high object density. It will review advanced techniques like anchor-free detectors, occlusion-aware models, and attention mechanisms, all designed to enhance detection accuracy in crowded scenes. Additionally, the report will explore evaluation metrics such as mAP and GIoU, which are used to assess performance under these challenging conditions, with applications in fields like surveillance, autonomous driving, and medical imaging.
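For reference, Generalized IoU (GIoU) can be written down directly from its definition; the minimal sketch below computes it for two axis-aligned boxes in (x1, y1, x2, y2) format.

```python
# GIoU = IoU - (area of smallest enclosing box not covered by the union) / (enclosing area),
# so even non-overlapping boxes receive a graded score.
def giou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    enclosing = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return iou - (enclosing - union) / enclosing
```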
[Any] Recent Advances in Self-Supervised Learning for Time Series (Philipp Engler)
Time series data arises from sensors everywhere around us. Machine learning algorithms can help us make sense of this data, for instance by predicting the action a human is performing or by detecting faults in a machine. Machine learning models, such as neural networks, typically require large amounts of data for training. While labeled data can be expensive to acquire due to the need for human supervision or expensive measurements, unlabeled data is often available in larger quantities. Self-supervised representation learning methods, which allow models to be pre-trained on unlabeled data, have improved significantly and gained increased attention lately. We are interested in surveying recent literature on self-supervised representation learning in the time series domain to obtain an overview of the current state of the art.
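As a minimal illustration of the self-supervised idea, the sketch below pre-trains a placeholder encoder on unlabeled series by pulling two jittered views of each series together with an InfoNCE-style contrastive loss using in-batch negatives; the encoder and augmentation are deliberately simplistic assumptions, not a specific method from the literature.

```python
# Contrastive pre-training sketch for time series (no labels needed).
import torch
import torch.nn.functional as F

def jitter(x, sigma=0.03):
    return x + sigma * torch.randn_like(x)            # simple augmentation

def info_nce(z1, z2, tau=0.1):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                         # (B, B) similarity matrix
    labels = torch.arange(z1.size(0))                  # positives on the diagonal
    return F.cross_entropy(logits, labels)

encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(128, 64))
x = torch.randn(32, 1, 128)                            # unlabeled batch of series
loss = info_nce(encoder(jitter(x)), encoder(jitter(x)))
loss.backward()                                        # pre-training step (optimizer omitted)
```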
[Master] AI-Driven Educational Insights (Jayasankar Santhosh)
In today's rapidly evolving educational scenario, AI-driven learning analytics offers immense potential to enhance how we understand and improve learning experiences. By leveraging advanced AI technologies, educators can gain deeper insights into student behaviors, personalize learning pathways, and optimize educational outcomes. This approach not only empowers teachers with valuable data but also supports students in achieving their full potential through tailored feedback and adaptive learning environments. As AI continues to evolve, its integration into learning analytics promises to enhance the future of education, making it more responsive and effective for diverse learners.
[Any] Practices and Trends in Informed Machine Learning for Earth Observation (Miro Miranda Lorenz)
Despite its great success, machine learning can have its limits when dealing with insufficient training data. A potential solution is the additional integration of prior knowledge into the training process, which leads to the notion of informed machine learning. In this seminar, you will systematically review the common practices and trends of informed machine learning in earth observation applications.
[Any] Image generation: From single object to complex imaging (Duway Nicolas Lesmes Leon)
Generative models, more specifically diffusion models, have shown great potential for natural image synthesis with a high degree of generation control. Unlike natural imaging, cell microscopy imaging is composed of repetitive objects (cells) within single images. Cell microscopy datasets usually contain a high number of cell examples in a small number of images, which is a limitation for model training, since an image is normally taken as one training example. In this seminar, you will compile the research focused on image generation oriented to single-object training to later produce composed images, ideally in cell microscopy. The goal of this review is to collect, compare, and understand the alternatives for producing complex image setups from their smallest components.
[Any] GNNs in Time Series Anomaly Detection (Ensiye Tahaei)
Graph Neural Networks (GNNs) present a compelling method for detecting anomalies in time series data through their ability to analyze complex dependencies. This seminar explores how GNNs can enhance traditional anomaly detection methods. The student will develop a deep understanding of the principles of GNNs, investigate their practical applications, participate in analytical comparisons, and recognize their potential challenges. The goal is for the student to learn how GNNs can be integrated into existing frameworks to achieve more efficient and accurate results.
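As a minimal illustration of the building block such models repeat, the sketch below implements one graph-convolution step over a sensor graph, in which each node aggregates its neighbours' features through a normalized adjacency matrix; it is a generic sketch, not a specific anomaly-detection method.

```python
# One GCN-style message-passing step over a sensor graph.
import torch

def gcn_layer(H, A, W):
    """H: (N, F) node features, A: (N, N) adjacency, W: (F, F_out) weights."""
    A_hat = A + torch.eye(A.size(0))                  # add self-loops
    deg_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)
    A_norm = deg_inv_sqrt[:, None] * A_hat * deg_inv_sqrt[None, :]
    return torch.relu(A_norm @ H @ W)

# Anomaly scores are then typically derived from the reconstruction or
# forecasting error of a model built from such layers.
```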
[Master] Transformer-based Models in Image Super-Resolution (Brian Moser)
Transformer models have revolutionized various domains in deep learning, including natural language processing and vision tasks. Recently, transformers have also shown great promise in the domain of image super-resolution (SR), with the ability to capture long-range dependencies and model complex structures. The goal of this seminar is to provide a comprehensive survey of transformer-based models applied to image super-resolution, focusing on their architectures, the innovations in attention mechanisms, and their performance compared to traditional convolutional neural network (CNN)-based approaches. Furthermore, the survey should identify challenges and potential improvements in integrating transformers into SR workflows and, if possible, point to future directions for a possible publication.
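For orientation, the attention mechanism at the heart of these models can be sketched in a few lines of scaled dot-product self-attention over patch embeddings; the projection matrices here are placeholders.

```python
# Scaled dot-product self-attention: every token attends to every other token,
# which is what lets transformer-based SR models relate distant image regions.
import torch
import torch.nn.functional as F

def self_attention(x, Wq, Wk, Wv):
    """x: (tokens, dim) patch embeddings; Wq/Wk/Wv: (dim, dim) projections."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.t() / K.size(-1) ** 0.5     # similarity between all token pairs
    return F.softmax(scores, dim=-1) @ V       # weighted mix of values
```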
[Master] Blockchain-Enabled Digital Product Passports for Food Supply Chains (Gagan Gowda)
The Digital Product Passport (DPP) is a powerful tool in the evolving landscape of the Digital Circular Economy and Industry 4.0, designed to improve traceability, sustainability, and transparency. By assigning unique digital identities to physical products, DPPs enable detailed tracking throughout their lifecycles, from production to disposal or recycling. In line with EU policies, which mandate the implementation of DPPs for certain sectors to enhance circularity and resource efficiency, this technology is increasingly crucial for compliance and sustainability goals. Blockchain technology ensures the security and integrity of this data, creating an immutable record that enhances accountability. In food supply chains, the adoption of DPPs is essential for improving traceability, food safety, and sustainability, allowing for real-time monitoring of a product’s journey from farm to consumer. The student’s goal is to explore how blockchain-enabled DPPs can be effectively implemented in food supply chains, identifying key challenges, potential solutions, and strategies for enhancing transparency, efficiency, and sustainability in this sector.
[Master] Integrating Large Language Models into AutoML systems (Marc Gänsler)
The task is to conduct a comprehensive paper review of the current state of research in Automated Machine Learning (AutoML), with a specific focus on how Large Language Models (LLMs) could enhance AutoML systems. The review should cover key advancements in AutoML, including techniques for model selection, hyperparameter optimization, and feature engineering. Additionally, the review should explore the potential for LLMs to contribute to these processes, such as improving model interpretability, automating more complex tasks, and optimizing workflows within AutoML systems. The review should synthesize existing research, identify gaps, and propose future directions for integrating LLMs into AutoML.
[Any] Knowledge Graph-based Retrieval-Augmented-Generation (Desiree Heim)
To enhance user prompts for Large Language Models, Knowledge Graph Retrieval-Augmented Generation analyzes user prompts and enriches them with relevant information from knowledge graphs. The goal of this seminar is to create a comprehensive overview of existing approaches and classify them based on relevant characteristics.
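As a minimal, hypothetical sketch of the basic idea, the snippet below retrieves triples whose subject appears in the prompt and prepends them as context before the enriched prompt would be sent to an LLM; the toy triple store and string-matching rule stand in for a real knowledge graph and entity linker.

```python
# Toy knowledge-graph RAG: serialize matching triples and prepend them as context.
knowledge_graph = [
    ("DFKI", "located_in", "Kaiserslautern"),
    ("DFKI", "researches", "Artificial Intelligence"),
]

def enrich_prompt(prompt):
    facts = [f"{s} {p.replace('_', ' ')} {o}."
             for s, p, o in knowledge_graph if s.lower() in prompt.lower()]
    return "Context:\n" + "\n".join(facts) + "\n\nQuestion: " + prompt

print(enrich_prompt("What does DFKI research?"))   # enriched prompt for the LLM
```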
[Master] Deep Learning for Requirement Engineering (Summra Saleem)
Software requirements engineering is a critical phase in the software development lifecycle that focuses on defining, documenting, and managing the needs and expectations of stakeholders for a software system. This process involves eliciting requirements from various sources, analyzing and refining them, specifying them in a clear and unambiguous manner, and validating them with stakeholders. Effective requirements engineering helps ensure that the final software product aligns with user needs and business objectives. The goal of this seminar is to explore AI-driven algorithms that can help automate the manual and time-consuming tasks of requirements engineering.
[Master] XAI for Genomics (Ahtisham Fazeel)
The importance of XAI (Explainable AI) in genomics lies in its ability to make complex AI models more transparent, interpretable, and trustworthy, especially in critical areas like disease diagnosis and personalized medicine. Understanding how AI models make decisions is vital for ensuring their accuracy, fairness, and ethical application in genomic research. This seminar will cover the XAI paradigm in its entirety, exploring both foundational methods like feature attribution and advanced techniques for improving model interpretability.
[Master] Data Fusion in Omics (Ahtisham Fazeel)
Data fusion in omics is essential for integrating diverse datasets, such as genomics, proteomics, and transcriptomics, to provide a comprehensive understanding of biological systems. Combining these data types enables more accurate predictions and deeper insights into complex biological processes, such as disease mechanisms. This seminar will cover the entire data fusion paradigm, exploring key methods for multi-omics integration, including feature-level, decision-level, and deep learning-based fusion techniques.
[Master] XAI for Finance (Michael Schulze)
AI is gaining more and more traction in the finance sector. However, especially in finance, the explainability of AI models plays a key role in enabling decisions to be based on them. This seminar topic reviews XAI (explainable AI) approaches within the scope of finance use cases.