Master's Thesis

Project Overview

This thesis explores the convergence of Interactive Machine Learning and Neuro-Symbolic Artificial Intelligence. Its primary objective is to develop an interactive debugging protocol for hierarchical classifiers. Current methods for explanatory debugging in machine learning are often limited to one-shot interactions, in which the machine explains individual predictions. This project addresses the need for more comprehensive debugging by proposing a multi-round interaction approach that supports structured arguments between the machine and the user to validate or challenge predictions and statements about the model.
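To make the intended exchange concrete, the sketch below shows one way such a multi-round session could be structured in Python. The Argument dataclass, the debugging_session loop, and the accept-or-challenge feedback function are illustrative assumptions for this overview, not the protocol developed in the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    claim: str                      # a statement about a prediction or the model
    support: float                  # relevance score backing the claim
    rebuttals: list = field(default_factory=list)

def debugging_session(model_arguments, user_feedback, max_rounds=5):
    """Run a multi-round exchange: the machine puts forward arguments,
    the user accepts or challenges each one, and challenged arguments
    carry over to the next round until agreement or the round budget ends."""
    open_args = list(model_arguments)
    for _ in range(max_rounds):
        challenged = []
        for arg in open_args:
            verdict = user_feedback(arg)        # "accept" or a counter-claim
            if verdict != "accept":
                arg.rebuttals.append(verdict)
                challenged.append(arg)
        if not challenged:                      # every argument was validated
            return []
        open_args = challenged                  # only disputed arguments continue
    return open_args                            # arguments still disputed at the end

# Toy usage: a feedback function that accepts arguments above a relevance threshold.
args = [Argument("feature 'wings' supports class 'bird'", 0.82),
        Argument("feature 'fur' supports class 'bird'", 0.11)]
disputed = debugging_session(args, lambda a: "accept" if a.support > 0.5 else "low relevance")
print([a.claim for a in disputed])              # the 'fur' argument remains disputed
```

In a real session the feedback function would be the human user, and challenged arguments would be refined by the machine between rounds rather than simply carried over.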

Objectives

  1. Multi-Round Interaction: Develop a multi-round interaction approach to enable more extensive debugging compared to one-shot interactions.

  2. Automated Argument Extraction: Automatically extract relevant arguments from the model using eXplainable Artificial Intelligence (XAI) relevance measures (see the sketch after this list).

  3. User-Model Interaction: Facilitate dialogical interaction between the user and the machine, enabling structured argumentation during debugging.

  4. Model Validation: Validate the effectiveness of the interactive debugging protocol on Neuro-Symbolic architectures, including Coherent Hierarchical Multi-Label Classification Networks and Semantic Probabilistic Layers.

  5. Feasibility of Bug Correction: Investigate whether Neuro-Symbolic models can be corrected when the underlying bugs are known in advance.
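Objective 2 can be illustrated with a small, hedged example: the function below ranks input features by a gradient-times-input relevance score and turns the most relevant ones into candidate arguments for the debugging dialogue. The function name, the choice of attribution method, and the linear stand-in model are assumptions made for illustration; the thesis may rely on different XAI relevance measures.

```python
import torch

def extract_arguments(model, x, class_idx, top_k=3, feature_names=None):
    """Rank input features by a simple gradient-times-input relevance score
    for the class under discussion and return the top-k features as
    (feature name, relevance) candidate arguments."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, class_idx]              # logit of the class under discussion
    score.backward()
    relevance = (x.grad * x).squeeze(0)         # gradient-times-input attribution
    top = torch.topk(relevance.abs(), k=top_k)
    names = feature_names or [f"x[{i}]" for i in range(relevance.numel())]
    return [(names[i], relevance[i].item()) for i in top.indices.tolist()]

# Toy usage with a linear classifier standing in for the hierarchical model.
model = torch.nn.Linear(4, 3)
x = torch.rand(1, 4)
for name, rel in extract_arguments(model, x, class_idx=1):
    print(f"claim: '{name}' is relevant to class 1 (relevance {rel:+.3f})")
```

Each returned pair would then be phrased as a claim the machine can defend in the multi-round session sketched above.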

Key Results

Future Directions

Conclusion

This project introduces an interactive debugging protocol that extends explanatory debugging beyond one-shot interactions. While it serves as a starting point rather than a complete solution, it has the potential to shape future research in Interactive Machine Learning and Neuro-Symbolic AI.