Benchmarking in Neuro-Symbolic AI
Abstract
Neural-symbolic (NeSy) AI has gained considerable popularity by enhancing learning models with explicit reasoning capabilities. Both new systems and new benchmarks are constantly introduced and used to evaluate learning and reasoning skills. The large variety of systems and benchmarks, however, makes it difficult to establish a fair comparison among the various frameworks, let alone a unifying set of benchmarking criteria. This paper analyzes the state of the art in benchmarking NeSy systems, studies its limitations, and proposes ways to overcome them. We categorize popular neural-symbolic frameworks into three groups: model-theoretic, proof-theoretic fuzzy, and proof-theoretic probabilistic systems. We show how these three categories have distinct strengths and weaknesses, and how this is reflected in the types of tasks and benchmarks to which they are applied.
How to cite
@inproceedings{manhaeve2024benchmarking,
  title     = {Benchmarking in Neuro-Symbolic AI},
  author    = {Manhaeve, Robin and Giannini, Francesco and Ali, Mehdi and Azzolini, Damiano and Bizzarri, Alice and Borghesi, Andrea and Bortolotti, Samuele and De Raedt, Luc and Dhami, Devendra and Diligenti, Michelangelo and others},
  booktitle = {Proceedings of The 4th International Joint Conference on Learning \& Reasoning},
  year      = {2024}
}