Authors: D Testa, G Bonetta, R Bernardi, A Bondielli
Year: 2025
Published in: arXiv preprint arXiv:2502.16989, 2025 (arxiv.org)
Institution: Università di Roma La Sapienza
Research Area: Multimodal Reasoning, AI Benchmarking
Discipline: Artificial Intelligence
MAIA is a benchmark designed to evaluate the reasoning abilities of Vision Language Models (VLMs) on video-based tasks, with a focus on Italian culture and language, revealing their fragility in consistency and visually grounded language comprehension and generation.
Methods: MAIA comprises a set of video-related questions tested with two tasks: visual statement verification and open-ended visual question answering, categorized into twelve reasoning types to disentangle language-vision relations.
Key Findings: Vision Language Models (VLMs) remain fragile at consistent, visually grounded natural language understanding and generation across fine-grained reasoning categories.
DOI: https://doi.org/10.48550/arXiv.2502.16989
Authors: Z Qiu, W Liu, H Feng, Z Liu, T Xiao
Year: 2024
Published in: arXiv preprint, 2024 (arxiv.org)
Institution: Massachusetts Institute of Technology, Max Planck Institute, University of Cambridge
Research Area: Computational cognition, LLM evaluation, Program synthesis, Multimodal reasoning
Discipline: Artificial Intelligence
Introduces SGP-Bench, a benchmark testing whether LLMs can answer semantic and spatial questions about images purely from graphics programs (SVG/CAD), effectively probing "visual imagination without vision." The authors show that current LLMs struggle, sometimes performing near chance, even on images that are trivial for humans, but demonstrate that Symbolic Instruction Tuning (SIT) can meaningfully improve this ability.