Title:
Counterfactual causal inference for robust visual question answering.
Authors:
Li W; Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, 541004, China; Guangxi Key Lab of Multi-source Information Mining and Security, Guangxi Normal University, Guilin, 541004, China; School of Computer Science, Liupanshui Normal University, Liupanshui, 553004, China. Electronic address: liwei24@stu.gxnu.edu.cn.
Li Z; Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, 541004, China; Guangxi Key Lab of Multi-source Information Mining and Security, Guangxi Normal University, Guilin, 541004, China. Electronic address: lizx@gxnu.edu.cn.
Deng F; Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, 541004, China; Guangxi Key Lab of Multi-source Information Mining and Security, Guangxi Normal University, Guilin, 541004, China. Electronic address: dengfy@stu.gxnu.edu.cn.
Zeng K; Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, 541004, China; Guangxi Key Lab of Multi-source Information Mining and Security, Guangxi Normal University, Guilin, 541004, China. Electronic address: zengk@stu.gxnu.edu.cn.
Zhang C; Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, 541004, China; Guangxi Key Lab of Multi-source Information Mining and Security, Guangxi Normal University, Guilin, 541004, China. Electronic address: clzhang@gxnu.edu.cn.
Source:
Neural networks : the official journal of the International Neural Network Society [Neural Netw] 2026 Feb; Vol. 194, pp. 108115. Date of Electronic Publication: 2025 Sep 16.
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: Pergamon Press Country of Publication: United States NLM ID: 8805018 Publication Model: Print-Electronic Cited Medium: Internet ISSN: 1879-2782 (Electronic) Linking ISSN: 08936080 NLM ISO Abbreviation: Neural Netw Subsets: MEDLINE
Imprint Name(s):
Original Publication: New York : Pergamon Press, [c1988-
Contributed Indexing:
Keywords: Causal graphs; Counterfactual inference; Multimodal learning; Visual question answering
Entry Date(s):
Date Created: 20250919 Date Completed: 20251216 Latest Revision: 20251216
Update Code:
20260130
DOI:
10.1016/j.neunet.2025.108115
PMID:
40972116
Database:
MEDLINE

*Further Information*

*Visual Question Answering (VQA) systems have seen remarkable progress with the incorporation of multimodal data. However, their performance is still hampered by biases ingrained in language and vision modalities, frequently resulting in subpar generalization. In this study, we introduce a novel counterfactual causal framework (CC-VQA). This framework utilizes Counterfactual Sample Synthesis (CSS) and causal inference to tackle cross-modality biases. Our approach innovatively employs a strategy based on causal graphs, which effectively disentangles spurious correlations in multimodal data. This ensures a balanced and precise multimodal reasoning process, enabling the model to make more accurate and unbiased decisions. Moreover, we propose a contrastive loss mechanism. By contrasting the embeddings of positive and negative samples, this mechanism significantly enhances the robustness of VQA models. Additionally, we develop a robust training strategy that improves both the visual-explainable and question-sensitive capabilities of these models. Our experimental evaluations on benchmark datasets, such as VQA-CP v2 and VQA v2, demonstrate substantial improvements in bias mitigation and overall accuracy. The proposed CC-VQA framework outperforms state-of-the-art methods, highlighting its effectiveness in enhancing the performance of VQA systems.
(Copyright © 2025 Elsevier Ltd. All rights reserved.)*
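The abstract describes a contrastive loss that pulls the embedding of a sample toward its positive counterpart and away from negatives. The record does not give the paper's exact formulation, so the following is only a generic InfoNCE-style sketch of that idea; the function name, the cosine-similarity choice, and the `temperature` parameter are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """Generic InfoNCE-style contrastive loss (sketch, not the paper's exact loss).

    The anchor is scored against one positive and several negative embeddings
    by cosine similarity; a softmax cross-entropy then rewards the anchor for
    being most similar to the positive.
    """
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Similarity logits: positive at index 0, negatives after it.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                    # cross-entropy against index 0

anchor = np.array([1.0, 0.0])
aligned    = contrastive_loss(anchor, np.array([1.0, 0.0]), [np.array([0.0, 1.0])])
misaligned = contrastive_loss(anchor, np.array([0.0, 1.0]), [np.array([1.0, 0.0])])
# An anchor that matches its positive incurs a much smaller loss.
```

Minimizing such a loss over counterfactual positive/negative pairs is one standard way to make embeddings bias-robust, which is consistent with the role the abstract assigns to its contrastive mechanism.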

*Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.*