Title:
Explainable generalized additive neural networks with independent neural network training.
Source:
Statistics & Computing; Feb2024, Vol. 34 Issue 1, p1-11, 11p
Database:
Complementary Index

Abstract:

Neural networks are among the most popular machine-learning methods today, given their strong performance on diverse tasks such as computer vision, anomaly detection, computer-aided disease detection and diagnosis, and natural language processing. While neural networks are known for their high performance, they often suffer from the so-called "black-box" problem: it is difficult to understand how the model reaches its decisions. We introduce a neural network topology based on Generalized Additive Models (GAMs). By training an independent neural network to estimate the contribution of each feature to the output variable, we obtain a highly accurate and explainable deep learning model, providing a flexible framework for training Generalized Additive Neural Networks that imposes no restrictions on the neural network architecture. The proposed algorithm is evaluated through simulation studies with synthetic datasets, as well as a real-world use case of Distributed Denial of Service cyberattack detection on an Industrial Control System. The results show that our proposal outperforms other GAM-based neural network implementations while providing higher interpretability, making it a promising approach for high-risk AI applications where transparency and accountability are crucial. [ABSTRACT FROM AUTHOR]
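The additive structure the abstract describes — one subnetwork per input feature, with the prediction formed as the sum of the per-feature contributions f_j(x_j) — can be sketched as follows. This is a minimal, generic GAM-style network in NumPy with illustrative hyperparameters and synthetic data; it is not the authors' training algorithm (in particular, it fits all subnetworks jointly rather than reproducing the paper's independent-training procedure).

```python
import numpy as np

# Illustrative sketch of a GAM-style additive neural network:
# y_hat(x) = sum_j f_j(x_j), where each f_j is a tiny one-hidden-layer net.
# All names, sizes, and the training loop are assumptions for demonstration.

rng = np.random.default_rng(0)

def init_subnet(hidden=8):
    """One-hidden-layer net mapping a scalar feature to a scalar contribution."""
    return {
        "w1": rng.normal(0.0, 0.5, (1, hidden)),
        "b1": np.zeros(hidden),
        "w2": rng.normal(0.0, 0.5, (hidden, 1)),
        "b2": np.zeros(1),
    }

def subnet_forward(p, xj):
    """Return hidden activations and this feature's contribution f_j(x_j)."""
    h = np.tanh(xj[:, None] @ p["w1"] + p["b1"])   # (n, hidden)
    return h, (h @ p["w2"] + p["b2"]).ravel()      # (n,)

# Synthetic data: the target is additive in the features, so the model is
# well-specified and each subnet can recover its feature's shape function.
n = 200
X = rng.uniform(-2.0, 2.0, (n, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

params = [init_subnet() for _ in range(X.shape[1])]
lr = 0.05
mse_history = []

for epoch in range(500):
    acts, contribs = zip(*(subnet_forward(p, X[:, j]) for j, p in enumerate(params)))
    y_hat = np.sum(contribs, axis=0)               # additive prediction
    err = y_hat - y
    mse_history.append(float(np.mean(err ** 2)))
    g = (2.0 / n) * err[:, None]                   # dMSE/dy_hat, shape (n, 1)
    for j, p in enumerate(params):                 # manual backprop per subnet
        h = acts[j]
        dh = g @ p["w2"].T                         # (n, hidden)
        dz = dh * (1.0 - h ** 2)                   # tanh derivative
        p["w2"] -= lr * (h.T @ g)
        p["b2"] -= lr * g.sum(axis=0)
        p["w1"] -= lr * (X[:, j][None, :] @ dz)
        p["b1"] -= lr * dz.sum(axis=0)

# Interpretability: subnet j's output IS feature j's estimated contribution,
# so it can be plotted directly as a one-dimensional shape function.
shape_fn_0 = subnet_forward(params[0], X[:, 0])[1]
```

Because each feature's effect is isolated in its own subnetwork, explaining a prediction reduces to reading off (or plotting) the individual contributions — the property that makes GAM-based networks attractive for the high-risk applications the abstract mentions.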

Copyright of Statistics & Computing is the property of Springer Nature and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)