*Result*: Automated Model Inference for Gaussian Processes: An Overview of State-of-the-Art Methods and Algorithms.
*Gaussian process models (GPMs) are widely regarded as a prominent tool for learning statistical models of data that enable interpolation, regression, and classification. These models are typically instantiated as a Gaussian process with a zero-mean function and a radial basis covariance function. While such default instantiations yield acceptable predictive accuracy, GPM inference algorithms go further and automatically search for an application-specific model that fits a particular dataset. State-of-the-art methods for automated GPM inference traverse the space of candidate models in a rather intricate way and thus incur super-quadratic computation time for model selection and evaluation. Since these properties restrict processing to small datasets of low statistical versatility, various methods and algorithms using global as well as local approximations have been proposed for efficient inference of large-scale GPMs. Local approximations represent the data via local sub-models, whereas global approaches capture the data's inherent characteristics by means of an educated sample. In this paper, we survey the current state of the art in automated model inference for Gaussian processes and outline the strengths and shortcomings of the respective approaches. A performance analysis backs our theoretical findings with further empirical evidence: it indicates that approximate inference algorithms, especially locally approximating ones, deliver superior runtime performance while maintaining the quality level of algorithms based on non-approximated Gaussian processes.
(© The Author(s) 2022.)*
*Conflict of interest: The authors declare that they have no conflict of interest.*
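The default instantiation described in the abstract (a zero-mean Gaussian process with a radial basis covariance function) and the local-approximation idea can be sketched in a few lines of NumPy. This is a minimal illustration under assumed unit hyperparameters, with hypothetical function names; it is not the algorithms evaluated in the paper. The exact predictor costs O(n³) in the number of training points, which is precisely what motivates the approximations the abstract discusses.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Radial basis (squared-exponential) covariance function.
    sq = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gp_predict(X, y, Xs, noise=1e-2):
    # Exact zero-mean GP regression: solving with the full n x n Gram
    # matrix costs O(n^3), which limits exact inference to small datasets.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xs, X)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    cov = rbf_kernel(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov

def gp_predict_local(X, y, Xs, k=2, noise=1e-2):
    # Local approximation: each test point is predicted by a sub-model
    # built from its k nearest training points only, replacing one
    # O(n^3) solve with many cheap O(k^3) solves.
    means = np.empty(len(Xs))
    for i, xs in enumerate(Xs):
        idx = np.argsort(np.linalg.norm(X - xs, axis=1))[:k]
        m, _ = gp_predict(X[idx], y[idx], xs[None, :], noise=noise)
        means[i] = m[0]
    return means
```

With `k` equal to the full dataset size the local predictor coincides with the exact one; shrinking `k` trades predictive fidelity far from the training data for the runtime gains that, per the abstract, make local approximations attractive at scale.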