*Result*: Adaptive Feeding Robot With Multisensor Feedback and Predictive Control Using Autoregressive Integrated Moving Average-Feed-Forward Neural Network: Simulation Study.
*Further Information*
*Background: Eating is a primary daily activity, crucial for maintaining independence and quality of life. Individuals with neuromuscular impairments often struggle to eat independently, and current assistive devices offer limited help because they are predominantly passive and lack adaptive capabilities.
Objective: This study aims to introduce an adaptive feeding robot that integrates time series decomposition, autoregressive integrated moving average (ARIMA), and feed-forward neural networks (FFNN). The goal is to enhance feeding precision, efficiency, and personalization, thereby promoting autonomy for individuals with motor impairments.
Methods: The proposed feeding robot uses a suite of sensors and actuators to collect real-time data: facial landmarks, mouth status (open or closed), fork-to-mouth and fork-to-plate distances, and the force and angle required to handle each food type. The ARIMA and FFNN algorithms analyze these data to predict user behavior and adjust feeding actions dynamically. A strain gauge sensor ensures precise force regulation, an ultrasonic sensor optimizes positioning, and facial recognition algorithms verify safety by monitoring mouth status and plate contents.
Results: The combined ARIMA+FFNN model achieved a mean squared error (MSE) of 0.008 and an R2 of 94%, significantly outperforming the standalone ARIMA (MSE=0.015; R2=85%) and FFNN (MSE=0.012; R2=88%). Feeding success rate improved from 75% to 90% over 150 iterations (P<.001), and response time decreased by 39% (from 3.6 s to 2.2 s). ANOVA revealed significant differences in success rates across scenarios (F3,146=12.34; P=.002), with scenario 1 outperforming scenario 3 (P=.030) and scenario 4 (P=.010). Object detection showed high accuracy (face detection precision=97%, recall=96%, 95% CI 94%-99%). Applied forces matched the expected ranges with minimal deviation (mean [SD] 24 [1] N for apples; 7 [0.5] N for strawberries).
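For readers unfamiliar with the two headline metrics, MSE and R2 are computed as below. This is a generic illustration on made-up numbers, not the study's data; the values shown bear no relation to the reported 0.008 and 94%.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: average of squared prediction errors."""
    return float(np.mean((y_true - y_pred) ** 2))

def r_squared(y_true, y_pred):
    """R^2: fraction of variance in y_true explained by the predictions."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1 - ss_res / ss_tot)

# Made-up target and prediction vectors for illustration only.
y = np.array([0.2, 0.5, 0.9, 1.3, 1.1, 0.7])
yhat = np.array([0.25, 0.45, 0.95, 1.25, 1.05, 0.75])
print(round(mse(y, yhat), 4), round(r_squared(y, yhat), 3))  # 0.0025 0.981
```

A lower MSE and an R2 closer to 1 (100%) both indicate a tighter fit, which is why the hybrid model's MSE=0.008 with R2=94% dominates both standalone baselines.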
Conclusions: By combining predictive algorithms with adaptive learning mechanisms, the feeding robot demonstrates substantial improvements in precision, responsiveness, and personalization. These advancements underscore its potential to transform assistive technology in rehabilitation, delivering safe and highly personalized feeding assistance to individuals with motor impairments and thereby enhancing their independence.
(© Shabnam Sadeghi-Esfahlani, Vahaj Mohaghegh, Alireza Sanaei, Zainib Bilal, Nathon Arthur, Hassan Shirvani. Originally published in JMIR Formative Research (https://formative.jmir.org).)*