Title:
Emotion recognition from auditory autonomous sensory meridian response (ASMR) using multi-modal physiological signals.
Authors:
Gahlan N; School of Cyber Security and Digital Forensics, National Forensic Sciences University, Gandhinagar, Ahmedabad 382007, India. Sethia D; Department of Software Engineering, Delhi Technological University, New Delhi, Delhi 110042, India.
Source:
Physiological measurement [Physiol Meas] 2026 Jan 21; Vol. 47 (1). Date of Electronic Publication: 2026 Jan 21.
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: IOP Pub. Ltd Country of Publication: England NLM ID: 9306921 Publication Model: Electronic Cited Medium: Internet ISSN: 1361-6579 (Electronic) Linking ISSN: 09673334 NLM ISO Abbreviation: Physiol Meas Subsets: MEDLINE
Imprint Name(s):
Original Publication: Bristol, UK : IOP Pub. Ltd., c1993-
Contributed Indexing:
Keywords: ASMR; emotion recognition; multi-modal physiological signals; signal processing; statistical analysis; wearable sensors
Entry Date(s):
Date Created: 20260108 Date Completed: 20260121 Latest Revision: 20260121
Update Code:
20260130
DOI:
10.1088/1361-6579/ae35ca
PMID:
41505907
Database:
MEDLINE

Abstract:

*Objective.* Autonomous sensory meridian response (ASMR) is a tingling sensation induced while attending to specific sounds, such as whispering, tapping, scratching, and other soft, repetitive noises. While previous studies focused on low-arousal positive emotions such as relaxation and calmness, this study explores a broader range of emotions elicited by ASMR auditory stimuli, including happiness, sadness, and disgust.

*Approach.* The study collected multi-modal physiological data (electroencephalography, photoplethysmography, and electrodermal activity) via wearable bio-sensors from 23 ASMR-experiencing participants exposed to four ASMR-inducing auditory stimuli. A repeated-measures ANOVA (rmANOVA) on the physiological responses and self-reported ratings showed a significant difference between the emotions induced by the four audio stimuli (Happy from A1, Sad from A2, Calm from A3, Disgust from A4) and the neutral state. Deep learning classifiers, an artificial neural network (ANN) and a convolutional neural network (CNN), were then applied to the multi-modal physiological data to classify the four induced emotions along the dimensions of arousal, valence, and dominance.

*Main results.* With multi-modal valence-arousal-dominance features, the ANN and CNN achieved classification accuracies of 96.12% and 74.25%, respectively, in classifying the four emotions induced by the ASMR stimuli. The rmANOVA results indicated distinctions among the four emotions, with p-values below the significance threshold of 0.05.

*Significance.* The results highlight the effectiveness of multi-modal physiological signals and deep learning in reliably classifying ASMR-induced emotions, contributing to advances in emotion recognition for mental health and therapeutic applications.

(© 2026 Institute of Physics and Engineering in Medicine. All rights, including for text and data mining, AI training, and similar technologies, are reserved.)
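To illustrate the kind of statistical test the abstract describes, the following is a minimal sketch (not the authors' code) of a repeated-measures ANOVA over one within-subject factor with five levels (four ASMR stimuli plus neutral), using statsmodels' `AnovaRM`. All ratings here are synthetic stand-ins; the participant count of 23 is taken from the abstract.

```python
# Hedged sketch of an rmANOVA on per-participant ratings; data are synthetic.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
conditions = ["A1_happy", "A2_sad", "A3_calm", "A4_disgust", "neutral"]
rows = []
for subject in range(23):  # 23 participants, as in the study
    for i, cond in enumerate(conditions):
        # synthetic self-report rating with a condition-dependent mean
        rows.append({"subject": subject,
                     "condition": cond,
                     "rating": i + rng.normal(scale=0.5)})
df = pd.DataFrame(rows)

# One within-subject factor (stimulus condition), one dependent variable.
res = AnovaRM(df, depvar="rating", subject="subject",
              within=["condition"]).fit()
print(res.anova_table)  # F value, degrees of freedom, and p-value
```

With a clear condition effect, the p-value in `anova_table["Pr > F"]` falls well below 0.05, matching the significance criterion the abstract applies.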
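Similarly, the classification step can be sketched with a small feed-forward network. This is not the authors' architecture: the valence-arousal-dominance (VAD) features, cluster centres, and network sizes below are all assumptions for illustration, standing in for the paper's EEG/PPG/EDA-derived features.

```python
# Hedged sketch: a small ANN classifying four emotions from synthetic
# valence-arousal-dominance (VAD) features; not the paper's model.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_per_class = 200
labels = ["happy", "sad", "calm", "disgust"]
# assumed class-dependent VAD cluster centres (illustration only)
centres = np.array([[0.8, 0.7, 0.6],    # happy: high valence, high arousal
                    [0.2, 0.3, 0.3],    # sad: low valence, low arousal
                    [0.7, 0.2, 0.5],    # calm: high valence, low arousal
                    [0.1, 0.8, 0.4]])   # disgust: low valence, high arousal
X = np.vstack([c + rng.normal(scale=0.1, size=(n_per_class, 3))
               for c in centres])
y = np.repeat(labels, n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_train)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                    random_state=0)
clf.fit(scaler.transform(X_train), y_train)
print(f"test accuracy: {clf.score(scaler.transform(X_test), y_test):.3f}")
```

On these well-separated synthetic clusters the network classifies nearly perfectly; real physiological features are far noisier, which is where the multi-modal fusion the abstract describes becomes important.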