Title:
Word classification across speech modes from low-density electrocorticography signals.
Authors:
de Borman A; Laboratory for Neuro- and Psychophysiology, KU Leuven, Leuven, Belgium., Dyck BV; Laboratory for Neuro- and Psychophysiology, KU Leuven, Leuven, Belgium., Rooy KV; Department of Neurology, Ghent University Hospital, Ghent, Belgium., Carrette E; Department of Neurology, Ghent University Hospital, Ghent, Belgium., Meurs A; Department of Neurology, Ghent University Hospital, Ghent, Belgium., Roost DV; Department of Neurosurgery, Ghent University Hospital, Ghent, Belgium., Van Hulle MM; Laboratory for Neuro- and Psychophysiology, KU Leuven, Leuven, Belgium.; Leuven Brain Institute (LBI), Leuven, Belgium.; Leuven institute for Artificial Intelligence (Leuven.AI), Leuven, Belgium.
Source:
Journal of neural engineering [J Neural Eng] 2026 Feb 02; Vol. 23 (1). Date of Electronic Publication: 2026 Feb 02.
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: Institute of Physics Pub Country of Publication: England NLM ID: 101217933 Publication Model: Electronic Cited Medium: Internet ISSN: 1741-2552 (Electronic) Linking ISSN: 17412552 NLM ISO Abbreviation: J Neural Eng Subsets: MEDLINE
Imprint Name(s):
Original Publication: Bristol, U.K. : Institute of Physics Pub., 2004-
Contributed Indexing:
Keywords: brain-computer interface; electrocorticography; speech
Entry Date(s):
Date Created: 20260202 Date Completed: 20260202 Latest Revision: 20260202
Update Code:
20260203
DOI:
10.1088/1741-2552/ae3a1b
PMID:
41623142
Database:
MEDLINE

Abstract:
Objective. Speech brain-computer interfaces (BCIs) aim to provide an alternative means of communication for individuals who are unable to speak. Remarkable progress has been made in decoding attempted speech in individuals with severe anarthria. In contrast, imagined speech remains challenging to decode, and its underlying neural mechanisms and relation to other speech modes are still elusive.
Approach. In this study, we collected low-density electrocorticography signals from ten participants during a word repetition task. Electrodes were implanted for presurgical epilepsy evaluation in participants with preserved speech abilities. Linear discriminant analysis models were trained to classify five words in each speech mode. We compared models trained during speaking, listening, imagining speaking, mouthing and reading. The relations between speech modes were investigated by transferring and augmenting models across modes.
Main results. As expected, performed speech achieved the highest word classification accuracy, followed by listening, mouthing, imagining and reading. While the accuracies obtained were not high enough for practical application, they allowed model transfer and augmentation to be investigated across speech modes. Transferring or augmenting models from one speech mode to another could significantly improve model performance. In particular, patterns learned from performed and perceived speech could generalize to imagined speech, leading to significantly improved imagined-speech performance in seven participants. For four participants, imagined speech could be decoded above chance only when models were transferred from, or augmented with, performed or perceived speech.
Significance. Imagined speech is often preferred by speech BCI users over attempted speech, as it requires less effort and can be produced more quickly. Transferring models across speech modes has the potential to facilitate and boost the development of imagined speech decoders.
(Creative Commons Attribution license.)
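The approach described in the abstract (shared-covariance linear discriminant analysis for five-word classification, with augmentation by pooling trials from a second speech mode) can be sketched as follows. This is a minimal numpy illustration on synthetic features only; the feature dimensions, trial counts, and toy data are hypothetical and do not reproduce the authors' ECoG pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_lda(X, y):
    """Fit shared-covariance LDA: class means plus a pooled precision matrix."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    # pooled within-class covariance, with a small ridge for numerical stability
    Xc = np.concatenate([X[y == c] - means[i] for i, c in enumerate(classes)])
    cov = Xc.T @ Xc / (len(X) - len(classes)) + 1e-6 * np.eye(X.shape[1])
    return classes, means, np.linalg.inv(cov)

def predict_lda(model, X):
    classes, means, prec = model
    # linear discriminant score per class: x' S^-1 m_c - 0.5 m_c' S^-1 m_c
    scores = X @ prec @ means.T - 0.5 * np.sum((means @ prec) * means, axis=1)
    return classes[np.argmax(scores, axis=1)]

def make_mode(n_per_word, shift):
    """Synthetic stand-in for per-trial ECoG features: 5 words, 8 features."""
    X = np.concatenate([rng.normal(w + shift, 1.0, size=(n_per_word, 8))
                        for w in range(5)])
    y = np.repeat(np.arange(5), n_per_word)
    return X, y

X_imag, y_imag = make_mode(10, 0.0)    # scarce imagined-speech trials
X_overt, y_overt = make_mode(40, 0.1)  # abundant overt-speech trials

# augmentation: pool overt-speech trials with the imagined-speech training set
model_aug = fit_lda(np.vstack([X_imag, X_overt]),
                    np.concatenate([y_imag, y_overt]))

X_test, y_test = make_mode(20, 0.0)
acc = (predict_lda(model_aug, X_test) == y_test).mean()
```

The pooling step is the simplest form of augmentation consistent with the abstract; model *transfer* would instead fit `fit_lda` on the overt-speech trials alone and apply it directly to imagined-speech test data.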