*Result*: Word classification across speech modes from low-density electrocorticography signals.
*Further Information*
*Objective.* Speech brain-computer interfaces (BCIs) aim to provide an alternative means of communication for individuals who are unable to speak. Remarkable progress has been achieved in decoding attempted speech in individuals with severe anarthria. In contrast, imagined speech remains challenging to decode, and its underlying neural mechanisms and relations to other speech modes are still elusive.

*Approach.* In this study, we collected low-density electrocorticography signals from ten participants during a word repetition task. Electrodes had been implanted for presurgical epilepsy evaluation in participants with preserved speech abilities. Models were developed using linear discriminant analysis to classify five words in each speech mode. We compared models trained during speaking, listening, imagining speaking, mouthing, and reading. The relations between speech modes were investigated by transferring and augmenting models across speech modes.

*Main results.* As expected, performed speech achieved the highest word classification accuracy, followed by listening, mouthing, imagining, and reading. While the accuracies obtained were not high enough for practical application, model transfer and augmentation could still be investigated across speech modes. Transferring or augmenting models from one speech mode to another significantly improved model performance. In particular, patterns learned from performed and perceived speech generalized to imagined speech, yielding significantly improved imagined speech performance in seven participants. For four participants, imagined speech could be decoded above chance only when models were transferred from, or augmented with, performed or perceived speech.

*Significance.* Imagined speech is often preferred by speech BCI users over attempted speech, as it requires less effort and can be produced more quickly.
Transferring models across speech modes has the potential to facilitate and boost the development of imagined speech decoders.
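The transfer and augmentation strategies described above can be sketched with scikit-learn's linear discriminant analysis. This is a minimal illustration on synthetic data, not the study's pipeline: the feature matrices, noise levels, and trial counts are invented placeholders, and chance level is 0.2 for five words.

```python
# Hypothetical sketch: cross-mode transfer and augmentation with LDA.
# All data below is synthetic; only the general strategy mirrors the abstract.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_trials, n_features, n_words = 50, 20, 5

# Shared class structure across modes, with mode-specific noise levels.
class_means = rng.normal(size=(n_words, n_features))

def make_mode(noise_scale):
    """Simulate one speech mode: labeled trials around shared class means."""
    y = rng.integers(0, n_words, n_trials)
    X = class_means[y] + noise_scale * rng.normal(size=(n_trials, n_features))
    return X, y

X_speak, y_speak = make_mode(0.5)            # "performed speech": clearer signal
X_imag_train, y_imag_train = make_mode(1.5)  # "imagined speech": noisier signal
X_imag_test, y_imag_test = make_mode(1.5)

# Transfer: train on performed speech, evaluate on imagined speech.
transfer = LinearDiscriminantAnalysis().fit(X_speak, y_speak)
acc_transfer = transfer.score(X_imag_test, y_imag_test)

# Augmentation: pool performed and imagined trials before training.
augmented = LinearDiscriminantAnalysis().fit(
    np.vstack([X_speak, X_imag_train]),
    np.concatenate([y_speak, y_imag_train]),
)
acc_augmented = augmented.score(X_imag_test, y_imag_test)

print(f"transfer: {acc_transfer:.2f}, augmented: {acc_augmented:.2f}")
```

Because the synthetic modes share class structure, both strategies classify the noisier "imagined" mode above the 0.2 chance level, which is the intuition behind cross-mode transfer in the abstract.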
*(Creative Commons Attribution license.)*