9th Speech in Noise Workshop, 5-6 January 2017, Oldenburg

Decoding speaker attendance from EEG data using deep machine learning in continuous speech

Tobias de Taillez(a), Birger Kollmeier(b), Bernd Meyer(b)
University of Oldenburg, Germany

(a) Presenting
(b) Attending

Previous research has investigated whether signals obtained from EEG can be used to predict which speaker is attended in an acoustic scene. The long-term goal is to provide solutions for hearing aid users through EEG-based speaker selection or optimization. In this work, we analyze EEG data from listeners in a two-speaker scenario and test whether algorithms borrowed from automatic speech recognition (ASR) can be applied to estimate which speaker was attended. Specifically, a deep neural network (DNN) is trained to predict the envelope of the attended speech signal. We compare our results to previous research [Mirkovic et al., 2015], in which a linear model was applied to obtain the estimate. The DNN-based approach requires shorter data segments to be analyzed for a decision, which is partially explained by the information transfer in the experiment being four times higher than for the linear model.

Mirkovic B, Debener S, Jaeger M, De Vos M. "Decoding the attended speech stream with multi-channel EEG: implications for online, daily-life applications." Journal of Neural Engineering 12(4) (2015): 046007.
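To make the described pipeline concrete, a minimal sketch is given below. It is not the authors' implementation: the network size, the input features (EEG channels times time lags), the training data (random stand-in arrays), and the correlation-based decision rule are illustrative assumptions, written here in Python with PyTorch.

import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical dimensions: 64 EEG channels, 8 time lags per analysis window.
n_windows, n_features = 2000, 64 * 8
eeg = torch.randn(n_windows, n_features)   # stand-in EEG features
env_attended = torch.randn(n_windows)      # attended-speech envelope
env_ignored = torch.randn(n_windows)       # ignored-speech envelope

# Small feed-forward net mapping EEG features to one envelope sample.
model = nn.Sequential(
    nn.Linear(n_features, 128), nn.Tanh(),
    nn.Linear(128, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(20):
    opt.zero_grad()
    pred = model(eeg).squeeze(1)
    loss = loss_fn(pred, env_attended)     # train toward the attended envelope
    loss.backward()
    opt.step()

# Assumed decision rule: the speaker whose envelope correlates more strongly
# with the envelope reconstructed from EEG is taken as the attended speaker.
with torch.no_grad():
    pred = model(eeg).squeeze(1).numpy()

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

attended = "A" if corr(pred, env_attended.numpy()) > corr(pred, env_ignored.numpy()) else "B"
print("decoded attended speaker:", attended)

In practice the correlation would be computed over short segments, and the segment length needed for a reliable decision is the quantity on which the DNN and the linear model are compared.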

