Special Session 2: Multilingual and Low-Resourced Languages Speech Processing in Human-Computer Interaction
Multilingual speech processing has been an active research topic for many years. Over the last few years, the availability of large datasets in a wide variety of languages and the convergence of speech recognition and synthesis approaches towards statistical parametric techniques (mainly deep neural networks) have moved this field to the center of research interest, with special attention to low- or even zero-resourced languages. In this special session, we call for research papers in the field of multilingual speech processing. The topics include (but are not limited to): multilingual speech recognition and understanding, dialectal speech recognition, cross-lingual adaptation, text-to-speech synthesis, spoken language identification, speech-to-speech translation, multi-modal speech processing, keyword spotting, emotion recognition, and deep learning in speech processing.
The special session is organized by:
- Alexandros Lazaridis (Swisscom, Switzerland)
- Ivan Himawan (Queensland University of Technology, Australia)
- Blaise Potard (CereProc Ltd, Edinburgh, UK)
- Kate Knill (Cambridge University Engineering Department, UK)
- Peter Bell (University of Edinburgh, UK)
Alexandros Lazaridis, alexandros.lazaridis@swisscom.com
Ivan Himawan, i.himawan@qut.edu.au
Blaise Potard, blaise@cereproc.com
Kate Knill, kate.knill@eng.cam.ac.uk
Peter Bell, peter.bell@ed.ac.uk