Ever caught yourself asking, "What's new in speech tech this year?" Well, buckle up: INTERSPEECH 2022 served up innovation on a silver platter.
📅 Date & Time
Held from September 18 to September 22, 2022, the event packed all five days full, from morning keynotes through late-afternoon poster sessions.
📍 Location & Country
It took place at Songdo ConvensiA in Incheon, South Korea, bringing together researchers, developers, and enthusiasts from around the world.
🌐 What’s This Event About?
INTERSPEECH is the largest global conference on speech science and technology, where academics and industry professionals converge to share breakthroughs in speech recognition, synthesis, emotion detection, and more. This year’s theme—“Human and Humanizing Speech Technology”—prompted a real focus on making machines that don’t just “understand,” but actually sound… well, human.
👥 Who Should Attend?
- Speech tech enthusiasts (you know the type)
- Researchers across phonetics, signal processing, AI
- Dev teams building voice assistants, accessibility tools
- Product managers and strategists pushing voice-powered apps
- Basically anyone fascinated by how we make computers talk, listen, and respond.
🎓 Key Topics Covered
This year's program spanned everything from theory to deployment:
- Models of speech production & perception
- Phonetics, prosody, paralinguistics (tone, personality, etc.)
- Speaker recognition, language ID systems
- Signal processing for noise reduction, enhancement, and separation
- Speech coding, synthesis, and generation
- Automatic speech recognition—robust, neural, low-resource
- Dialogue systems, translation, summarization
- Building speech tech for disorders or accessibility
🔢 By the Numbers
- 1,100+ presentations (oral and poster) of new research
- Over 30 tutorial and special sessions, plus industry showcases and demos
- 4 keynote talks and 6 survey talks offering broad, big-picture views of the field
- Packed full of “Show & Tell” demos, challenges, and panel debates
🎙️ Anecdote Moment
I remember grabbing coffee in the lobby and overhearing someone whisper, “Did you catch the paper on self‑supervised speech models?” Ten minutes later, they were skimming code samples with an energetic grin—so infectious, I dove in too. Makes sense—this place wasn’t just talk; it felt like co‑creation in motion.
✅ Why You Should Care
- Hear about self-supervised learning breakthroughs already making waves (there's a quick code sketch after this list)
- Explore practical methods for improving speech recognition and synthesis
- See tools for sentiment, emotion, health, and accessibility in speech tech
- Network and collaborate at one of the most vibrant voice-tech hubs globally
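Want a hands-on taste of the self-supervised models everyone was buzzing about? Here's a minimal sketch using the open-source Hugging Face `transformers` library and a wav2vec 2.0-style checkpoint. The model choice and the audio file path are my own assumptions for illustration, not tied to any specific INTERSPEECH paper.

```python
# Minimal sketch: transcribe a short clip with a model whose encoder was
# pretrained via self-supervised learning (wav2vec 2.0 family).
# Assumes `pip install transformers torch` and ffmpeg available for decoding.
from transformers import pipeline

# Load a publicly available English ASR checkpoint (example choice).
asr = pipeline(
    "automatic-speech-recognition",
    model="facebook/wav2vec2-base-960h",
)

# "sample.wav" is a placeholder path; point it at your own recording.
result = asr("sample.wav")
print(result["text"])
```

Nothing fancy, but it's a quick way to poke at the kind of pretrained speech models that dominated the self-supervised learning sessions.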
💬 Your Turn
Did you attend? Or plan to watch recordings? Drop a comment—tell us which speech innovation had you nodding, or where we still need more breakthroughs. Let’s geek out together!