Silent Speech & Sub-Vocal research is picking up. EMG has been able to detect speech since the 70s, but it's been hard to make it useful. Now, though? There are even instructions for making your own. Check out some papers and AlterEgo from MIT for a fancy demo. It's AI, aka "Applied Statistics", making this possible, and I feel it's this aiding of access, more than areas of language, that will have the biggest impact on our field.