Stephen Hawking's AAC setup in closeup

At MOSI in Manchester today, I saw Stephen Hawking's chair and other neat things from his office in Cambridge. Note the spaghetti of cables. It's tricky to figure out where all the leads go, but I'll give it a wild guess: the plugs look like either mini XLR or the old PS/2-style serial leads. Some questions, though. I'm unsure what the "Filter" box connects to, and why is the Words+ box even used? I thought the connection with Intel meant he was using ACAT. Why is that Words+ Softkey box the parallel version when there is clearly a lot of USB kicking about too? And why are we plugging into something behind the chair when surely the tablet has speakers anyway? There are as many questions as answers.


🚀 Calling All AAC Testers for a new release of our Google Cloud/Azure TTS and Translation tool

We’ve given our little Translate and Speak app for Windows a complete makeover. The app not only translates text but also speaks messages from the message window using online services. We’ve introduced a user-friendly GUI to simplify configuration, extended support to include paid translation services, and, here’s the grand reveal… you can now let any Windows AAC app leverage Google Cloud TTS or Azure TTS, which massively opens up the possibility of using AAC in more languages. You can even use these services without translation, just for speech.

Get your hands on the early version here. BUT - just a heads up, you will need to be comfortable obtaining API keys for Azure or Google Cloud. Check out our (somewhat outdated) docs for guidance. And ping me some feedback before we release it properly. Prizes for anyone who can make me a nice demo video!
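If you’re wondering what “obtaining keys” actually gets you, here’s a minimal sketch of calling Google Cloud TTS from Python using the official client library. This is not our app’s code - just an illustration, assuming you’ve installed `google-cloud-texttospeech` and pointed `GOOGLE_APPLICATION_CREDENTIALS` at a service-account key:

```python
# Minimal sketch: synthesize speech with Google Cloud TTS (not the app's actual code).
# Assumes: pip install google-cloud-texttospeech, and GOOGLE_APPLICATION_CREDENTIALS
# points at a service-account JSON key file.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="Bonjour, comment ça va ?"),
    voice=texttospeech.VoiceSelectionParams(language_code="fr-FR"),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    ),
)

with open("output.mp3", "wb") as out:
    out.write(response.audio_content)  # play this back with any audio player
```

The Azure side follows the same pattern (get a key, pick a voice, send text, get audio back); the point of the new GUI is that you shouldn’t have to touch code like this at all.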


The new voice creation tool and Live Speech (the built-in TTS available from any screen) in iOS. My voice is definitely… clunky… but bear in mind I recorded this at about 2am in an Airbnb and didn’t want to wake the neighbours. 15 minutes of recording.


Need an AAC/AT textbook but nowhere near a library, or short of money? This is awesome from the Internet Archive: borrow a textbook for free, an hour at a time. archive.org/details/i… (and my personal fave: archive.org/details/a…)


Knocked this up quickly. Type in one language, translate within your AAC app, speak it out loud in that language and paste it back. Pretty configurable. The lag is due to using Google TTS, but it will work with offline TTS systems too - e.g. SAPI, Coqui and eSpeak. code

Quick demo video here
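The linked code is the real thing; purely to illustrate the general loop (grab text, translate it, speak it, paste it back), here’s a rough sketch. The libraries - pyperclip, deep-translator and pyttsx3 (which wraps SAPI on Windows) - are my assumptions, not necessarily what the actual tool uses:

```python
# Rough illustration of the copy -> translate -> speak -> paste-back loop.
# Assumed libraries (not necessarily what the linked code uses):
#   pip install pyperclip deep-translator pyttsx3
import pyperclip
import pyttsx3
from deep_translator import GoogleTranslator

def translate_and_speak(target_lang: str = "es") -> None:
    original = pyperclip.paste()  # text copied from the AAC app's message window
    translated = GoogleTranslator(source="auto", target=target_lang).translate(original)

    engine = pyttsx3.init()       # offline TTS (SAPI on Windows)
    engine.say(translated)
    engine.runAndWait()

    pyperclip.copy(translated)    # ready to paste back into the AAC app

if __name__ == "__main__":
    translate_and_speak("es")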


From our fab OT student today

So are you telling me my phone can autocorrect, but that’s not done on AAC devices… and the user always has to select their predictions?

Hmmm. I wonder if I was missing something (and not just from the world of research, like Keith Vertanen’s demos)… anyone?

So what’s the difference between autocorrect and prediction? Prediction software has been around for years. In essence, the software displays predictions and, in some way, you have to select the predicted word or sentence. In some software the selection technique is reduced (e.g. in Microsoft 365 products a swipe to the right now accepts the suggested word or phrase), but you still have to actively look for it. More recently, autocorrection software has started to appear. If you ask me, it makes a lot of sense (some suggest it’s a terrible idea for learning language, but for dyslexia support it looks amazing). You reduce the visual search aspect and just type; any mistakes or typos, it tries to correct. It’s not for everyone - but in AAC it seems like a great idea: focus on what you can get out and let the thing correct itself.
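A toy illustration of the difference (entirely my own sketch, nothing to do with any particular AAC product): prediction offers prefix matches the user still has to scan and select, while autocorrection silently swaps a finished word for its closest dictionary neighbour. Real systems use proper language models, not just edit distance.

```python
# Toy sketch of prediction vs autocorrection over a tiny word list.
# Purely illustrative; real systems rely on language models, not edit distance alone.
import difflib

VOCAB = ["because", "before", "behind", "communicate", "communication", "computer"]

def predict(prefix: str, n: int = 3) -> list[str]:
    """Prediction: offer candidates the user must scan and actively select."""
    return [w for w in VOCAB if w.startswith(prefix.lower())][:n]

def autocorrect(word: str) -> str:
    """Autocorrection: quietly replace a typo with its closest known word."""
    matches = difflib.get_close_matches(word.lower(), VOCAB, n=1, cutoff=0.6)
    return matches[0] if matches else word

print(predict("comm"))           # ['communicate', 'communication']
print(autocorrect("comunicte"))  # 'communicate'
```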


I am reviewing the “traffic light” system for AAC by Karen Erickson for our ATU this week. It’s similar to ideas in an Activity & Occupation Analysis - but much more reduced and focused on AAC activities in a day. Redrawn here - maybe too reduced from the original concept. Use at your peril.


A fascinating couple of papers that Simon Judge notes in his blog about the design and abandonment of AAC systems:

“the role of communication aids in an individuals’ communication is subtle and not a simple binary ‘used or not used’”.

What I find really neat is Zoë’s paper and the creation of a model:

“This model consists of a communication loop – where experiences of prior communication attempts feed into decisions about whether to communicate and what method to use to communicate – each of which were influenced by considerations of the importance of the message, the time taken, who the communication partner was, the environment the communication is taking place in (physical and social) and the personal context and preferences of the individual.”

The “choice” of when and how much to use an AAC device is down to the user. We shouldn’t see this as abandonment.


Just watched Edward Chang’s talk about their BRAVO project at #bci2023. It’s next level. They are not the only team doing this (e.g. Frank Willett’s group), but they are one of the few making significant improvements over current AAC solutions… (even if it’s for n=2). The video of Ann writing by thought alone at that rate - wow.

This was pre-publication. Watch their page for updates: changlab.ucsf.edu/overview


Silent Speech and Sub-Vocal Communication. The next big technology breakthrough for AAC?

Silent speech and sub-vocal research is picking up. EMG has been able to detect speech since the 70s, but it’s been hard to make it useful. Now, though? There are even instructions for making your own. Check out some papers and AlterEgo from MIT for a fancy demo. It’s AI (aka “applied statistics”) making this possible - and I feel it’s this kind of access aid, more than the language side, that will have the biggest impact on our field.


Sebastian Pape has been doing a ton of work on the original Dasher code base for his research on Dasher in VR. It’s pretty awesome. Some of the output can be seen here (and watch the video) - you can also watch an initial 3D demo from our meeting here. dasher.acecentre.net


Over the next few weeks I’m fortunate to be representing Ace Centre at two international conferences: the BCI Meeting and ISAAC, talking about our audit of text entry rates in AAC and a lot about Dasher. Hope to see you there if you’re going too!


Last week we released TextAloud on the App Store. You can read our blog for the full details of what it’s all about and why, but in brief, it’s v1 of a more extensive app we want to create to better support people through long streams of TTS. We have several ideas for this - but most importantly, we are putting users at the heart of the design process at all stages (using the double diamond approach). Get in touch if you want to be part of the focus group. One idea, though, is using SSML to help mark up a speech. You can see one implementation idea below.

There’s a much longer post due from me about why SSML hasn’t been used in AAC, but in short - it’s overdue.
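To make that concrete - and this is not the TextAloud implementation, just a sketch of the kind of markup I mean - SSML lets you put pauses, emphasis and pacing into the text itself and hand it to any SSML-aware engine. Here it’s the Azure Speech SDK purely as an example; the key, region and voice name are placeholders:

```python
# Sketch only: SSML with explicit pauses, emphasis and pacing, spoken via the
# Azure Speech SDK (pip install azure-cognitiveservices-speech).
# Key, region and voice are placeholders; TextAloud may do this quite differently.
import azure.cognitiveservices.speech as speechsdk

ssml = """
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-GB">
  <voice name="en-GB-SoniaNeural">
    Good evening everyone. <break time="700ms"/>
    Tonight I want to talk about <emphasis level="strong">communication</emphasis>.
    <break time="500ms"/>
    <prosody rate="slow">Take your time with this next part.</prosody>
  </voice>
</speak>
"""

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
synthesizer.speak_ssml_async(ssml).get()  # plays through the default speaker
```

The appeal for long-form TTS is that the pauses and emphasis travel with the speech itself, rather than depending on the speaker hitting play at the right moments.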