Will Wade

Detecting handwriting - and outputting it as speech - with BCI

Frank Willett won the young researcher award for his work on speech BCI. Their approach is different - they detect the thought of writing out each letter. Not only is this pretty quick (writing around 65 wpm), they can even reconstruct the pen-tip movement to show a kind of handwriting. Wild. See here for some detail. You can also watch a video of it all (old, but basically the same work) youtu.be/SdlJ6wjJ7…
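For a flavour of how that kind of decode works, here's my own toy sketch - emphatically not the Willett lab's pipeline, and the channel counts and data below are made up: regress pen-tip velocity from binned neural firing rates, then integrate to get a trajectory you could plot as handwriting.

```python
# Toy sketch of a handwriting-style decode (NOT the actual pipeline):
# regress pen-tip velocity from binned firing rates, then integrate
# the predicted velocities into a "written" stroke.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
T, n_channels = 500, 192          # time bins, recording channels (assumed)
rates = rng.poisson(5, (T, n_channels)).astype(float)  # stand-in neural data
true_vel = rng.normal(0, 1, (T, 2))                    # stand-in pen velocities

# Fit a linear decoder: firing rates -> (vx, vy) per time bin
decoder = Ridge(alpha=1.0).fit(rates, true_vel)
pred_vel = decoder.predict(rates)

# Integrate velocity over time to recover a pen-tip trajectory
trajectory = np.cumsum(pred_vel, axis=0)
print(trajectory.shape)  # (500, 2): an x/y path you could plot as handwriting
```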


I met some OTs at #bci2023 ! Whoop! Check out the work from Canada looking at paediatric use of BCI (CP, Rett syndrome and other diagnoses) for mobility and play. (I took some pictures, but their own tweets are better! twitter.com/sneakysho… and twitter.com/sneakysho…)


Adoption threshold is inversely related to functional novelty, i.e. “If you are replacing a function already addressed by existing assistive technology, it needs to work substantially better.” Brian Dekleva at the workshop on design and home use of BCI #bci2023


Really neat talks today, including David Moses and Christian Herff at #BCI2023. A lot of chat about “reading inner speech”. BCI cannot read your random thoughts - that activity isn’t in the speech motor cortex (possibly parietal, and even then it would be (impossible?) to read). There’s confusion about this, not helped by people using different terms. A shared definition is needed.


Just watched Edward Chang’s talk about their BRAVO project at #bci2023. It’s next level. They are not the only team doing this (e.g. Frank Willett), but they are one of the few making significant improvements over current AAC solutions (even if it’s for n=2). The video of Ann writing by thought alone at this rate: wow.

This was pre-publication. Watch their page for updates changlab.ucsf.edu/overview


Submitted 8 pieces of feedback for iOS 17. Under the terms of the beta testing, you can’t share much, but I will say this: writing/autocorrect is now AMAZING! And all the personal voice creation is neat. BUT… still no headmouse under AssistiveTouch! Why?!


Silent Speech and Sub-Vocal Communication. The next big technology breakthrough for AAC?

Silent Speech & Sub-Vocal research is picking up. EMG has been able to detect speech since the 70s, but it’s been hard to make it useful. Now though? There are even instructions for making your own. Check out some papers and AlterEgo from MIT for a fancy demo. It’s AI, aka “applied statistics”, making this possible - and I feel it’s access, more than language, where this will have the biggest impact on our field.
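To make the EMG idea concrete, here’s a toy sketch of the classic pipeline - band-pass the signal, pull out simple window features, classify. Everything below is a stand-in (fake data, invented labels); real systems need multi-channel hardware, careful electrode placement and far better models.

```python
# Toy sketch of a surface-EMG silent-speech pipeline (illustrative only)
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

FS = 1000  # sample rate in Hz (assumed)

def features(emg):
    """Band-pass 20-450 Hz, then summarise a window with simple stats."""
    b, a = butter(4, [20, 450], btype="band", fs=FS)
    x = filtfilt(b, a, emg)
    return [np.abs(x).mean(),                    # mean absolute value
            np.sqrt((x ** 2).mean()),            # RMS energy
            (np.diff(np.sign(x)) != 0).mean()]   # zero-crossing rate

rng = np.random.default_rng(1)
# Stand-in data: 40 one-second windows for two "words" (labels 0 and 1)
X = [features(rng.normal(0, 1 + label, FS)) for label in (0, 1) for _ in range(20)]
y = [label for label in (0, 1) for _ in range(20)]

clf = SVC().fit(X, y)
print(clf.predict([features(rng.normal(0, 2, FS))]))  # -> which "word"?
```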


So, Apple thoughts today from WWDC. The big news is still the AT announcements for iOS 17. But the long game is in the ARKit/MLKit work that has gone into making the ski goggles work (with hand, eye and voice). If that works well, it has excellent uses for AT across their other platforms.


Thank the Lord Brexit hasn’t caused long queues entering our neighbours 🤬 (sarcasm, if you didn’t realise). I’m arriving in Brussels (with my bike - if it’s made it through the journey) for the BCI symposium. (Postscript: it took an hour 20 to get through.)



I’m reading about some of the most recent work in BCI. Much of it is academic, but this is an easy read from NeuralEchoLabs on gaming with BCI. Gaming is interesting as it’s not as critical as AAC and has plenty of scope to play with UI. (And for a more academic read, see this paper.)


Fun fact: Brussels sprouts have tasted better since the 90s because breeders started cross-pollinating different varieties to remove the chemicals that caused the bitterness. From NPR’s Consider This.


Sebastian Pape has been doing a ton of work on the original Dasher code base for his research on Dasher in VR. It’s pretty awesome. Some of the output can be seen here (and watch the video) - you can also watch an initial 3D demo from our meeting here. dasher.acecentre.net


Over the next few weeks I’m fortunate to be representing Ace Centre at two international conferences: the BCI Meeting and ISAAC, talking about our audit of text entry rates in AAC and a lot about Dasher. Hope to see you there if you’re going too!


Last week we released TextAloud on the App Store. You can read our blog for the full details of what it’s all about and why, but in brief, it’s v1 of a more extensive app we want to create to better support people with long streams of TTS. We have several ideas for this - but most importantly, we are putting users at the heart of the design process at all stages (using the double-diamond approach). Get in touch if you want to be part of the focus group. One idea, though, is using SSML to help mark up a speech. You can see one implementation idea below.
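To give a rough flavour of the SSML idea (my illustration, not the app’s actual code), here’s how a prepared speech might be marked up so the engine pauses and emphasises where the speaker wants, rather than reading one flat stream:

```python
# Rough sketch of SSML markup for a prepared speech (illustrative only)
ssml = """<speak>
  Good evening everyone. <break time="800ms"/>
  Tonight I want to talk about <emphasis level="strong">communication</emphasis>.
  <break time="500ms"/>
  <prosody rate="90%">Not the technology. The people.</prosody>
</speak>"""

# Most cloud TTS engines accept SSML directly, e.g. via a (hypothetical)
# helper: speak(ssml)
print(ssml)
```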

There’s a much longer post due from me about why SSML hasn’t been used in AAC, but in short: it’s overdue.


The good and bad of Apple’s Personal Voice system

There’s a lot of chat about the newly announced Personal Voice feature from Apple. A lot of people are screaming about how awful an idea it is without having a clue about the field of AAC. There’s some good coverage in places like MacStories interviewing David at AssistiveWare, and this piece from FastCompany, which explain the background well. Let’s be frank: it’s either mildly useful or a major disruptor to a field of companies and research groups doing similar things at cost. You can debate the business situation, but for end users it’s a win in my book. I do have some concerns, though, and they’re about portability. If you create a voice, it shouldn’t be locked to one system. If your access needs change - say you need eye gaze, and your eye-gaze system isn’t brilliantly supported on iOS - then what happens to your voice? Tough (I imagine).

Let’s see. But my money is riding on no portability out of iOS.

See also Michael Tsai’s concerns:

Will there be a way to export your Personal Voice so that you aren’t totally reliant on iCloud to preserve it? Many of these users will not be able to just re-record new prompts if something goes wrong or if they need to switch to a different Apple ID.

They aren’t the first to face this problem. Smartbox recently built a solution with SpeakUnique for regional voices. As much as it’s needed, I’m not aware of any way to use those voices on other platforms.


Apple create their own on-device voice banking solution

What the .. (from the Apple PR on accessibility)

“For users at risk of losing their ability to speak — such as those with a recent diagnosis of ALS (amyotrophic lateral sclerosis) or other conditions that can progressively impact speaking ability — Personal Voice is a simple and secure way to create a voice that sounds like them. Users can create a Personal Voice by reading along with a randomized set of text prompts to record 15 minutes of audio on iPhone or iPad. This speech accessibility feature uses on-device machine learning to keep users’ information private and secure, and integrates seamlessly with Live Speech so users can speak with their Personal Voice when connecting with loved ones.”

Now we find out who the first company will be to make a system-wide third-party TTS voice (a developer SDK for creating system-wide voices was released in iOS 16)!


Did you know… that children start forgetting early childhood around the age of 7? (rubbish source). It’s called childhood amnesia and it’s down to synaptic pruning. The way to think of this: when you are born, you start filling up RAM. Everything gets logged. Then it fills up and the brain goes, “OK, what shall we dump? That birthday party? Gone. Ahh, but that horrific event - gotta keep that, don’t want to repeat that - I’ll pop it in the ROM.” And then repeat, for the rest of your life… Take pics. Lots of pics. (More source)


This looks super neat. We’ve been looking into detecting sub-vocal communication for a while. If this really does work, it’s a far easier technique than EMG and should be a game changer. See the Cornell PR and the paper for the details.


Yesterday I won the bake off at work! You too can win with this rhubarb & custard cake recipe! (I don’t know why I’m doing the robot in this pic) (money raised for Ukraine🇺🇦)


So Project Gameface has been released by Google. I truly hope it lasts longer than the 5 minutes a lot of Google projects last. I have a weird hope it will, as it’s been fully open-sourced (and complete) from day 1 - but then again, it’s really half-baked: no docs, no installer, and a very small range of face shapes. The promo video cost a lot more than the code… blog.google/technolog… . Read our comments in the issues: github.com/google/pr…
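For what it’s worth, the core face-tracking-to-pointer idea is pretty small. Here’s my own minimal sketch (not Google’s code) using MediaPipe FaceMesh plus pyautogui - an assumption on my part, not what Gameface ships - to move the cursor with the nose tip:

```python
# Minimal sketch of face-tracking cursor control (illustrative, not Gameface)
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
cap = cv2.VideoCapture(0)

with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            nose = results.multi_face_landmarks[0].landmark[1]  # nose tip
            # Landmarks are normalised 0-1, so scale to screen coordinates
            pyautogui.moveTo(nose.x * screen_w, nose.y * screen_h)
        cv2.imshow("camera", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break
cap.release()
```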


Toby Churchill is a legend. If you are new to the world of AAC, this is a classic clip from Tomorrow’s World.


Getting closer to this BCI malarkey: “Brain scans can translate a person’s thoughts into words”


Number of neurons in the brain: 86 billion. How do we know? Step 1: “pestle & mortar” a brain. Step 2: count the cells in a sample and scale up. Simples!
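The scaling step is just this arithmetic - my illustrative numbers, not the actual study’s data:

```python
# Back-of-envelope version of the isotropic fractionator (made-up numbers):
# homogenise the brain into a suspension of nuclei, count a tiny diluted
# sample, then scale back up to the whole suspension.
suspension_ml = 40.0      # whole brain as a nuclei "soup"
dilution = 1_000          # sample diluted 1000x before counting
sample_ml = 0.0001        # volume actually counted (a hemocytometer square)
nuclei_counted = 215      # nuclei seen in that square

per_ml = nuclei_counted / sample_ml * dilution   # 2.15e9 nuclei per ml
total = per_ml * suspension_ml
print(f"{total:,.0f}")    # 86,000,000,000 -> the famous 86 billion
```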


Saw this on a visit. Colleagues said I had to take a pic of it, as it’s surely so close to my perfect number plate.