Stephen Hawking's AAC setup in closeup

At MOSI in Manchester today, I saw Stephen Hawking's chair and other neat things from his office in Cambridge. Note the spaghetti of cables. It's tricky to figure out where all the leads go, but I'll hazard a guess: the plugs look like either mini-XLR or old PS/2 serial leads. Some questions, though. What does the "Filter" box connect to, and why is the Words+ box even used? I thought the partnership with Intel meant he was using ACAT. Why is the Words+ Softkey box the parallel version when there is clearly a lot of USB kicking about too? And why plug into something behind the chair when surely the tablet has speakers anyway? There are as many questions as answers.


Correlating Sounds for a sound switch

Last week, I visited a client for work to test out a sound switch device. For one reason or another, the kit didn't pan out on the day (NB: it's highly possible it was me; I need to try again). But with the recordings we got, we can now do some fun work and try a mel-spectrogram correlation technique. It might just work: it certainly looks pretty reliable against background noise and talking. You can see our work in progress and try it yourself at github.com/acecentre…
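For the curious, the core idea can be sketched in plain NumPy. This is not the Ace Centre code, just a minimal illustration under assumptions: a hand-rolled mel filterbank, a synthetic 1 kHz tone burst standing in for the client's target sound, and a normalised cross-correlation of the template's log mel-spectrogram slid across the recording.

```python
import numpy as np

def stft_mag(x, n_fft=512, hop=128):
    """Magnitude spectrogram: frame the signal, window, FFT."""
    win = np.hanning(n_fft)
    frames = np.array([x[i:i + n_fft] * win
                       for i in range(0, len(x) - n_fft, hop)])
    return np.abs(np.fft.rfft(frames, axis=1)).T  # shape (freq, time)

def mel_filterbank(n_mels=20, n_fft=512, sr=8000):
    """Triangular filters spaced evenly on the mel scale."""
    hz2mel = lambda f: 2595 * np.log10(1 + f / 700)
    mel2hz = lambda m: 700 * (10 ** (m / 2595) - 1)
    pts = mel2hz(np.linspace(0, hz2mel(sr / 2), n_mels + 2))
    bins = np.floor(pts / (sr / 2) * (n_fft // 2)).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, c):
            fb[m - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):
            fb[m - 1, k] = (hi - k) / max(hi - c, 1)
    return fb

def mel_spec(x, sr=8000):
    """Log mel-spectrogram of a mono signal."""
    return np.log1p(mel_filterbank(sr=sr) @ stft_mag(x))

def correlate(sig_mel, tmpl_mel):
    """Normalised cross-correlation score at each frame offset."""
    T = tmpl_mel.shape[1]
    tn = tmpl_mel / (np.linalg.norm(tmpl_mel) + 1e-9)
    return np.array([np.sum(sig_mel[:, t:t + T] * tn)
                     / (np.linalg.norm(sig_mel[:, t:t + T]) + 1e-9)
                     for t in range(sig_mel.shape[1] - T + 1)])

# Demo with synthetic audio: a 0.3 s, 1 kHz tone burst buried in quiet noise.
sr = 8000
burst = np.sin(2 * np.pi * 1000 * np.arange(int(0.3 * sr)) / sr)
sig = 0.01 * np.random.default_rng(0).standard_normal(2 * sr)
sig[6400:6400 + len(burst)] += burst  # burst starts at frame 6400 / 128 = 50
scores = correlate(mel_spec(sig), mel_spec(burst))
best = int(np.argmax(scores))  # peak offset, in frames
```

Because the score is normalised, loud but differently shaped sounds (talking, background noise) correlate poorly, which is roughly why the approach held up against chatter in our recordings.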


I was lucky to see the latest iteration of the Colibri ("Hummingbird") from Colibri Interfaces a few weeks ago. It's a wireless head mouse and blink switch. They also have a free web-based Scanning Speller, which is accessible by blink from your browser. Portuguese only for now.


Midiblocks is a super neat idea: a block editor to program your gestures. Under the hood it's using Handsfree.js, another wrapper around MediaPipe, similar to EyeCommander and Project Gameface. Talking of which, I'll just leave this here. Eek. 👀 Not a great PR start for the Google project.


Had a great chat with our team of OTs about how we measure outcomes, and shared our old presentation from 2011, which still stands. Whatever happened to the adapted GAS for AT? Well, GAS Light looks interesting.

Outcomes in Occupational Therapy (& Assistive Technology) from will wade

Looking forward to delivering day 9 of our Assistive Technology Unit (with the University of Dundee) today. The focus is on Activity & Occupational Analysis, which I feel is an essential part of our AT assessment process. (Adapted from Activity & Occupational Analysis)

Steps to Activity Analysis

See also:


I met some OTs at #bci2023! Whoop! Check out the work from Canada looking at paediatric mobility and play (with CP, Rett syndrome and other diagnoses) using BCI. (I took some pictures, but their own tweets are better! twitter.com/sneakysho… and twitter.com/sneakysho…)


Submitted 8 pieces of feedback for iOS 17. Under the terms of the beta testing you can't share much, but I will say this: writing/autocorrect is now AMAZING, and the personal voice creation is neat. BUT… still no head mouse under AssistiveTouch! Why?!


Thank the Lord Brexit hasn't led to long queues entering our neighbours 🤬 (sarcasm, if you didn't realise). I'm arriving in Brussels (with my bike, if it's made it through the journey) for the BCI symposium. (Postscript: it took an hour and 20 minutes to get through.)


I'm reading about some of the most recent work in BCI. Much of it is academic, but this is an easy read from NeuralEchoLabs on gaming with BCI. Gaming is interesting because it's not as critical as AAC and there's much more scope to play with the UI. (For a more academic read, see "this paper".)


Sebastian Pape has been doing a ton of work on the original Dasher code base for his research on Dasher in VR. It's pretty awesome. Some of the output can be seen here (and watch the video), and you can also watch an initial 3D version from our meeting here. dasher.acecentre.net


Last week we released TextAloud on the App Store. You can read our blog for the full details of what it's all about and why, but in brief, it's v1 of a more extensive app we want to create to better support people with long streams of TTS. We have several ideas for this, but most importantly, we are putting users at the heart of the design process at all stages (using the double diamond approach). Get in touch if you want to be part of the focus group. One idea, though, is using SSML to help mark up a speech. You can see one implementation idea below.
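To give a flavour of the idea (this is illustrative only, not the app's actual format): standard SSML lets you annotate a prepared speech with pauses, emphasis, and pacing, which a TTS engine then honours during playback.

```xml
<speak>
  <p>Good evening everyone, <break time="500ms"/> and thank you for coming.</p>
  <p>Tonight I want to talk about <emphasis level="strong">communication</emphasis>.</p>
  <p><prosody rate="slow">Take your time. I certainly will.</prosody></p>
</speak>
```

Elements like `<break>`, `<emphasis>`, and `<prosody>` are part of the W3C SSML specification and are already supported by most major TTS engines, which is partly why it feels like low-hanging fruit for AAC.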

There's a much longer post due from me about why SSML hasn't been used in AAC, but in short: it's long overdue.