Will Wade
  • Nice trip to Wales (Brecon, to be exact) the other weekend - although the mosquito bites there were crazy 📷

    → 11:02 PM, Jul 6
  • Knocked this up quickly: type in one language, translate within your AAC app, speak it out loud in that language, and paste it back. Pretty configurable. The lag is due to using Google TTS, but it will work with offline TTS systems (e.g. SAPI, Coqui & eSpeak) - rough sketch of the idea below. code

    Quick demo video here
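
    For anyone curious how a pipeline like this hangs together, here's a minimal sketch - assuming deep-translator, pyperclip and pyttsx3 as stand-ins, which is not necessarily how the linked code actually does it:

    ```python
    # Sketch only: clipboard -> translate -> speak -> clipboard.
    # Assumes the deep-translator, pyperclip and pyttsx3 packages;
    # pyttsx3 drives an offline engine (SAPI on Windows, eSpeak on Linux),
    # which avoids the Google TTS lag mentioned above.
    import pyperclip
    import pyttsx3
    from deep_translator import GoogleTranslator

    def translate_and_speak(target_lang: str = "es") -> str:
        text = pyperclip.paste()  # whatever the user typed in their AAC app
        translated = GoogleTranslator(source="auto", target=target_lang).translate(text)
        engine = pyttsx3.init()   # offline TTS engine
        engine.say(translated)
        engine.runAndWait()
        pyperclip.copy(translated)  # ready to paste back into the message window
        return translated

    if __name__ == "__main__":
        translate_and_speak("es")
    ```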

    → 12:27 AM, Jul 5
  • Feedbin. Where have you been all my life? The answer to the demise of Google Reader, Twitter and now Reddit. I need RSS.. but not everyone supports it. Feedbin takes your email subscriptions and turns them into one readable bucket. Fab.

    → 3:50 PM, Jul 3
  • Accessibility helps everyone. Why do 50% of Americans watch TV with subtitles? Whatever the reason, this accessibility feature - primarily designed for those with difficulty hearing - is helpful for all. That's awesome.

    → 7:48 AM, Jul 3
  • Midiblocks is a super neat idea: a block editor to program your gestures. Under the hood it's using HandsFreeJs, which is another wrapper around MediaPipe - similar to EyeCommander and Project Gameface. Talking of which, I'll just leave this here. Eek. 👀 Not a great PR start for the Google project.

    → 12:29 AM, Jun 29
  • Project Gameface was announced with fanfare - but the actual experience isn't very “complete”. There are a ton of issues & installing it is not fun. We made an installer at least, which helps.. a bit. (Warning: it's not signed. You may have to run it from a terminal window.)

    → 12:13 AM, Jun 29
  • Just for fun really - Dasher in visionOS. I think this totally needs rethinking for this platform - particularly given the hand detection built into visionOS.

    (“Must not get distracted.. must not get distracted..” )

    → 8:51 AM, Jun 23
  • From our fab OT student today:

    So are you telling me my phone can autocorrect, but that's not done on AAC devices.. and the user always has to select their predictions?

    Hmmm. I wonder if I was missing something (and not from the world of research, like Keith Vertanen's demos).. anyone?

    So what's the difference between autocorrect and prediction? Prediction software has been around for years. In essence, the software displays predictions, and in some way you have to select the predicted word or sentence. In some software the selection technique is reduced (e.g. in Microsoft 365 products, a swipe to the right now accepts the suggested word/phrase), but you still have to actively look for it. More recently, autocorrection software has started to appear. If you ask me, it makes a lot of sense (some suggest it's a terrible idea for learning language.. but for dyslexia support it looks amazing). You reduce the visual search aspect and just type; any mistakes or typos it tries to correct. It's not for everyone - but in AAC it seems like a great idea. Focus on what you can get out and let the thing correct itself. (A toy sketch of the difference is below.)
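
    To make that concrete, here's a toy sketch (just the Python standard library's difflib against a tiny vocabulary - not how any real AAC or keyboard engine does it): the matching step is identical, the difference is who makes the final choice.

    ```python
    # Toy illustration: prediction offers candidates for the user to select;
    # autocorrect silently applies the best candidate, so there is no visual search.
    import difflib

    VOCAB = ["because", "yesterday", "tomorrow", "morning", "hungry", "thirsty"]

    def predict(word: str, n: int = 3) -> list[str]:
        """Prediction: return candidates - the user still has to look and pick one."""
        return difflib.get_close_matches(word, VOCAB, n=n, cutoff=0.6)

    def autocorrect(word: str) -> str:
        """Autocorrect: swap in the closest candidate automatically, or leave it alone."""
        matches = difflib.get_close_matches(word, VOCAB, n=1, cutoff=0.6)
        return matches[0] if matches else word

    print(predict("tomorow"))      # candidates shown to the user, e.g. ['tomorrow']
    print(autocorrect("becuase"))  # 'because', applied without the user choosing
    ```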

    → 5:55 PM, Jun 21
  • I am reviewing the “traffic light” system for AAC by Karen Erickson for our ATU this week. It’s similar to ideas in an Activity & Occupation Analysis - but much more reduced and focused on AAC activities in a day. Redrawn here - maybe too reduced from the original concept. Use at your peril.

    → 5:40 PM, Jun 21
  • What duck are you today? On the rubber duck scale I think I'm feeling a bit of a 4.. (if not on that scale, then maybe a 4 on this page: www.theatlantic.com/photo/202… )

    → 5:10 PM, Jun 20
  • A fascinating couple of papers that Simon Judge notes in his blog about the design and abandonment of AAC systems:

    “the role of communication aids in an individuals’ communication is subtle and not a simple binary ‘used or not used’”.

    What I find really neat is Zoë's paper and the creation of a model:

    “This model consists of a communication loop – where experiences of prior communication attempts feed into decisions about whether to communicate and what method to use to communicate – each of which were influenced by considerations of the importance of the message, the time taken, who the communication partner was, the environment the communication is taking place in (physical and social) and the personal context and preferences of the individual”

    The “choice” of when and how much to use an AAC device is down to the user. We shouldn’t see this as abandonment.

    → 10:04 AM, Jun 20
  • Having a great chat with our team of OTs about how we measure outcomes and sharing our old presentation from 2011, which still stands. Whatever happened to the adapted GAS for AT? Well, GAS Light looks interesting.

    Outcomes in Occupational Therapy (& Assistive Technology) from Will Wade
    → 4:32 PM, Jun 19
  • Looking forward to delivering day 9 today of our Assistive Technology Unit (with the University of Dundee). The focus is on Activity & Occupational Analysis - which I feel is an essential part of our AT assessment process. (Adapted from Activity & Occupational Analysis)

    Steps to Activity Analysis

    See also:

    • Occupational and Activity Analysis by Heather Thomas | Hatchards
    • OTPF-4 Domain and Process
    • Using Task Analysis to Support Inclusion and Assessment in the Classroom - M. Addie McConomy, Jenny Root, Taryn Wade, 2022
    → 11:58 PM, Jun 14
  • SLTs having fun with their laminators.. ho ho..

    (Credit actually to Roz Thompson, aka Trash sells Trash - not actually an SLT..)

    Lady pretending to laminate her own hands kneeling on the floor behind laminator
    → 9:41 PM, Jun 14
  • “While deep contemplation is useful for problem-solving, overthinking can impair these abilities, leading us to act impulsively and make counterproductive choices.”

    From The Paradoxical Nature of Negative Emotions.

    → 11:19 AM, Jun 13
  • Detecting handwriting - and outputting it as speech - with BCI

    Frank Willets won the young researcher award for his work on BCI speech (cBCI). Their approach is different - they are detecting the thought of writing out each letter. Not only is this pretty quick (around 65 wpm), they can even remodel the pen-tip angular movement to show a kind of handwriting. Wild. See here for some detail. You can watch a video of it all (old, but basically the same stuff): youtu.be/SdlJ6wjJ7…

    → 6:04 PM, Jun 9
  • I met some OTs at #bci2023! Whoop! Check out the work from Canada looking at paediatric BCI (CP, Rett syndrome and other diagnoses) for mobility and play. (I took some pictures but their own tweets are better! twitter.com/sneakysho… and twitter.com/sneakysho…)

    → 5:17 PM, Jun 8
  • Adoption threshold is inversely related to functional novelty, i.e. “If you are replacing a function already addressed by existing assistive technology, it needs to work substantially better.” - Brian Dekleva at the workshop on design and home use of BCI, #bci2023

    → 2:24 PM, Jun 8
  • Really neat talks today, including David Moses and Christian Herff at #BCI2023. A lot of chat about “reading inner speech”. BCI cannot read your random thoughts - that's not in the speech motor cortex (possibly parietal, and even then it would be (impossible?) to read). There's confusion about this, not helped by people using different terms. A definition is needed.

    → 11:05 PM, Jun 7
  • Just watched Edward Chang's talk about their BRAVO project at #bci2023. It's next level. They are not the only team doing this (e.g. Frank Willets), but they are one of the few making significant improvements over current AAC solutions.. (even if it's for n=2). The video of Ann writing by thought alone at this rate.. wow.

    This was pre-publication. Watch their page for updates changlab.ucsf.edu/overview

    → 2:25 PM, Jun 7
  • Submitted 8 pieces of feedback for iOS 17. Under the terms of the beta testing you can't share much, but I will say this: writing/autocorrect is now AMAZING, and all the personal voice creation is neat. BUT.. still no head mouse under AssistiveTouch. I'm sure there is sound logic for this.

    → 7:18 AM, Jun 6
  • Silent Speech and Sub-Vocal Communication. The next big technology breakthrough for AAC?

    Silent Speech & Sub-Vocal research is picking up. EMG has been able to detect speech since the 70s, but it's been hard to make it useful. Now though? There are even instructions for making your own. Check out some papers and AlterEgo from MIT for a fancy demo. It's AI (aka “Applied Statistics”) making this possible - and I feel it's this aiding of access, rather than the language side, that will have the biggest impact on our field.

    → 12:02 AM, Jun 6
  • So, Apple thoughts today from WWDC. The big news is still the AT announcements for iOS 17. But the long game is in the ARKit/MLKit work that has gone into making the ski goggles work (with hand, eye, voice). If that works well, it has excellent uses for AT across their other platforms.

    → 8:17 PM, Jun 5
  • Thank the Lord Brexit hasn't affected long queues entering our neighbours 🤬 (sarcasm, if you didn't realise). I am arriving in Brussels (with my bike - if it's made it through the journey) for the BCI symposium. (Postscript: it took an hour and 20 minutes to get through.)

    → 1:22 PM, Jun 5