Will Wade
  • Cooking Up an AAC Correction Tool: A Recipe for Efficient Communication. Kinda.

Alright, gather ‘round, aspiring communication chefs! We’re about to embark on a culinary adventure – one where we cook up something genuinely useful for the world of Augmentative and Alternative Communication (AAC). This post dives into how we built tools to automatically correct sentences – specifically tackling tricky issues like typos and words mashed together without spaces. My goal here is to explain this in a way that makes sense, even if you’re not a tech wizard! I’ve gone with a cooking analogy. Now, those of you who know me know I love an analogy, but also know that most of the time they are awful. This may be one of them, and I may take it a bit too far in this post..

    What we did: We explored two main paths to auto-correction, like choosing between baking a cake from scratch or using a fancy pre-made mix:

    1. Fine-tuning a Language Model: We took a small but powerful AI model (called t5-small) and taught it to correct noisy sentences. Think of this as perfecting a secret family recipe.
    2. Developing Algorithmic Code: We also wrote specific computer code designed to fix spelling errors and add spaces where words were run together. This is more like mastering a basic, reliable cooking technique.
3. (Slightly less exciting but.. well, spoiler alert: the fastest and “best” turned out to be using an online large language model.)

    This project was built some time ago, and while some data might be a bit outdated, the core ideas are still super relevant. You can find all the code at https://github.com/AceCentre/Correct-A-Sentence/tree/main/helper-scripts.

    A quick note: If you’re looking for a current solution for an existing AAC system, Smartbox’s Grid3 “Fix Tool” is worth checking out. My work here was done before that tool existed, and it’s why you won’t see me rebuilding our model today!

Hold up. Before we continue, if you do want to do this, I recommend a few things! 1. DON’T RUN MY CODE, but look at it by all means and be critical of it! 2. Let me know of improvements. And 3. DON’T GET YOUR LLM TO MAKE THESE SCRIPTS FOR YOU. You’ll learn heaps more if you do it yourself. I say this with a recent history of getting LLMs to churn out code. Do the harder thing. Your brain will thank you.

    The Core Ingredient: Why Automatic Correction Matters for AAC

AAC systems need to do a lot of things, but ultimately, for an end user, they need to allow fast ways of getting out what you are trying to say - and understandable ones at that. This isn’t just my hunch; it’s what we consistently hear from users, staff, and project feedback. The key term here is efficiency.

Sidenote: Efficiency in AAC isn’t just about the output. It could be speedier services, better quality support, or even device improvements like smarter input detection - but just to be clear, for this post we are focusing on efficiency of output.

The challenge often comes from “noisy” input – things like typing fast, or facing physical access challenges. Auto-correction is our secret weapon for cleaning up this messy input and helping users get their message across more effectively.

Unlike prediction (where the system tries to guess what you’ll type next), auto-correction actively fixes mistakes after they’ve been made. Think about your phone – it’s constantly correcting your typos before you even see them. This happens seamlessly in the background, making it feel like your typing is perfect! (And you really notice it when it doesn’t work!) So, why isn’t this standard in AAC, where mistakes are equally common?

    Approach 1: Building Algorithmic Correction Tools (Baking Our Own!)

    When creating an AAC correction tool, it’s helpful to first understand the common “noisy” sentence types. These are the kinds of input issues we need to fix:

    • Ihaveaapaininmmydneck (No spaces and typos – a double whammy!)
    • I wanaat a cuupoof te pls (Double key presses – sticky keys, anyone?)
    • u want a oakey of cjeese please (Hitting nearby keys – positional errors)
    • can u brus my air (Missing letters/deletions – vanishing letters!)
    • Can you help me? Can you help me? (Repeated phrases – when a stored message goes wild!)

    These sentences are typically short. While word prediction can help create perfect sentences, it often requires significant visual scanning and mental effort, which isn’t always ideal for AAC users.

    Sidenote: We actually don’t know this for sure. There are some papers that document these types of errors but in short we don’t have a great grip on what is actually written by AAC users.. more on that later..

    Tackling Words Without Spaces and Typos (Writingwitjoutspacesandtypos)

    This is one of the trickiest problems. How do you separate words when they’re all jammed together?

Well, it turns out that if you know some Python, there is a clever library called wordsegment. This tool uses information about how common single words (“unigrams”) and two-word phrases (“bigrams”) are to figure out where words should be. Here’s an example:

    # Split a run-together string using corpus word frequencies.
    from wordsegment import load, segment

    load()
    segment('thisisatest')
    # Output: ['this', 'is', 'a', 'test']


    It’s very effective! However, language is deeply personal. If wordsegment doesn’t know a user’s unique slang or abbreviations (like “biccie” for biscuit), it will struggle. For example, Iwouldlikeabicciewithmycuppatea might become ['i', 'would', 'like', 'a', 'bi', 'ccie', 'with', 'my', 'cuppa', 'tea'].

Truth is, though, even with slight imperfections, the meaning is often still clear, especially if the user is communicating with a familiar partner. Sometimes, “good enough” is perfectly acceptable! In AAC, “co-construction” – where communication partners work together to understand – is super important. It happens all the time, and no AAC system really works without it. We don’t always need to achieve perfect output.

    Adding Our Secret Family Ingredients (Personalizing Algorithmic Tools with User Data!)

    Imagine if we could incorporate a person’s actual vocabulary and common phrases into wordsegment’s data. This would significantly improve its accuracy for that specific user. We could take a user’s real-life language and use it to enhance the tool’s understanding, like using your grandma’s secret spice blend!

    The wordsegment library allows for this kind of customization. When we did this work we even created specific “typo-heavy” two-word phrases based on real typing errors to make it smarter. This is a solid starting point, though capturing every personal nuance is challenging.
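
As a minimal sketch of that idea - assuming the UNIGRAMS and BIGRAMS dictionaries that wordsegment exposes after load(), and with made-up counts that you would in practice derive from the user’s own message history:

    import wordsegment

    wordsegment.load()  # load the default Google-corpus frequency data

    # Boost counts for the user's personal vocabulary. These counts are
    # illustrative - derive real ones from the user's own messages.
    for word, count in {"biccie": 250_000, "cuppa": 500_000}.items():
        wordsegment.UNIGRAMS[word] = wordsegment.UNIGRAMS.get(word, 0) + count

    # Bigrams are keyed by space-separated word pairs.
    wordsegment.BIGRAMS["cuppa tea"] = 200_000

    print(wordsegment.segment("Iwouldlikeabicciewithmycuppatea"))
    # Hopefully: ['i', 'would', 'like', 'a', 'biccie', 'with', 'my', 'cuppa', 'tea']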

    Another option is a “fuzzy” search approach (see this blogpost for some ideas on this). This method is light on memory and doesn’t require powerful computer graphics cards (GPUs), making it suitable for running directly on a device. While it might not be lightning-fast, it could be “quick enough” for many situations. However, it still faces challenges with acronyms, names, and highly unusual abbreviations.
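
Just to give a flavour of fuzzy matching (this is not the linked post’s method - only a sketch using Python’s standard difflib against a small personal word list):

    import difflib

    # A tiny personal dictionary - in practice, built from the user's vocabulary.
    vocabulary = ["biscuit", "biccie", "cuppa", "cheese", "please"]

    def fuzzy_fix(word: str) -> str:
        """Return the closest dictionary word, or the word unchanged."""
        matches = difflib.get_close_matches(word, vocabulary, n=1, cutoff=0.6)
        return matches[0] if matches else word

    print(fuzzy_fix("cjeese"))  # -> 'cheese'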

    Approach 2: Customizing Large Language Models (LLMs) (The Fancy Pre-Made Sauce)

What about the powerful Large Language Models (LLMs)? Let’s try wrapping our noisy sentences in a prompt to OpenAI’s GPT-3.5 Turbo API and see what happens:
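
Roughly, that wrapper looks like the sketch below. The prompt wording and function name are my illustration here, not the exact ones we used:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def correct_sentence(noisy: str) -> str:
        """Ask GPT-3.5 Turbo to clean up one noisy utterance."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            temperature=0,  # deterministic corrections, not creativity
            messages=[
                {"role": "system",
                 "content": "Fix the spelling and insert missing spaces in the "
                            "user's sentence. Reply with the corrected sentence only."},
                {"role": "user", "content": noisy},
            ],
        )
        return response.choices[0].message.content.strip()

    print(correct_sentence("Ihaveaapaininmmydneck"))
    # e.g. "I have a pain in my neck"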

    The results are pretty nice. You can immediately see the benefit: users can focus on typing without being distracted by predictions. This could dramatically improve communication speed.

However, a key consideration with online LLMs is privacy. When communication data is sent to a cloud service, it raises questions about how that data is handled, and, to be fair, some users - and providers of AAC - are nervous of this. While some services offer opt-outs from having data used for model training, the notion of personal communication data residing in the cloud is something to address carefully.

    It’s fair to say that whether “sending data to the cloud” is “unacceptable” is a nuanced point. Many people routinely use cloud services for sensitive personal information (like email, documents, or medical records), and some are increasingly comfortable sharing personal data with online AIs (is that a word now?). While some AAC users will undoubtedly prefer to keep all their communication entirely offline, others may not find it unacceptable, especially if there are clear privacy controls in place.

    The primary privacy considerations often boil down to:

    • User Comfort and Perception: Users might be uncomfortable with the technology if they perceive the privacy risk to be greater than it actually is, or if they don’t trust the AAC providers to have robust privacy controls.
    • Provider Controls: The risk of poor privacy or security policies by an individual AAC provider, which could lead to chat data being logged, stored, or mishandled in the cloud.
    • Security Measures: The risk of security breaches providing unauthorized access to chat data flowing through APIs.

    These risks can and should be mitigated through robust security measures and strong privacy controls, such as not logging or storing chats unnecessarily, and avoiding associating them with user-specific IDs in API calls. If an AAC provider implements these correctly, the benefits of advanced online LLM capabilities might outweigh the concerns for many users, even if it doesn’t allay everyone’s apprehension. But either way - it would be darned easier to explain to end users if data never left the device.

    Therefore, for AAC, we need a solution that is:

• Efficient: Quick to cook! Gets out what you meant, accurately and quickly!
    • Privacy-focused: All in your kitchen, no external diners!
    • AAC-text savvy: Understands our unique culinary style!

    Training Our Own Language Model for Correction (The Holy Grail: Baking Our Own Language Model!)

Spoiler alert: this is what we want to make. Note this video is all running on-device.

    Building our own language model from scratch sounds daunting, but it’s more achievable than you might think! The fundamental ingredients are:

    1. Data: A large collection of “noisy” sentences (like actual user input) alongside their perfectly “cleaned” versions. The more this data resembles real AAC communication, the better the model will perform.
    2. Even More Data: Seriously, data is the most crucial ingredient.
    3. Code: The “recipe” for teaching the model.
    4. Computing Resources: Some time and a computer (preferably with a graphics card).

    The biggest challenge is DATA. There’s very little publicly available “AAC user dialogue” data. What is “AAC-like text”? It’s a surprising gap in our industry’s research – we don’t have large datasets showing how people actually use AAC. Are stored phrases used all the time? What kind of unique language do users create? We’re often just guessing.

    So, we have to “imagine” what AAC-like data looks like. Researchers like Keith Vertanen have made fantastic attempts with artificial datasets. We also believe “spoken” language datasets might be a good starting point, as AAC aims to enable “speaking” through a device. We looked at corpora like:

    • BNC 2014 Spoken
    • AAC Text
    • Daily Dialog

    The problem? These are mostly perfectly transcribed texts, lacking the glorious typos and grammatical quirks our correction tool needs to learn from. We needed truly “noisy” input.

    Injecting Noise: Making Our Data Deliberately Messy

    To make our pristine datasets messy, we artificially injected errors using tools like nlpaug. While this creates synthetic noise, we also sought out real typos from sources like the TOEFL Spell dataset (essays by English language learners). For grammar correction, we also looked at datasets like JFLEG Data and subsets of C4-200M which contain grammatically incorrect sentences with their corrected versions.
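
As a rough sketch of that noise-injection step, using nlpaug’s character-level augmenters (the parameter values here are illustrative, not the ones we tuned):

    import nlpaug.augmenter.char as nac

    # Simulate hitting neighbouring keys (positional errors)...
    keyboard_aug = nac.KeyboardAug(aug_char_p=0.2, aug_word_p=0.3)
    # ...and vanishing letters (deletions).
    delete_aug = nac.RandomCharAug(action="delete", aug_char_p=0.1)

    clean = "can you brush my hair"
    noisy = delete_aug.augment(keyboard_aug.augment(clean))
    print(noisy)  # e.g. ['can u brs my air'] - varies on every run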

Our Grand Baking Plan: Data Preparation

    1. Take each spoken text corpus.
    2. Inject typos into the sentences and strip out common grammar elements (like commas and apostrophes, which AAC users often omit).
3. Create three versions of each sentence for training (see the sketch after this list):
      • With typos.
      • With typos and compressed (no spaces).
      • The correct, clean version (but still without spaces, to train for de-compression).
    4. Also, incorporate a grammar baseline layer using the grammar datasets.
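
To make step 3 concrete, here is a minimal sketch - a hypothetical helper, not the actual preparation script:

    import re

    def make_training_pairs(clean: str, noisy: str) -> list[tuple[str, str]]:
        """Build (input, target) pairs for the three variants of one sentence."""
        target = re.sub(r"[,']", "", clean)    # strip commas/apostrophes, as AAC users often do
        return [
            (noisy, target),                   # with typos
            (noisy.replace(" ", ""), target),  # typos and compressed (no spaces)
            (target.replace(" ", ""), target), # clean but compressed - trains de-compression
        ]

    print(make_training_pairs("I want a cup of tea, please", "I wanaat a cuupoof te pls"))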

    We then used a “text-to-text transformer” model (specifically, the t5-small model) to learn from this data. This type of model is designed to take one text input and transform it into another, making it perfect for correction tasks. We chose t5-small because it’s lightweight and efficient enough to run on a device.
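
In Hugging Face terms, one training step looks roughly like the sketch below. The "grammar:" task prefix is my assumption - T5 conventionally uses a task prefix, but the real training script may use a different one:

    import torch
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    # One (noisy -> clean) pair, encoded text-to-text style.
    inputs = tokenizer("grammar: Ihaveaapaininmmydneck", return_tensors="pt")
    labels = tokenizer("I have a pain in my neck", return_tensors="pt").input_ids

    loss = model(**inputs, labels=labels).loss  # standard seq2seq cross-entropy
    loss.backward()  # the real run loops an optimizer over thousands of such pairs

    # Once trained, correction is just generation:
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(output[0], skip_special_tokens=True))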

    You can find the script for preparing the data here (you’ll need to download the BNC2014 corpus separately). The training process took several hours on a decent GPU. The resulting model is available on Hugging Face.

    The Taste Test: How Did Our Bake Perform?

    Let’s look at how our different correction techniques performed on a set of 39 test sentences (which contained both compression and typos):

• Algorithmic Approach (Wordsegment + Spelling Engine): Took about 14 seconds to process the test sentences. This approach is surprisingly quick for what it does, and it has no memory issues. However, its major limitation is that it will fail if it encounters words not in its dictionary, or novel grammatical structures. (A minimal sketch of this pairing follows after this list.)
    • Online LLM (Azure/OpenAI GPT Turbo 16K): Took an astonishing 13 seconds! This speed, even with an external network call, is remarkable.
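
For that algorithmic pairing, a minimal sketch - here using pyspellchecker as a stand-in spelling engine; the real helper scripts may combine things differently:

    from spellchecker import SpellChecker  # pyspellchecker
    from wordsegment import load, segment

    load()
    spell = SpellChecker()

    def inbuilt_correct(noisy: str) -> str:
        """Split a compressed string, then fix typos word by word."""
        words = segment(noisy)
        fixed = [spell.correction(w) or w for w in words]  # keep word if no candidate found
        return " ".join(fixed)

    print(inbuilt_correct("Ihaveaapaininmmydneck"))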

    Now, for our custom-trained models:

Method             Accuracy (%)   Total Time (seconds)   Average Similarity
Inbuilt            0.0            17.32                  0.93
GPT                55.56          13.29                  0.92
Happy              28.95          N/A                    N/A
Happy Base         13.16          N/A                    N/A
Happy T5 Small     0.0            N/A                    N/A
Happy C4 Small     0.0            46.90                  0.81
Happy Will Small   28.95          N/A                    N/A
HappyWill          N/A            24.67                  0.93

    Here are some example results comparing the incorrect sentence to the correct version and outputs from various models:

Example 1
  Incorrect:    Feelingburntoutaftettodayhelp!
  Correct:      Feeling burnt out after today, help!
  Inbuilt:      feeling burnt out aft et today help
  GPT:          Feeling burnt out after today, help!
  Happy:        Feeling burnt out today help!
  HappyBase:    Feelingburntoutaftettodayhelp!
  HappyT5:      Feelingburntoutaftettodayhelp!
  HappyC4Small: Feelingburntoutaftettoday help!!
  HappyWill:    Feeling burnt out today help!

Example 2
  Incorrect:    Guesswhosingleagain!
  Correct:      Guess who's single again!
  Inbuilt:      guess who single again
  GPT:          Guess who'ssingle again!
  Happy:        Guess who single again!
  HappyBase:    Guesswhosingle again!
  HappyT5:      Grammatik: Guesswhosingleagain!
  HappyC4Small: Guesswhosingleagain!!
  HappyWill:    Guess who single again!

Example 3
  Incorrect:    Youwontyoubelievewhatjusthappened!
  Correct:      You won't you believe what just happened!
  Inbuilt:      you wont you believe what just happened
  GPT:          You won't believe what just happened!
  Happy:        You want you believe what just happened!
  HappyBase:    You wouldn'tbelieve what just happened!
  HappyT5:      Youwontyoubelievewhatjusthappened!
  HappyC4Small: Youwontyoubelievewhatjust happened!!
  HappyWill:    You want you believe what just happened!

Example 4
  Incorrect:    Moviemarathonatmyplacethisweekend?
  Correct:      Movie marathon at my place this weekend?
  Inbuilt:      movie marathon at my place this weekend
  GPT:          Movie marathon at my place this weekend?
  Happy:        Movie Marathon at my place this weekend?
  HappyBase:    Movie marathon at my place this weekend?
  HappyT5:      grammar Moviemarathonatmyplacethisweekend?
  HappyC4Small: Moviemarathonatmyplacethis weekend?
  HappyWill:    Movie Marathon at my place this weekend?

Example 5
  Incorrect:    Needstudymotivationanyideas?
  Correct:      Need study motivation, any ideas?
  Inbuilt:      need study motivation any ideas
  GPT:          Need study motivation. Any ideas?
  Happy:        Need study motivation any ideas!
  HappyBase:    Need study motivationanyideas?
  HappyT5:      Needstudymotivationanyideas?
  HappyC4Small: Needstudymotivationanyideas?
  HappyWill:    Need study motivation any ideas!

Example 6
  Incorrect:    Sostressedaboutthispresentation!
  Correct:      So stressed about this presentation!
  Inbuilt:      so stressed about this presentation
  GPT:          So stressed about this presentation!
  Happy:        So stressed about this presentation!
  HappyBase:    So stressed about this presentation!
  HappyT5:      Sostressedaboutthispresentation!
  HappyC4Small: Sostressedaboutthispresentation!!
  HappyWill:    So stressed about this presentation!

Example 7
  Incorrect:    Finallyfinishedthatbookyourecommended!
  Correct:      Finally finished that book you recommended!
  Inbuilt:      finally finished that book you recommended
  GPT:          Finally finished that book you recommended!
  Happy:        Finally finished that book you're recommended!
  HappyBase:    Finally finished that book yourecommended!
  HappyT5:      Finalfinishedthatbookyourecommended!
  HappyC4Small: Finally finished that bookyourecommended!!
  HappyWill:    Finally finished that book you're recommended!

    Our custom-trained model (HappyWill) actually outperformed GPT in terms of average similarity! While its total time was longer because we ran it on a small cloud machine, it would be much faster on a local device – and, crucially, it keeps all data private.

    However, it’s important to note the “inbuilt” algorithmic technique (the Wordsegment one). While its “accuracy” (meaning, how often it perfectly matched the exact expected output) was low for our specific test, its output was surprisingly readable, and it has no privacy or significant memory concerns.

    You can try the inbuilt algorithmic correction yourself for a short while via our API:

    curl -X 'POST' \
      'https://correctasentence.acecentre.net/correction/correct_sentence' \
      -H 'accept: application/json' \
      -H 'Content-Type: application/json' \
      -d '{
      "text": "howareyou",
      "correct_typos": true,
      "correction_method": "inbuilt"
    }'
    

    Or access the API directly in a browser: https://correctasentence.acecentre.net (just remember to set "correction_method": "inbuilt").

    So, the next crucial step in our culinary journey? We need to hunt down, or create, an even richer, noisier, and more authentic corpus of data. The quest for the perfect “ingredients” continues!

    (Oh dear.. This cooking analogy really went too far… I do apologise)

    Key Takeaways

    From this project, we’ve identified a few critical points for developing communication tools:

• Understanding Language Models for Correction: We’ve shown how a custom, fine-tuned language model (like t5-small) can be developed to correct complex errors while keeping user data private and enabling on-device processing. It’s about tailoring the perfect sauce to your needs.
• Algorithmic Approaches Can Be Effective: For specific problems, like adding spaces or basic spelling, simpler algorithmic approaches can be surprisingly readable and efficient, offering a viable alternative to complex AI models, especially when novel input isn’t a primary concern. This highlights the importance of matching the tool to the specific type of communication challenge - sometimes, a good old stew is just what you need!
• The Need to Understand Users’ Own Corpora Better: Regardless of the approach, the effectiveness of any correction tool is heavily dependent on the training data. There’s a significant need for more authentic, “noisy” datasets reflecting how AAC users actually communicate. Truly understanding users’ unique vocabulary and communication patterns – their personal “flavor profiles” – is essential for building the most effective tools.
    → 12:22 AM, Jun 26
    Also on Bluesky
• Finally published our py3-tts-wrapper Python library this week. Should power a lot of funky things. It supports all the major TTS engines online and new ones offline. Use it alongside this app to see what voices are available (API here)

    → 11:26 AM, Jul 31
    Also on Bluesky
  • Iterative rapid development Part 2 (aka Judith Part 2).

So sometimes.. just sometimes.. a couple of months’ work pays off. Or I guess we could frame this post around successful product development: keeping the end user the focus of your development, and involving them at all stages of your development cycle, is key. We are proud to shout loudly about this.

So two months ago we saw Judith. Technology has not been successful for her before. (You can read more about why in my previous posts.) So we rapidly developed a SwiftUI app that was unique in its operation: you dragged from one letter to the next. Trying it with Judith, we still had a problem: positional errors and missing letters were high. So we then rapidly worked on a “correct-a-sentence” system, settling on an Azure OpenAI API (GPT-3.5), and then developed our own custom model alongside it (we needed something that worked offline; privacy first was key). NB: the video shows this offline model, not GPT. A few iterations of this and it’s correcting as well as GPT-3.5 83% of the time.

So two months later, what does this all mean for the end user? They can now write on technology to communicate for the very first time. Happy? Well, so far, yes. But long-term AAC use is complex and not just down to the technology. So let’s be curious and proud at the same time. There is more to do:

    • Small tweaks in the UI
• It’s correcting a bit too much at times - sometimes adding more grammar changes than I would like and, equally dangerous, losing words. To fix this, we could further train our custom model.

and then more, I’m sure, the next time we go round the iterative development cycle.

    What does this process look like overall?

We follow the double-diamond development cycle. It’s a pretty common approach, yet following it is not always easy.

    So lessons so far?

• Iterate Fast: Rapid prototyping and adjustments are key to finding viable solutions for complex, unique problems. There is a much longer tail to development for solid, reliable products, but getting somewhere fast helps you see what’s important.
• User-Centric Design: Keeping the user at the heart of all stages of development ensures that the technology we develop truly meets their needs. It’s not always easy, but it can be done. It’s important to do this as part of a team, though - it can so easily be lost as to whether you are getting carried away or whether you are on track. We pride ourselves at Ace Centre on being a transdisciplinary team, where no one person sees the whole picture alone.
• Continuous Learning: Every challenge presents a learning opportunity, pushing us to constantly improve. And heck - it’s good fun too, seeing big gains.
    • Passion and Persistence: A keen interest in making a difference and the drive to keep pushing forward are indispensable.
    → 9:38 PM, Mar 26
    Also on Bluesky
  • Objectified. "if we understand what the extremes are, the middle will take care of itself."

    Objectified (2009, 75 minutes) is a documentary film about our complex relationship with manufactured objects and, by extension, the people who design them. What can we learn about who we are, and who we want to be, from the objects with which we surround ourselves?

    It’s a great film (and free to watch from March 14-17!) but in particular, I love this quote from Dan Formosa, Design & Research, Smart Design, New York (around 6 minutes in)

    But really our common interest is in understanding people and what their needs are. So if you start to think, well really, what these guys do as consultants is focus on people, then it’s easy to think about what’s needed design wise in the kitchen, or in the hospital, or in the car.

    We have clients coming to us and saying here’s our average customer. For instance, she’s female, she’s 34 years old, she has 2.3 kids.

    And we listen politely and say, well that’s great, but we don’t care about that person. What we really need to do to design is look at the extremes, the weakest, or the person with arthritis, or the athlete, or the strongest, or the fastest person. Because if we understand what the extremes are, the middle will take care of itself.

    View it online at www.ohyouprettythings.com/free

    → 8:35 AM, Mar 15
    Also on Bluesky
  • I’ve been considering LLMs' potential in autocorrecting AAC input. They offer significant gains, but user-guided inference still excels. Yvette’s video demonstrates this: 32 WPM using a glide pad in Dasher. Anyone faster?! We must get a new version developed.

    youtu.be/va1WufK86…

    → 8:54 PM, Feb 20
    Also on Bluesky
  • Rapid development in swiftUI for niche problems

Some clients we see are fantastic with paper-based solutions. But sometimes, finding powered AAC systems which give them more independence is far trickier than you may think. Consider Judith. She doesn’t lift her finger from the paper. This continuous movement is surprisingly not well supported in AAC. Your obvious thoughts are SwiftKey and Swype, but they require a lift-up at the end of a word or somewhere else. Next up, you may try a keyguard or TouchGuide. But then, for some users, this is too much of a change in user interaction. Even if you succeed, you often ask an end user to change the orientation or layout of their paper-based system.. and all in all, it’s just too much change. Abandonment is likely. The paper-based system is just more reliable.

So what do we do? We could look at a bespoke system. But typically, it requires much thought, effort and scoping. That’s still needed, but you can draft something up far quicker these days using ChatGPT. It wrote the whole app after a 2-hour stint of prompt writing. That’s awesome. (Thanks also to Gavin, who tidied up the loose ends.) So, this app can be operated by detecting a change in angular direction or by detecting a small dwell on each letter. We now need to trial this with our end user, see what’s more likely to work and what’s not, and work it up. We may need a way of writing without going to space (something that we see quite a lot), and I can see us implementing a really needed feature: autocorrect. This is all achievable. But for now, we have a working solution to trial - a 500-line app made in less than a day’s work.

    → 11:04 PM, Jan 12
    Also on Bluesky
  • Stephen Hawking's AAC setup in closeup

At MOSI in Manchester today, I saw Stephen Hawking’s chair and other neat things from his office in Cambridge. Note the spaghetti of cables. It’s tricky to figure out where all the leads go, but I’ll give it a wild guess. The plugs look like either mini XLR or the old PS/2 serial leads. Some questions, though: I’m unsure what the “Filter” box fits to, and why is the Words+ box even used? I thought the connection with Intel meant he was using ACAT. Why is that Words+ Softkey box the parallel version when there is clearly a lot of USB kicking about, too? Why are we plugging into something behind the chair when surely the tablet has the speakers anyway? There are as many questions as answers.

    • Words+ Archive page (This is the USB version of the softkey box)
    • Case for Original Synthesiser made by David Mason at Cambridge Adaptive
    • Chair details
    → 9:03 PM, Nov 3
    Also on Bluesky
  • 🚀 Calling All AAC Testers for a new release of our Google Cloud/Azure TTS and Translation tool

    We’ve given our little Translate and Speak app for Windows a complete makeover. Our app not only translates text but also vocalizes messages from the message window using online services. We’ve introduced a user-friendly GUI to simplify configuration, extended support to include paid translation services, and here’s the grand reveal… you can now empower any Windows AAC app to leverage Google Cloud TTS or Azure TTS which massively opens up the possibility of using AAC with more languages. You can even use these services without translation - so just to speak.

    Get your hands on the early version here. BUT - just a heads up, you will need to be comfortable obtaining keys for Azure or Google Cloud. Check out our (somewhat outdated) docs for guidance. And ping me some feedback before we release it properly. Prizes for someone who can make me a nice demo video!

    → 8:48 AM, Sep 2
• The new voice creation tool in iOS and Live Speech (the in-built TTS app, available from any screen) in iOS. My voice is definitely.. clunky.. but bear in mind I recorded this at like 2am in an Airbnb and didn’t want to wake the neighbours. 15 mins of recording.

    → 9:58 PM, Jul 15
  • Need an AAC/AT textbook but nowhere near a library or have money? This is awesome from Internet Archive. Loan a textbook for free for an hour at a time. archive.org/details/i… (and my personal fav : archive.org/details/a…)

    → 4:25 PM, Jul 15
• Knocked this up quickly. Type in one language. Translate within your AAC app, speak it out loud in that language, and paste it back. Pretty configurable. Lag is due to using Google TTS but it will work with offline TTS systems - i.e. SAPI, Coqui & eSpeak. code

    Quick demo video here

    → 12:27 AM, Jul 5
  • From our fab OT student today

So are you telling me my phone can autocorrect, but that’s not done on AAC devices.. and the user always has to select their predictions?

Hmmm. I wonder if I was missing something (and not from the world of research, like Keith Vertanen’s demos).. anyone?

    So what’s the difference between autocorrect and prediction? Prediction software has been around for years. In essence, the software displays predictions, and in some way, you have to select the predicted word or sentence. In some software the selection technique is reduced (e.g. in Microsoft 365 products now a swipe to the right allows you to select the suggested word/phrase). But you still have to actively look and seek it. More recently, autocorrection software has started to appear. If you ask me, it makes a lot of sense (some suggest it’s a terrible idea for learning language.. but for dyslexia support it looks amazing). You reduce the visual search aspect and just type. Any mistakes or typos it tries to correct. It’s not for everyone - but in AAC, it seems like a great idea. Focus on what you can get out and let the thing correct itself.

    → 5:55 PM, Jun 21
  • I am reviewing the “traffic light” system for AAC by Karen Erickson for our ATU this week. It’s similar to ideas in an Activity & Occupation Analysis - but much more reduced and focused on AAC activities in a day. Redrawn here - maybe too reduced from the original concept. Use at your peril.

    → 5:40 PM, Jun 21
  • A fascinating couple of papers that Simon Judge notes in his blog about the design and abandonment of AAC systems

“the role of communication aids in an individual’s communication is subtle and not a simple binary ‘used or not used’”.

    What I find really neat is Zoë’s paper and the creation of a model

“This model consists of a communication loop – where experiences of prior communication attempts feed into decisions about whether to communicate and what method to use to communicate – each of which were influenced by considerations of the importance of the message, the time taken, who the communication partner was, the environment the communication is taking place in (physical and social) and the personal context and preferences of the individual.”

    The “choice” of when and how much to use an AAC device is down to the user. We shouldn’t see this as abandonment.

    → 10:04 AM, Jun 20
• Just watched Edward Chang’s talk about their BRAVO project at #bci2023. It’s next level. They are not the only team doing this (e.g. Frank Willett), but they are one of the few making significant improvements over current AAC solutions.. (even if it’s for n=2). The video of Ann writing by thought alone at this rate. Wow.

    This was pre-publication. Watch their page for updates changlab.ucsf.edu/overview

    → 2:25 PM, Jun 7
  • Silent Speech and Sub-Vocal Communication. The next big technology breakthrough for AAC?

Silent Speech & Sub-Vocal research is picking up. EMG has been able to detect speech since the 70s, but it’s been hard to make it useful. Now, though? There are even instructions for making your own. Check out some papers and AlterEgo from MIT for a fancy demo. It’s AI (aka “Applied Statistics”) making this possible - and I feel it’s this aiding of access that will have the biggest impact on our field, more than areas of language.

    → 12:02 AM, Jun 6
• Sebastian Pape has been doing a ton of work on the original Dasher code base for his research on Dasher in VR. It’s pretty awesome. Some of the output can be seen here (and watch the video) - you can also watch an initial 3D demo from our meeting here. dasher.acecentre.net

    → 5:09 PM, May 25
  • Over the next few weeks I’m fortunate to be representing Ace Centre at two international conferences; BCI meeting and ISAAC talking about our audit on text entry rate in AAC and a lot about Dasher. Hope to see you there if you are going too!

    → 3:05 PM, May 25
• Last week we released TextAloud on the App Store. You can read our blog for the entire details as to what it’s all about and why, but in brief, it’s v1 of a more extensive app we want to create to better support people with long streams of TTS. We have several ideas for this - but most importantly, we are putting users at the heart of the design process at all stages (using the double diamond approach). Get in touch if you want to be part of the focus group. One idea, though, is using SSML to help mark up a speech. You can see one implementation idea below.

    There’s a much longer post due from me about why SSML hasn’t been used in AAC, but in short - the time is overdue.

    → 7:00 AM, May 21