With The Help Of Artificial Intelligence And Breakthrough Research At UCSF, A Stroke Survivor Is Now Able To Speak With Facial Expressions For The First Time In Nearly Two Decades

Eighteen years ago, a woman named Ann Johnson experienced a stroke that left her paralyzed and took away her power of speech.
Fast forward to today, and thanks to a brain implant combined with artificial intelligence, she’s back to communicating verbally, albeit through a digital avatar.
A new study published in the journal Nature by scientists at the University of California, San Francisco (UCSF) sheds light on this breakthrough.
Researchers implanted an electrode grid on Ann’s brain that sends her brain signals to computer systems. Once there, AI technology deciphers these signals into words. After a short lag, Ann’s digital avatar verbalizes her thoughts and even mimics her emotions through facial expressions.
“There’s nothing that can convey how satisfying it is to see something like this actually work in real-time,” said Edward Chang, a co-author of the study.
According to a UCSF statement, Ann has been communicating through a device that lets her type words on a screen by moving her head. This setup allows her to generate only 14 words per minute, which pales in comparison to the average spoken conversation rate of 160 words per minute.
However, the new interface, which is currently available to her only as part of the study, boosts her output to 78 words per minute, edging her closer to the pace of natural spoken dialogue.
The device also translates her intended words with an accuracy rate of about 75%.
This new interface is a significant advancement over the research team’s previous version, which converted intended speech into text at a pace of just 15 words per minute.

The enhanced system utilizes an implant featuring 253 electrodes strategically positioned over brain areas crucial for communication.
Prior to Ann’s stroke, these brain regions would send signals to the speech-related muscles, such as those in the larynx, tongue, and lips.
Now, a cable connected to a port in Ann’s head channels these signals to computer systems.
Afterward, AI technology breaks down these signals into individual sounds, or what’s known as phonemes. These phonemes are then pieced together to form words.
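To make the phoneme-to-word step easier to picture, here is a minimal, purely illustrative Python sketch. It is not the UCSF team’s code: the real decoder is a neural network trained on Ann’s brain recordings, and the lexicon, phoneme symbols, and function names below (PHONEME_LEXICON, phonemes_to_words) are hypothetical stand-ins.

```python
# Purely illustrative sketch of the phoneme-to-word idea described above.
# This is NOT the UCSF system: the real decoder is a neural network trained
# on the implant's recordings, and everything named here is made up.

# A tiny stand-in lexicon mapping phoneme sequences to English words
# (ARPAbet-style symbols, chosen only for readability).
PHONEME_LEXICON = {
    ("HH", "AH", "L", "OW"): "hello",
    ("HH", "AW"): "how",
    ("AA", "R"): "are",
    ("Y", "UW"): "you",
}

def phonemes_to_words(phoneme_stream, lexicon):
    """Greedily group a stream of decoded phonemes into vocabulary words.

    A production decoder weighs many candidate segmentations at once;
    this longest-match loop is only meant to show the grouping concept.
    """
    words, start = [], 0
    while start < len(phoneme_stream):
        match = None
        # Try the longest phoneme sequence first, then shorter ones.
        for end in range(len(phoneme_stream), start, -1):
            candidate = tuple(phoneme_stream[start:end])
            if candidate in lexicon:
                match, start = lexicon[candidate], end
                break
        if match is None:   # Unrecognized sound: skip it and move on.
            start += 1
            continue
        words.append(match)
    return words

# Example: a hypothetical decoded phoneme stream becomes a short sentence.
stream = ["HH", "AH", "L", "OW", "HH", "AW", "AA", "R", "Y", "UW"]
print(" ".join(phonemes_to_words(stream, PHONEME_LEXICON)))  # hello how are you
```

Working at the level of phonemes rather than whole words is what lets a system like this build a large vocabulary from a small set of sound units, since English uses only a few dozen distinct phonemes.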
The digital avatar delivering these words was crafted to resemble Ann, and its voice was modeled to sound like her, thanks to audio clips from her wedding video. The avatar’s facial movements and expressions also mirror Ann’s emotions, as indicated by her brain signals.
“The simple fact of hearing a voice similar to your own is emotional,” Ann said.
Ann spent weeks training with the interface, mentally rehearsing specific phrases repeatedly to help it decode her brain signals. During this period, the algorithm learned to identify words from a dataset of 1,024 commonly used conversational terms.
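Stripped to its bare bones, that training setup can be pictured with the sketch below: repeated rehearsals of known phrases pair recorded neural activity with the intended sound, and a classifier learns the mapping. Everything here, from the fake data to the simple model, is invented for illustration; the real decoder is a far more sophisticated model trained on the 253-channel recordings described above.

```python
# Illustrative-only sketch of the training idea: each rehearsal pairs neural
# activity with the intended phoneme, and a classifier learns that mapping.
# All data and names are invented; this is not the study's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

N_CHANNELS = 253                      # electrode count reported in the study
PHONEMES = ["HH", "AH", "L", "OW"]    # tiny stand-in phoneme set

# Give each phoneme a made-up "signature" pattern so the toy model has
# something learnable; real patterns would come from Ann's brain activity.
signatures = {p: rng.normal(size=N_CHANNELS) for p in PHONEMES}

labels = list(rng.choice(PHONEMES, size=400))          # 400 fake rehearsals
features = np.stack(
    [signatures[p] + 0.5 * rng.normal(size=N_CHANNELS) for p in labels]
)

# Fit a simple classifier on the rehearsal data.
clf = LogisticRegression(max_iter=1000).fit(features, labels)

# A new, unseen trial is then mapped to its most likely phoneme.
new_trial = (signatures["OW"] + 0.5 * rng.normal(size=N_CHANNELS)).reshape(1, -1)
print(clf.predict(new_trial)[0])      # expected to print "OW"
```

Restricting the decoder to a fixed set of roughly a thousand conversational words keeps the space of possible outputs small, which is one reason constrained vocabularies are common in early brain-computer interface work.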
According to Kaylo Littlejohn, another co-author of the study, Ann was incredibly determined.
“She’s willing to record as long as needed, and she really understands that her efforts will go toward creating a speech neuroprosthesis that many people who have this kind of disability will be able to use,” Littlejohn detailed.
A separate study published in Nature, which was conducted by another research team, showcased a woman with ALS who also regained her ability to communicate. She used a different brain-computer interface designed for speech-to-text conversion.
This system allowed her to articulate words at a pace of 62 words per minute and came with a 23.8% error rate, all while drawing from a vocabulary set of 125,000 words.
“It is now possible to imagine a future where we can restore fluid conversation to someone with paralysis, enabling them to freely say whatever they want to say with an accuracy high enough to be understood reliably,” explained Frank Willett, a co-author of the second study.
However, these interfaces have not yet been widely tested. Judy Illes, a neuroethicist at the University of British Columbia in Canada, cautioned against “over-promising wide generalizability to large populations,” signaling that more research is needed.
In addition, to be used in daily life, the devices will need to be wireless and portable. Still, the researchers are optimistic that their technology could pave the way for an FDA-approved communication system in the not-too-distant future.
To read the study’s complete findings, visit the link here.