A semantic decoder, a new artificial intelligence system, can translate a person’s brain activity, recorded while they listen to a story or silently imagine telling one, into a continuous stream of text. Researchers at The University of Texas at Austin developed the technology, which could help patients who are mentally conscious but physically unable to speak, such as stroke victims, communicate intelligibly again.
Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin, led the work, which was published in the journal Nature Neuroscience. The approach relies in part on a transformer model, similar to those that power OpenAI’s ChatGPT and Google’s Bard.
Unlike other language decoding systems under development, this one requires no surgical implant, making the process noninvasive, and participants are not restricted to words from a predetermined list. The decoder is first trained rigorously: the individual listens to hours of podcasts in an fMRI scanner while their brain activity is recorded. Later, provided the participant is willing to have their thoughts decoded, listening to a new story or imagining telling a story allows the machine to generate corresponding text from brain activity alone.
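Conceptually, the published decoder works by having a language model propose candidate word sequences while an encoding model predicts the brain activity each candidate would evoke, keeping the candidates whose predicted responses best match the actual fMRI recording. The Python sketch below illustrates that beam-search idea under stated assumptions; the language_model and encoding_model objects and their methods are hypothetical stand-ins for illustration, not the authors' code.

```python
import numpy as np

# Hypothetical sketch of the decoding loop (assumed interfaces, not the
# authors' implementation): a language model proposes continuations for
# each fMRI time window, and an encoding model scores how well each
# candidate's predicted brain response matches the actual recording.

def score(candidate_text, fmri_window, encoding_model):
    # Negative distance between predicted and recorded responses,
    # so a higher score means a better match.
    predicted = encoding_model.predict_response(candidate_text)  # assumed method
    return -np.linalg.norm(predicted - fmri_window)

def decode(fmri_windows, language_model, encoding_model, beam_width=5, k=10):
    beams = [("", 0.0)]  # (decoded text so far, cumulative score)
    for window in fmri_windows:
        expanded = []
        for text, total in beams:
            # Assumed method: the k most likely next words given the prefix.
            for word in language_model.top_k_next_words(text, k):
                candidate = (text + " " + word).strip()
                expanded.append(
                    (candidate, total + score(candidate, window, encoding_model))
                )
        # Keep only the best-scoring hypotheses for the next time window.
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams[0][0]
```

Because fMRI is too slow to capture individual words, scoring whole candidate sequences against each time window, rather than decoding word by word, is what makes this kind of continuous-text reconstruction feasible.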