BRAIN ACTIVITY

Computer turns thoughts into sentences

A new decoder developed in the U.S. converts thought content into coherent sentences. People who cannot speak could thus communicate verbally again. However, the researchers also warn of the dangers inherent in devices of this kind.
Online since today, 5 p.m.

For years, scientists have been trying to analyze a person's thoughts with the help of technical devices and to decode their content. By now, "mind reading" is feasible and a reality, even if not always exactly word for word.

Extensive research work
Previous studies have mostly used invasive implants - neuroprostheses - to measure the brain activity of the person in question in real time, as in a U.S. study published in the journal Nature Communications in November 2022. The program used there was trained with "deep learning" over almost 50 sessions while a paralyzed patient tried to articulate one of 50 given words. In total, a vocabulary of around 1,000 words was built up this way that correlated with certain thoughts of the patient.

Other approaches do without implanted neuroprostheses, but their effectiveness usually suffers from the smaller amount of brain-activity data. Many such programs can decode a few individual words from a person's thoughts - but they usually cannot manage coherent sentences.

Use in medicine
A research team led by neuroscientist and computer scientist Alexander Huth from the University of Texas (USA) has now also used a non-invasive method to measure the brain activity of three test subjects.

Unlike previous experiments without implants, however, the U.S. researchers actually managed to convert thought content into complete sentences and "continuous speech," Jerry Tang told journalists on Thursday. Tang is a member of Huth's research lab at the University of Texas and the lead author of the study now published in the journal Nature Neuroscience.

The researchers hope to use their method to give a voice back to people who can no longer speak, for example because of a stroke or a disease of the nervous system. The fact that no implants are needed makes it much easier to use in practice. However, the program does not work out of the box.

Training indispensable
The three test subjects first had to listen to stories and follow discussions for 16 hours. During this time, their brain activity was scanned using functional magnetic resonance imaging (fMRI). The researchers then created a language model from word sequences to predict how the brain would react to certain words.

The programmed decoder was trained to distinguish which thought content was associated with which words and patterns of brain activity. With the help of further models, some of them also based on artificial intelligence, the research team then trained the decoder to predict words and sentences that would evoke brain responses as similar as possible to those actually measured.

The program was soon able to convert the brain activity of the particular person it was trained on into words and phrases - even when the subject was listening to a story that did not appear in the training data. The test subjects did not even have to actively listen to a text; it was enough if they merely thought about the content of the stories.
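The procedure described above - an encoding model predicting brain responses, and a decoder searching for the wording whose predicted response best matches what was measured - can be sketched in a toy form. Everything here (the per-word response profiles, the vectors, the numbers) is purely illustrative and not the study's actual model:

```python
# Toy sketch of response-matching decoding. All data below is made up for
# illustration; the real study uses fMRI recordings and learned models.
import math

# Hypothetical per-word brain-response profiles "learned" in training.
WORD_RESPONSE = {
    "drive":   [0.9, 0.1, 0.3],
    "license": [0.8, 0.2, 0.2],
    "cook":    [0.1, 0.9, 0.4],
    "dinner":  [0.2, 0.8, 0.5],
}

def predict_response(sentence):
    """Predict a brain-response vector as the mean of known word profiles."""
    words = [w for w in sentence.lower().split() if w in WORD_RESPONSE]
    dims = len(next(iter(WORD_RESPONSE.values())))
    return [sum(WORD_RESPONSE[w][d] for w in words) / len(words)
            for d in range(dims)]

def correlation(a, b):
    """Pearson correlation between two equal-length vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return cov / var

def decode(measured, candidates):
    """Pick the candidate whose predicted response best fits the measurement."""
    return max(candidates,
               key=lambda s: correlation(predict_response(s), measured))

# A measured response resembling "driving"-related activity:
measured = [0.85, 0.15, 0.25]
best = decode(measured, ["she cannot drive a car license",
                         "he will cook dinner tonight"])
print(best)  # the driving-related candidate wins, not an exact transcript
```

This also illustrates why the output tends to match the gist rather than the exact words: any paraphrase that evokes a similar predicted response scores just as well as a verbatim transcript.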

Content mostly recognized in its meaning
In most cases, the decoder did not manage to exactly reproduce the content of the stories heard, thought about or seen in silent films. Nevertheless, it was able to reproduce the approximate sense of the thought processes. For example, when the subjects heard the sentence "I don't have my driver's license yet," the decoder processed this into "She hasn't started learning to drive yet" in one of the trials.

According to Huth, the fact that in most cases only the approximate meaning of the content could be reproduced was primarily due to the research team's special method. "Previous approaches have almost always tried to analyze the signals that occur in the brain when a person tries to actually articulate a word," he explained in a press briefing. The research team's method, however, targets other areas of the brain and actually analyzes the person's thoughts rather than their mental attempt to speak, he said.

On the one hand, this means that the content is often reproduced only in its gist; on the other hand, it also allows the verbalization of content that is not based on spoken words - for example, silent films or pictures. The method could thus be used for a variety of problems, especially in medicine.

Mental privacy at risk
However, the researchers are well aware that "mind reading" also harbors dangers - for example, with regard to privacy. For now, the team gives the all-clear: in order to function, the decoder must first be trained for hours on a specific person - the program is currently unable to interpret the thoughts of a complete stranger. Without the cooperation of the person in question, it is not yet possible to convert their thoughts into words and sentences.

Theoretically, however, that point is not too far away. In the coming years, technical progress could make it possible to analyze thoughts without the preceding training - possibly even without the consent of the person concerned. The team is therefore calling for unambiguous laws to legally safeguard people's mental privacy in the future and to prevent misuse of thought decoders.

Use (still) impractical
In order to apply the method of the U.S. research team in medicine and actually help people with speech problems, more work is needed. Currently, the decoder still relies on extensive data from magnetic resonance imaging. The method is not yet suitable for everyday use due to the size of the equipment required and the high costs involved.

In a next step, the researchers therefore want to find out whether thought decoding could also work with smaller and less expensive sensors.
Raphael Krapscha, Ö1 Science

Source (Austria)
 
Finally. Now I can convince people that I am a dangerous schizophrenic by robotically calling everyone a CIA nigger.
 
Excerpt from the lab testing.

SUBJECT 3
SESSION 12-A


"Wow, so this thing can read my...holy shit it wor...oh god this is nuts. It can say what I think? Rutabega hahaha oh wow...oh shit gotta watch what I think can't think the wrong thought oh damn that lab assistant has huge ti...no don't think about her tit...must not think about her oh man she can hear what I think everyone can hear what I think this is so fuc"

END OF SESSION
 
Ebonics to English is going to be the funniest shit ever.

"Ey Nigga, show me dat pussee so I can slobba all ova it ya feel me?" Translates into: "Fellow human, please show me your female genitals so I may use my tongue on it. It is something I want."
 
I’m sure this thing works about as well as that video game controller that reads your mind by sensing when your eyebrows go up and down.
 
Governments around the world will totally respect your privacy and not use this as a means of blacklisting certain individuals that don't kiss their asses for literal thoughtcrimes. Thanks science!
 
Closer and closer. SAO level tech is getting closer every year.

> Governments around the world will totally respect your privacy and not use this as a means of blacklisting certain individuals that don't kiss their asses for literal thoughtcrimes. Thanks science!

I've thought about this pretty long and decided that this stuff is basically coming whether we like it or not.. better or worse. (there are both with it) As I stated in another topic months back:

I also want to add that yes, I understand the potential implications for horrors beyond imagining and shit we can't even think of yet that we should worry about. But recognizing the potential good and cool, isn't going to speed up or slow down the coming of such tech. We need to make sure it's done in the safest and most respectful (of rights, and of the person) way possible. Just like with any tech. We also need to be open to the concept of needing to actually add new (constitutional) protections. (like "freedom of thought" and to prohibit the mere concept of crimes of thought.) It's better to stay on top of this stuff than to hide away and let others dictate the rules.
 
> Closer and closer. SAO level tech is getting closer every year.

> I've thought about this pretty long and decided that this stuff is basically coming whether we like it or not.. better or worse. (there are both with it) As I stated in another topic months back:
That's what I'm getting at. Of course they'll put in "privacy" laws once this technology advances enough to be used on the go, but will they actually respect those laws? Surveillance without a warrant is against the 4th Amendment, so why does the IRS have millions of Americans' location data?

They have the keys to your jail cell. Do you trust them to let you out every now and then?
 
According to schizo sources they've had this for over a decade now, so I'm not in a position to imagine what the government is holding onto right now. Around the time of the Snowden leaks they were disclosing more rudimentary versions of this technology; there's a great video of an air force lecture from a few years back discussing the implications of such technology in warfare.
 
>Be me
>Watch chimp out videos
>Comprehend the nature of certain races

>Be computer
>I hate ni...

How did it know?
 
Sixteen hours of specific probe/response pattern training to get this is not very impressive. You’re basically yelling CAR at someone for hours and then picking up that person's patterns of firing when cars are discussed.
What would be more interesting is if they could do this using other patients' training profiles.
 