June 19, 2024


Sigal Samuel is a senior correspondent for Future Perfect at Vox and co-host of the Future Perfect podcast. She focuses on consciousness and the profound ethical ramifications of developments in neuroscience and artificial intelligence. Before joining Vox, Sigal was the religion editor at The Atlantic.

I’ve been writing articles about neurotechnology with downright Orwellian headlines for a few years now — headlines like “Facebook is building technology to read your mind” and “Brain-reading technology is coming.”

Now the technology isn’t just “coming.” It’s actually here.

According to a study published in Nature, researchers at the University of Texas at Austin have developed a method that can translate people’s brain activity—like the unspoken thoughts circling in our minds—into actual words.

Researchers had previously shown that they can decode unspoken language by implanting electrodes in the brain and using an algorithm to translate the recorded activity into text on a computer screen. But that approach requires surgery and is highly invasive, so it appealed only to a small group of patients, such as people with paralysis, for whom the benefits outweighed the drawbacks. Scientists therefore developed techniques that don’t require implant surgery, but these could decode only single words or simple mental states like fatigue, not much more.

Now we have a non-invasive brain-computer interface (BCI) that lets someone else read the broad strokes of what we’re thinking even if we haven’t said a single word.

How is that even possible?

It all comes down to combining two technologies: fMRI scans, which measure blood flow to different regions of the brain, and large AI language models similar to the ones behind ChatGPT.

In the University of Texas study, three individuals listened to 16 hours of narrative podcasts like The Moth while researchers used an fMRI machine to track changes in the blood flow to their brains. With this data and an AI model, the researchers worked out how to match a sentence to the changes that occur in each person’s brain when they hear a given word.
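The mapping from heard words to brain responses can be sketched as a simple linear "encoding model." The snippet below is a toy illustration of that idea, not the study's actual pipeline: the feature dimensions, voxel counts, and ridge penalty are all invented for the example.

```python
import numpy as np

# Toy encoding model: predict each voxel's blood-flow response from
# numeric features of the words a listener heard. Shapes and the noise
# level are illustrative, not taken from the study.
rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 200, 16, 50

word_features = rng.normal(size=(n_timepoints, n_features))  # e.g. word embeddings
true_weights = rng.normal(size=(n_features, n_voxels))
fmri = word_features @ true_weights + 0.1 * rng.normal(size=(n_timepoints, n_voxels))

# Ridge regression: W = (X^T X + lambda*I)^-1 X^T Y
lam = 1.0
XtX = word_features.T @ word_features
weights = np.linalg.solve(XtX + lam * np.eye(n_features), word_features.T @ fmri)

# How well does the fitted model predict the measured responses?
predicted = word_features @ weights
corr = np.corrcoef(predicted.ravel(), fmri.ravel())[0, 1]
print(round(corr, 2))
```

Once such a model exists, it can be run in reverse at decoding time: given a measured brain response, candidate sentences are scored by how closely their predicted responses match it.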

The scientists also employed a language model, specifically GPT-1, to predict which words are likely to come next in a sequence and to narrow the vast space of possible word sequences down to well-formed English, since most candidates would be gibberish.
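The division of labor between the two models can be illustrated with a toy scorer: the language model downweights gibberish continuations, while the encoding model rewards candidates whose predicted brain response matches the measurement. Everything here — the vocabulary, the scores, the response vectors — is made up for illustration.

```python
import numpy as np

# Toy decoding step: combine a language-model prior with an encoding-model
# fit to pick the next word. All values are illustrative.
vocab = ["the", "dog", "ran", "home", "xqz"]
rng = np.random.default_rng(1)
word_resp = {w: rng.normal(size=8) for w in vocab}  # predicted brain response per word

def lm_score(prev: str, word: str) -> float:
    # Stand-in for GPT-1: gibberish like "xqz" gets a heavy penalty.
    return -10.0 if word == "xqz" else 0.0

def brain_score(word: str, measured: np.ndarray) -> float:
    # Negative distance between predicted and measured response.
    return -float(np.linalg.norm(word_resp[word] - measured))

# Simulate a measurement made while the listener heard "dog".
measured = word_resp["dog"] + 0.01 * rng.normal(size=8)

# Score every candidate by the sum of both terms and pick the best.
scores = {w: lm_score("the", w) + brain_score(w, measured) for w in vocab}
best = max(scores, key=scores.get)
print(best)  # the combined score should recover "dog"
```

The real system searches over whole word sequences (a beam search) rather than single words, but the scoring principle is the same.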

The end result is a decoder that captures the general meaning of what someone is thinking, despite occasional mistakes. For instance, participants were asked to imagine telling a story while inside the fMRI machine. Afterward, they told the story aloud so the researchers could assess how closely the decoded version matched the original.

The decoder interpreted the participant’s thought, “Look for a message from my wife saying that she had changed her mind and was coming back,” as, “To see her, for some reason I thought she would come to me and say she misses me.”

Here’s another example. The decoder rendered the participant’s thought, “Coming down a hill at me on a skateboard and he was going really fast and he stopped just in time,” as “He couldn’t get to me fast enough, he drove straight up into my lane and tried to ram me.”

Although it is not a word-for-word translation, the gist is largely preserved. That represents an advance well beyond what earlier brain-reading technology could do, and it raises serious ethical questions.
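To get a rough sense of how “the gist is preserved” might be quantified, here is a toy word-overlap measure applied to the first example above. The study used far more sophisticated language-similarity metrics; this function and its stop-word list are purely illustrative.

```python
# Crude "gist similarity": fraction of shared content words between two
# sentences (Jaccard overlap after dropping common function words).
def gist_overlap(a: str, b: str) -> float:
    stop = {"a", "the", "to", "and", "he", "she", "me", "my",
            "was", "that", "for", "i", "had", "would"}
    wa = {w.lower().strip(".,") for w in a.split()} - stop
    wb = {w.lower().strip(".,") for w in b.split()} - stop
    return len(wa & wb) / max(len(wa | wb), 1)

original = "look for a message from my wife saying that she had changed her mind"
decoded = "to see her for some reason i thought she would come to me"
print(gist_overlap(original, decoded))
```

A literal word-overlap score for this pair is low, which is exactly why meaning-level metrics (and human judgment) are needed to see that the decoder got the gist right.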

The harrowing ethical implications of brain-computer interfaces
It may be hard to believe this is not a scene from a Neal Stephenson or William Gibson novel. Yet this technology is already changing people’s lives. Over the past 12 years, several disabled people have received brain implants that let them move a computer cursor or control robotic limbs with their thoughts.

The BCIs being developed by Elon Musk’s Neuralink and Mark Zuckerberg’s Meta could one day allow you to operate your phone or computer just with your thoughts. These BCIs could capture thoughts directly from your neurons and transform them into words in real time.

Commercially available, portable, non-invasive BCIs that can read thoughts are still years away; you can’t carry around an fMRI machine, which can cost up to $3 million. Portable systems could potentially rely on functional near-infrared spectroscopy (fNIRS), which detects the same activity as fMRI at lower resolution, though the study’s decoding method has not yet been shown to work with it.

Is that a good thing? Like many cutting-edge breakthroughs, this one has the potential to cause serious ethical problems.

Let’s begin by stating the obvious. The last remaining privacy frontier is our minds. They serve as the foundation for our sense of self and our most private ideas. What is ours to govern if not the priceless three pounds of goo within our skulls?

Consider a situation in which companies have access to people’s brain data. They could use that information to market products to us in ways that are nearly impossible to resist. Because most of our purchasing decisions are driven by unconscious impressions, consumer surveys and focus groups give advertisers only limited insight. Going straight to the source, the consumer’s brain, would yield far more accurate information. Advertisers in the burgeoning field of “neuromarketing” are already attempting this by studying how viewers’ brains respond to ads. If advertisers got access to vast amounts of brain data, you could find yourself experiencing strong urges to buy particular things without fully understanding why.

Or picture a situation where law enforcement or the government uses BCIs for surveillance or interrogation. In a world where authorities can listen in on your thoughts without your permission, the US Constitution’s protection against self-incrimination could lose all of its value. Think of the science fiction film Minority Report, in which a special police unit called the PreCrime Division tracks down and arrests murderers before they commit their crimes.

Some neuroethicists contend that before these technologies are implemented, we need new human rights legislation to safeguard us from their potential for abuse.

Nita Farahany, author of The Battle for Your Brain, told me, “This research demonstrates how quickly generative AI is making it possible for even our thoughts to be read.” “We need to protect humanity with a right to self-determination over our brains and mental experiences before neurotechnology is used on a large scale in society.”
