Neuralink Rival’s Biohybrid Implant Connects to the Brain With Living Neurons

Brain implants have improved dramatically in recent years, but they’re still invasive and unreliable. A new kind of brain-machine interface using living neurons to form connections could be the future.

While companies like Neuralink have recently provided some flashy demos of what could be achieved by hooking brains up to computers, the technology still has serious limitations preventing wider use.

Non-invasive approaches like electroencephalograms (EEGs) provide only coarse readings of neural signals, limiting their functionality. Directly implanting electrodes in the brain can provide a much clearer connection, but such risky medical procedures are hard to justify for all but the most serious conditions.

California-based startup Science Corporation thinks that an implant using living neurons to connect to the brain could better balance safety and precision. In recent non-peer-reviewed research posted on bioRxiv, the group showed a prototype device could connect with the brains of mice and even let them detect simple light signals.

“The principal advantages of a biohybrid implant are that it can dramatically change the scaling laws of how many neurons you can interface with versus how much damage you do to the brain,” Alan Mardinly, director of biology at Science Corporation, told New Scientist.

The company’s CEO Max Hodak is a former president of Neuralink, and his company also produces a retinal implant using more conventional electronics that can restore vision in some patients. But the company has been experimenting with so-called “biohybrid” approaches, which Hodak thinks could provide a more viable long-term solution for brain-machine interfaces.

“Placing anything into the brain inevitably destroys some amount of brain tissue,” he wrote in a recent blog post. “Destroying 10,000 cells to record from 1,000 might be perfectly justified if you have a serious injury and those thousand neurons create a lot of value—but it really hurts as a scaling characteristic.”

Instead, the company has developed a honeycomb-like structure made of silicon featuring more than 100,000 “microwells”—cylindrical holes roughly 15 micrometers deep. Individual neurons are inserted into each of these microwells, and the array can then be surgically implanted onto the surface of the brain.

The idea is that while the neurons remain housed in the implant, their axons—long strands that carry nerve signals away from the cell body—and their dendrites—the branched structures that form synapses with other cells—will be free to integrate with the host’s brain cells.

To see if the idea works in practice they installed the device in mice, using neurons genetically modified to react to light. Three weeks after implantation, they carried out a series of experiments where they trained the mice to respond whenever a light was shone on the device. The mice were able to detect when this happened, suggesting the light-sensitive neurons had merged with their native brain cells.

While it’s early days, the approach has significant benefits. You can squeeze far more neurons than electrodes into a millimeter-scale chip, and each of those neurons can form many connections. That means the potential bandwidth of a biohybrid device could be much higher than that of a conventional neural implant. The approach is also much less damaging to the patient’s brain.

However, the lifetime of these kinds of devices could be a concern—after 21 days, only 50 percent of the neurons had survived. And the company needs to find a way to ensure the neurons don’t elicit a negative immune response in the patient.

If the approach works though, it could be an elegant and potentially safer way to merge man and machine.

Image Credit: Science Corporation

The Legally Blind See Again With an Implant the Size of a Grain of Salt

Seeing is believing. Our perception of the world heavily relies on vision.

What we see depends on cells in the retina, which sit behind the eyes. These delicate cells transform light into electrical pulses that go to the brain for further processing.

But because of age, disease, or genetics, retinal cells often break down. In people with geographic atrophy—a disease that gradually destroys retinal cells—the eyes struggle to focus on text, recognize faces, and decipher color or texture in the dark. The disease especially attacks central vision, which lets our eyes focus on specific things.

The result is seeing the world through a blurry lens. Walking down the street in dim light becomes a nightmare, each surface looking like a distorted version of itself. Reading a book or watching a movie is more frustrating than relaxing.

But the retina is hard to regenerate, and the number of transplant donors can’t meet demand. A small clinical trial may have a solution. Led by Science Corporation, a brain-machine interface company headquartered in Alameda, California, the study implanted a tiny chip that acts like a replacement retina in 38 participants who were legally blind.

In the trial, dubbed PRIMAvera, volunteers wore custom-designed eyewear with a camera acting as a “digital eye.” Captured images were then transmitted to the implanted artificial retina, which translated the information into electrical signals for the brain to decipher.

Preliminary results found a boost in the participants’ ability to read an eye exam chart—a common test of random letters, with each line smaller than the last. Some could even read longer texts in a dim environment at home with the camera’s “zoom-and-enhance” function.

The trial is ongoing, with final results expected in 2026—three years after the implant. But according to Frank Holz at the University of Bonn in Germany, the study’s scientific coordinator, the results are a “milestone” for geographic atrophy resulting from age.

“Prior to this, there have been no real treatment options for these patients,” he said in a press release.

Max Hodak, CEO of Science Corp and former president of Elon Musk’s Neuralink, said, “To my knowledge, this is the first time that restoration of the ability to fluently read has ever been definitively shown in blind patients.”

Eyes Wide Open

The eye is a biological wonder. The eyeball’s layers act as a lens focusing light onto the retina—the eye’s visual “sensor.” The retina contains two types of light-sensitive cells: Rods and cones.

The rods mostly line the outer edges of the retina, letting us see shapes and shadows in the dark or at the periphery. But these cells can’t detect color or sharpen their focus, which is why night vision feels blurrier. However, rods readily pick up action at the edges of sight—such as seeing rapidly moving things out of the corner of your eye.

Cones pick up the slack. These cells are mostly in the center of the retina and can detect vibrant colors and sharply focus on specific things, like the words you’re currently reading.

Both cell types rely on other cells to flourish. These cells coat the retina, and like soil in a garden, provide a solid foundation in which the rods and cones can grow.

With age, all these cells gradually deteriorate, sometimes resulting in age-related macular degeneration and the gradual loss of central vision. It’s a common condition that affects nearly 20 million Americans aged 40 or older. Details become hard to see; straight lines may seem crooked; colors look dim, especially in low-light conditions. Later stages, called geographic atrophy, result in legal blindness.

Scientists have long searched for a treatment. One idea is to use a 3D-printed stem cell patch made out of the base “garden soil” cells that support light-sensitive rods and cones. Here, doctors transform a patient’s own blood cells into healthy retinal support cells, attach them to a biodegradable scaffold, and transplant them into the eye.

Initial results showed the patch integrated into the retina and slowed and even reversed the disease. But this can take six months and is tailored for each patient, making it difficult to scale.

A New Vision

The Prima system eschews regeneration for a wireless microchip that replaces parts of the retina. The two-millimeter square implant—roughly the size of a grain of salt—is surgically inserted under the retina. The procedure may sound daunting, but according to Wired, it takes only 80 minutes, less time than your average movie. Each chip contains nearly 400 light-sensitive pixels, which convert light patterns into electrical pulses the brain can interpret. The system also includes a pair of glasses with a camera to capture visual information and beam it to the chip using infrared light.

Together, the components work like our eyes do: Images from the camera are sent to the artificial retina “chip,” which transforms them into electrical signals for the brain.
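
To make that dataflow concrete, here is a minimal Python sketch of the camera-to-chip pipeline. Everything in it is an illustrative assumption rather than Science Corporation’s actual software: the function name, the 20-by-20 grid standing in for the chip’s roughly 400 pixels, and the brightness-to-pulse mapping.

```python
import numpy as np

def frame_to_pulses(image, grid=20):
    """Average-pool a grayscale camera frame to a grid x grid array and map
    brightness to stimulation amplitudes in [0, 1] (all values illustrative)."""
    h, w = image.shape
    image = image[: h - h % grid, : w - w % grid]   # trim to a multiple of the grid
    blocks = image.reshape(grid, image.shape[0] // grid,
                           grid, image.shape[1] // grid).mean(axis=(1, 3))
    lo, hi = blocks.min(), blocks.max()
    return (blocks - lo) / (hi - lo + 1e-9)         # one value per stimulation site

frame = np.random.rand(480, 640)   # stand-in for a frame from the glasses' camera
pulses = frame_to_pulses(frame)
print(pulses.shape)                # (20, 20), on the order of the chip's ~400 pixels
```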

Initial results were promising. According to the company, the patients had improved visual acuity a year after the implant. At the beginning of the study, most were considered legally blind with an average vision of 20/450, compared to the normal 20/20. When challenged with an eye exam test, the patients could read, on average, roughly 23 more letters—or five more lines down the chart—compared to tests taken before they received the implant. One patient especially excelled, improving their performance by 59 letters—over 11 lines.

The Prima implant also impacted their daily lives. Participants were able to read, play cards, and tackle crossword puzzles—all activities that require central vision.

While impressive, the system didn’t work for everyone. The implant caused serious side effects in some participants—such as a small tear in the retina—which were mostly resolved according to the company. Some people also experienced blood leaks under the retina that were promptly treated. However, few details regarding the injuries or treatments were released.

The trial is ongoing, with the goal of following participants for three years to track improvements and monitor side effects. The team is also looking to measure their quality of life—how the system affects daily activities that require vision and mental health.

The trial “represents an enormous turning point for the field, and we’re incredibly excited to bring this important technology to market over the next few years,” said Hodak.

Image Credit: Arteum.ro on Unsplash

Elon Musk Says First Neuralink Patient Can Move Computer Cursor With Mind

Neural interfaces could present an entirely new way for humans to connect with technology. Elon Musk says the first human user of his startup Neuralink’s brain implant can now move a mouse cursor using their mind alone.

While brain-machine interfaces have been around for decades, they have primarily been research tools that are far too complicated and cumbersome for everyday use. But in recent years, a number of startups have cropped up promising to develop more capable and convenient devices that could help treat a host of conditions.

Neuralink is one of the firms leading that charge. Last September, the company announced it had started recruiting for the first clinical trial of its device after receiving clearance from the US Food and Drug Administration earlier in the year. And in a discussion on his social media platform X last week, Musk announced the company’s first patient was already able to control a cursor roughly a month after implantation.

“Progress is good, patient seems to have made a full recovery…and is able to control the mouse, move the mouse around the screen just by thinking,” Musk said, according to CNN. “We’re trying to get as many button presses as possible from thinking, so that’s what we’re currently working on.”

Controlling a cursor with a brain implant is nothing new—an academic team achieved the same feat as far back as 2006. And competitor Synchron, which makes a BMI that is implanted through the brain’s blood vessels, has been running a trial since 2021 in which volunteers have been able to control computers and smartphones using their mind alone.

Musk’s announcement nonetheless represents rapid progress for a company that only unveiled its first prototype in 2019. And while the company’s technology works on similar principles to previous devices, it promises far higher precision and ease of use.

That’s because each chip features 1,024 electrodes split between 64 threads thinner than a human hair that are inserted into the brain by a “sewing machine-like” robot. That is far more electrodes per unit volume than any previous BMI, which means the device should be capable of recording from many individual neurons at once.

And while most previous BMIs required patients be wired to bulky external computers, the company’s N1 implant is wireless and features a rechargeable battery. That makes it possible to record brain activity during everyday activities, greatly expanding the research potential and prospects for using it as a medical device.

Recording from individual neurons is a capability that has mainly been restricted to animal studies so far, Wael Asaad, a professor of neurosurgery and neuroscience at Brown University, told The Brown Daily Herald, so being able to do the same in humans would be a significant advance.

“For the most part, when we work with humans, we record from what are called local field potentials—which are larger scale recordings—and we’re not actually listening to individual neurons,” he said. “Higher resolution brain interfaces that are fully wireless and allow two-way communication with the brain are going to have a lot of potential uses.”

In the initial clinical trial, the device’s electrodes will be implanted in a brain region associated with motor control. But Musk has espoused much more ambitious goals for the technology, such as treating psychiatric disorders like depression, allowing people to control advanced prosthetic limbs, or even making it possible to eventually merge our minds with computers.

There’s probably a long way to go before that’s in the cards though, Justin Sanchez, from nonprofit research organization Battelle, told Wired. Decoding anything more complicated than basic motor signals or speech will likely require recording from many more neurons in different regions, most likely using multiple implants.

“There’s a huge gap between what is being done today in a very small subset of neurons versus understanding complex thoughts and more sophisticated cognitive kinds of things,” Sanchez said.

So, as impressive as the company’s progress has been so far, it’s likely to be some time before the technology is employed for anything other than a narrow set of medical applications, particularly given its invasiveness. That means most of us will be stuck with our touchscreens for the foreseeable future.

Image Credit: Neuralink

Generative AI Reconstructs Videos People Are Watching by Reading Their Brain Activity

The ability of machines to read our minds has been steadily progressing in recent years. Now, researchers have used AI video generation technology to give us a window into the mind’s eye.

The main driver behind attempts to interpret brain signals is the hope that one day we might be able to offer new windows of communication for those in comas or with various forms of paralysis. But there are also hopes that the technology could create more intuitive interfaces between humans and machines, with applications for healthy people too.

So far, most research has focused on efforts to recreate the internal monologues of patients, using AI systems to pick out what words they are thinking of. The most promising results have also come from invasive brain implants that are unlikely to be a practical approach for most people.

Now though, researchers from the National University of Singapore and the Chinese University of Hong Kong have shown that they can combine non-invasive brain scans and AI image generation technology to create short snippets of video that are uncannily similar to clips that the subjects were watching when their brain data was collected.

The work is an extension of research the same authors published late last year, where they showed they could generate still images that roughly matched the pictures subjects had been shown. This was achieved by first training one model on large amounts of data collected using fMRI brain scanners. This model was then combined with the open-source image generation AI Stable Diffusion to create the pictures.

In a new paper published on the preprint server arXiv, the authors take a similar approach, but adapt it so the system can interpret streams of brain data and convert them into videos rather than stills. First, they trained one model on large amounts of fMRI data so it could learn the general features of these brain scans. This was then augmented so it could process a succession of fMRI scans rather than individual ones, and then trained again on combinations of fMRI scans, the video snippets that elicited that brain activity, and text descriptions.

Separately, the researchers adapted the pre-trained Stable Diffusion model to produce video rather than still images. It was then trained again on the same videos and text descriptions that the first model had been trained on. Finally, the two models were combined and fine-tuned together on fMRI scans and their associated videos.
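
A rough structural sketch of that staged training, written in PyTorch, may help. The module names, layer sizes, and the GRU standing in for the temporal stage are all assumptions made for illustration; the real system was trained on large fMRI datasets and built around a video-adapted Stable Diffusion model.

```python
import torch
import torch.nn as nn

class FMRIEncoder(nn.Module):
    """Stage 1: learn general features of single fMRI scans (a plain MLP here)."""
    def __init__(self, voxels=4096, dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(voxels, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, scan):                  # scan: (batch, voxels)
        return self.net(scan)

class FMRISequenceEncoder(nn.Module):
    """Stage 2: process a succession of scans so the embedding tracks time."""
    def __init__(self, encoder, dim=256):
        super().__init__()
        self.encoder = encoder
        self.temporal = nn.GRU(dim, dim, batch_first=True)
    def forward(self, scans):                 # scans: (batch, time, voxels)
        feats = self.encoder(scans.flatten(0, 1)).unflatten(0, scans.shape[:2])
        out, _ = self.temporal(feats)
        return out                            # (batch, time, dim) conditioning signal

# Stage 3 (not shown): align these embeddings with video frames and text
# descriptions, then fine-tune jointly with the video-generation model.
seq_encoder = FMRISequenceEncoder(FMRIEncoder())
cond = seq_encoder(torch.randn(2, 8, 4096))   # 8 scans -> conditioning for 8 steps
print(cond.shape)                             # torch.Size([2, 8, 256])
```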

The resulting system was able to take fresh fMRI scans it hadn’t seen before and generate videos that broadly resembled the clips human subjects had been watching at the time. While far from a perfect match, the AI’s output was generally pretty close to the original video, accurately recreating crowd scenes or herds of horses and often matching the color palette.

To evaluate their system, the researchers used a video classifier designed to assess how well the model had understood the semantics of the scene—for instance, whether it had realized the video was of fish swimming in an aquarium or a family walking down a path—even if the imagery was slightly different. Their model scored 85 percent, which is a 45 percent improvement over the state-of-the-art.

While the videos the AI generates are still glitchy, the authors say this line of research could ultimately have applications in both basic neuroscience and also future brain-machine interfaces. However, they also acknowledge potential downsides to the technology. “Governmental regulations and efforts from research communities are required to ensure the privacy of one’s biological data and avoid any malicious usage of this technology,” they write.

That is likely a nod to concerns that the combination of AI and brain-scanning technology could make it possible for people to intrusively record others’ thoughts without their consent. Anxieties were also voiced earlier this year when researchers used a similar approach to essentially create a rough transcript of the voice inside people’s heads, though experts have pointed out that this would be impractical if not impossible for the foreseeable future.

But whether you see it as a creepy invasion of your privacy or an exciting new way to interface with technology, it seems machine mind readers are edging closer to reality.

Image Credit: Claudia Dewald from Pixabay

This Brain Activity Decoder Translates Ideas Into Text Using Only Brain Scans

Language and speech are how we express our inner thoughts. But neuroscientists just bypassed the need for audible speech, at least in the lab. Instead, they directly tapped into the biological machine that generates language and ideas: the brain.

Using brain scans and a hefty dose of machine learning, a team from the University of Texas at Austin developed a “language decoder” that captures the gist of what a person hears based on their brain activation patterns alone. Far from a one-trick pony, the decoder can also translate imagined speech, and even generate descriptive subtitles for silent movies using neural activity.

Here’s the kicker: the method doesn’t require surgery. Rather than relying on implanted electrodes, which listen in on electrical bursts directly from neurons, the neurotechnology uses functional magnetic resonance imaging (fMRI), a completely non-invasive procedure, to generate brain maps that correspond to language.

To be clear, the technology isn’t mind reading. In each case, the decoder produces paraphrases that capture the general idea of a sentence or paragraph. It does not reproduce every single word. Yet that’s also the decoder’s power.

“We think that the decoder represents something deeper than languages,” said lead study author Dr. Alexander Huth in a press briefing. “We can recover the overall idea…and see how the idea evolves, even if the exact words get lost.”

The study, published this week in Nature Neuroscience, represents a powerful first push into non-invasive brain-machine interfaces for decoding language—a notoriously difficult problem. With further development, the technology could help those who have lost the ability to speak to regain their ability to communicate with the outside world.

The work also opens new avenues for learning about how language is encoded in the brain, and for AI scientists to dig into the “black box” of machine learning models that process speech and language.

“It was a long time coming…we were kinda shocked that this worked as well as it does,” said Huth.

Decoding Language

Translating brain activity to speech isn’t new. One previous study used electrodes placed directly in the brains of patients with paralysis. By listening in on the neurons’ electrical chattering, the team was able to reconstruct full words from the patients’ neural activity.

Huth decided to take an alternative, if daring, route. Instead of relying on neurosurgery, he opted for a non-invasive approach: fMRI.

“The expectation among neuroscientists in general that you can do this kind of thing with fMRI is pretty low,” said Huth.

There are plenty of reasons. Unlike implants that tap directly into neural activity, fMRI measures how oxygen levels in the blood change. This is called the BOLD signal. Because more active brain regions require more oxygen, BOLD responses act as a reliable proxy for neural activity. But it comes with problems: the signals are sluggish compared to electrical recordings, and they can be noisy.
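
The sluggishness is easy to see in code: BOLD is commonly modeled as neural activity convolved with a slow hemodynamic response function (HRF). The snippet below uses a textbook double-gamma HRF; the parameters are generic illustrations, not values from this study.

```python
import numpy as np
from scipy.stats import gamma

dt = 0.5                                         # seconds per sample
t = np.arange(0, 30, dt)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)  # double gamma: peak, then undershoot
hrf /= hrf.sum()

neural = np.zeros(120)                           # one minute of "neural activity"
neural[20] = 1.0                                 # a brief burst at t = 10 s
bold = np.convolve(neural, hrf)[: neural.size]   # what the scanner would actually see

print(bold.argmax() * dt - 10.0)                 # the BOLD peak lags the burst by ~5 s
```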

Yet fMRI has a massive perk compared to brain implants: it can monitor the entire brain at high resolution. Compared to gathering data from a nugget in one region, it provides a bird’s-eye view of higher-level cognitive functions—including language.

For decoding language, most previous studies tapped into the motor cortex, an area that controls how the mouth and larynx move to generate speech—a more “surface level” stage of language processing concerned with articulation. Huth’s team decided to go one abstraction up: into the realm of thoughts and ideas.

Into the Unknown

The team realized they needed two things from the outset. One, a dataset of high-quality brain scans to train the decoder. Two, a machine learning framework to process the data.

To generate the brain map database, seven volunteers had their brains repeatedly scanned as they listened to podcast stories, with their neural activity measured inside an MRI machine. Lying inside a giant, noisy magnet isn’t fun for anyone, and the team took care to keep the volunteers interested and alert, since attention factors into decoding.

For each person, the ensuing massive dataset was fed into a framework powered by machine learning. Thanks to the recent explosion in machine learning models that help process natural language, the team was able to harness those resources and readily build the decoder.

It’s got multiple components. The first is an encoding model using the original GPT, the predecessor to the massively popular ChatGPT. The model takes each word and predicts how the brain will respond. Here, the team fine-tuned GPT using over 200 million total words from Reddit comments and podcasts.

The second part uses a popular technique in machine learning called Bayesian decoding. The algorithm guesses the next word based on the previous sequence and checks the brain activity predicted for each guess against the brain’s actual response.

For example, one podcast episode had “my dad doesn’t need it…” as a storyline. When fed into the decoder as a prompt, it came up with potential continuations: “much,” “right,” “since,” and so on. Comparing the brain activity predicted for each candidate word to the activity generated by the actual word helped the decoder home in on each person’s brain activity patterns and correct for mistakes.

After repeating the process with the best predicted words, the decoding aspect of the program eventually learned each person’s unique “neural fingerprint” for how they process language.
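
In code terms, this guess-and-check loop resembles a beam search. The sketch below is a minimal illustration of the idea; `propose_words` and `predict_bold` are hypothetical stand-ins for the language model and the encoding model, and the scoring is deliberately simplified.

```python
import numpy as np

def decode_step(beams, measured_bold, propose_words, predict_bold, beam_width=5):
    """One guess-and-check step of the decoder, beam-search style.

    beams: list of (word_list, score) candidates kept so far.
    propose_words(words) -> [(next_word, lm_log_prob), ...]  (language model stub)
    predict_bold(words) -> predicted brain response array    (encoding model stub)
    """
    candidates = []
    for words, score in beams:
        for next_word, lm_logp in propose_words(words):      # e.g. "much", "right", ...
            predicted = predict_bold(words + [next_word])
            fit = -np.sum((predicted - measured_bold) ** 2)  # closeness to the real scan
            candidates.append((words + [next_word], score + lm_logp + fit))
    candidates.sort(key=lambda c: c[1], reverse=True)
    return candidates[:beam_width]                           # keep only the best guesses
```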

A Neuro Translator

As a proof of concept, the team pitted the decoded responses against the actual story text.

It came surprisingly close, but only for the general gist. For example, one story line, “we start to trade stories about our lives we’re both from up north,” was decoded as “we started talking about our experiences in the area he was born in I was from the north.”

This paraphrasing is expected, explained Huth. Because fMRI is rather noisy and sluggish, it’s nearly impossible to capture and decode each word. The decoder is fed a mishmash of words and needs to disentangle their meanings using features like turns of phrase.

[Figure: actual vs. decoded stimuli. Image Credit: The University of Texas at Austin]

In contrast, ideas are more permanent and change relatively slowly. Because fMRI has a lag when measuring neural activity, it captures abstract concepts and thoughts better than specific words.

This high-level approach has perks. While lacking fidelity, the decoder captures a higher level of language representation than previous attempts, including for tasks not limited to speech alone. In one test, the volunteers watched an animated clip of a girl being attacked by dragons without any sound. Using brain activity alone, the decoder described the scene from the protagonist’s perspective as a text-based story. In other words, the decoder was able to translate visual information directly into a narrative based on a representation of language encoded in brain activity.

Similarly, the decoder also reconstructed one-minute-long imagined stories from the volunteers.

After over a decade working on the technology, “it was shocking and exciting when it finally did work,” said Huth.

Although the decoder doesn’t exactly read minds, the team was careful to assess mental privacy. In a series of tests, they found that the decoder only worked with the volunteers’ active mental participation. Asking participants to count by sevens, name different animals, or mentally construct their own stories rapidly degraded the decoder, said first author Jerry Tang. In other words, the decoder can be “consciously resisted.”

For now, the technology only works after months of careful brain scans in a loudly humming machine while lying completely still—hardly feasible for clinical use. The team is working on translating the technology to fNIRS (functional near-infrared spectroscopy), which measures blood oxygen levels in the brain. Although it has a lower resolution than fMRI, fNIRS is far more portable as the main hardware is a swimming-cap-like device that easily fits under a hoodie.

“With tweaks, we should be able to translate the current setup to fNIRS wholesale,” said Huth.

The team is also planning to use newer language models to boost the decoder’s accuracy, and potentially bridge different languages. Because languages have a shared neural representation in the brain, the decoder could in theory encode one language and use the neural signals to decode it into another.

It’s an “exciting future direction,” said Huth.

Image Credit: Jerry Tang/Martha Morales/The University of Texas at Austin

Scientists Merge Biology and Technology by 3D Printing Electronics Inside Living Worms

Finding ways to integrate electronics into living tissue could be crucial for everything from brain implants to new medical technologies. A new approach has shown that it’s possible to 3D print circuits into living worms.

There has been growing interest in finding ways to more closely integrate technology with the human body, in particular when it comes to interfacing electronics with the nervous system. This will be crucial for future brain-machine interfaces and could also be used to treat a host of neurological conditions.

But for the most part, it’s proven difficult to make these kinds of connections in ways that are non-invasive, long-lasting, and effective. The rigid nature of standard electronics means they don’t mix well with the squishy world of biology, and getting them inside the body in the first place can require risky surgical procedures.

A new approach relies instead on laser-based 3D printing to grow flexible, conductive wires inside the body. In a recent paper in Advanced Materials Technologies, researchers showed they could use the approach to produce star- and square-shaped structures inside the bodies of microscopic worms.

“Hypothetically, it will be possible to print quite deep inside the tissue,” John Hardy at Lancaster University, who led the study, told New Scientist. “So, in principle, with a human or other larger organism, you could print around 10 centimeters in.”

The researchers’ approach involves a high-resolution Nanoscribe 3D printer, which fires out an infrared laser that can cure a variety of light-sensitive materials with very high precision. They also created a bespoke ink that includes the conducting polymer polypyrrole, which previous research had shown could be used to electrically stimulate cells in living animals.

To prove the scheme could achieve the primary goal of interfacing with living cells, the researchers first printed circuits into a polymer scaffold and then placed the scaffold on top of a slice of mouse brain tissue being kept alive in a petri dish. They then passed a current through the flexible electronic circuit and showed that it produced the expected response in the mouse brain cells.

The team then decided to demonstrate the approach could be used to print conductive circuits inside a living creature, something that had so far not been achieved. The researchers decided to use the roundworm C. elegans due to its sensitivity to heat, injury, and drying out, which they said would make for a stringent test of how safe the approach is.

First, the team had to adjust their ink to make sure it wasn’t toxic to the animals. They then had to get it inside the worms by mixing it with the bacterial paste they’re fed on.

Once the animals had ingested the ink, they were placed under the Nanoscribe printer, which was used to create square and star shapes a few micrometers across on the worms’ skin and within their guts. The shapes printed in the gut didn’t come out properly though, the researchers admit, because the organ was constantly moving.

The shapes printed inside the worms’ bodies had no functionality. But Ivan Minev from the University of Sheffield told New Scientist the approach could one day make it possible to build electronics intertwined with living tissue, though it would still take considerable work before it was applicable in humans.

The authors also admit that adapting the approach for biomedical applications would require significant further research. But in the long run, they believe their work could enable tailor-made brain-machine interfaces for medical purposes, future neuromodulation implants, and virtual reality systems. It could also make it possible to easily repair bioelectronic implants within the body.

All that’s likely still a long way from being realized, but the approach shows the potential of combining 3D printing with flexible, biocompatible electronics to help interface the worlds of biology and technology.

Image Credit: Kbradnam/Wikimedia Commons

This Researcher Knew What Song People Were Listening to Based on Their Brain Activity

The human brain remains the most mysterious organ in our bodies. From memory and consciousness to mental illness and neurological disorders, there remain volumes of research and study to be done before we understand the intricacies of our own minds. But to some degree, researchers have succeeded in tapping into our thoughts and feelings, whether roughly grasping the content of our dreams, observing the impact of psilocybin on brain networks disrupted by depression, or being able to predict what sorts of faces we’ll find attractive.

A study published earlier this year described a similar feat of decoding brain activity. Ian Daly, a researcher from the University of Sussex in England, used brain scans to predict what piece of music people were listening to with 72 percent accuracy. Daly described his work, which used two different forms of “neural decoders,” in a paper in Nature.

While participants in his study listened to music, Daly recorded their brain activity using both electroencephalography (EEG)—which uses a network of electrodes and wires to pick up the electrical signals of neurons firing in the brain—and functional magnetic resonance imaging (fMRI), which shows changes in blood oxygenation and flow that occur in response to neural activity.

EEG and fMRI have opposite strengths: the former tracks brain activity on fast time scales, but only from the surface of the brain, since the electrodes sit on the scalp. The latter can capture activity deeper in the brain, but only averaged over longer periods of time. Using both gave Daly the best of both worlds.

He monitored the brain regions that had high activity during music trials versus no-music trials, pinpointing the left and right auditory cortex, the cerebellum, and the hippocampus as the critical regions for listening to music and having an emotional response to it—though he noted that there was a lot of variation between different participants in terms of the activity in each region. This makes sense, as one person may have an emotional response to a given piece of music while another finds the same piece boring.

Using both EEG and fMRI, Daly recorded brain activity from 18 people while they listened to 36 different songs. He fed the brain activity data into a bidirectional long short-term memory (biLSTM) deep neural network, creating a model that could reconstruct the music heard by participants using their EEG.

A biLSTM is a type of recurrent neural network that’s commonly used for natural language processing applications. It adds an extra layer onto a regular long short-term memory (LSTM) network, and that extra layer reverses the information flow, allowing the input sequence to run backward. The network’s input thus flows both forwards and backwards (hence “bi-directional”), and it’s capable of using information from both sides. This makes it a good tool for modeling the dependencies between words and phrases—or, in this case, between musical notes and sequences.
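
For readers who think in code, here is a minimal PyTorch sketch of a biLSTM mapping EEG windows to per-timestep audio features. The channel counts, layer sizes, and output features are assumptions for illustration, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class EEGToAudioFeatures(nn.Module):
    """Illustrative biLSTM: EEG time series in, audio feature estimates out."""
    def __init__(self, eeg_channels=64, hidden=128, audio_features=80):
        super().__init__()
        self.bilstm = nn.LSTM(eeg_channels, hidden, num_layers=2,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, audio_features)  # both directions concatenated
    def forward(self, eeg):          # eeg: (batch, time, channels)
        out, _ = self.bilstm(eeg)    # out: (batch, time, 2 * hidden)
        return self.head(out)        # per-timestep audio feature estimates

model = EEGToAudioFeatures()
recon = model(torch.randn(1, 250, 64))   # one 250-sample EEG window, 64 channels
print(recon.shape)                       # torch.Size([1, 250, 80])
```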

Daly used the data from the biLSTM network to roughly reconstruct songs based on people’s EEG activity, and he was able to figure out which piece of music they’d been listening to with 72 percent accuracy.

He then recorded data from 20 new participants just using EEG, with his initial dataset providing insight into the sources of these signals. Based on that data, his accuracy for pinpointing songs went down to 59 percent.

However, Daly believes his method can be used to help develop brain-computer interfaces (BCIs) to assist people who’ve had a stroke or who suffer from other neurological conditions that can cause paralysis, such as ALS. BCIs that can translate brain activity into words would allow these people to communicate with their loved ones and care providers in a way that may otherwise be impossible. While solutions already exist in the form of brain implants, if technology like Daly’s could accomplish similar outcomes, it would be much less invasive to patients.

“Music is a form of emotional communication and is also a complex acoustic signal that shares many temporal, spectral, and grammatical similarities with human speech,” Daly wrote in the paper. “Thus, a neural decoding model that is able to reconstruct heard music from brain activity can form a reasonable step towards other forms of neural decoding models that have applications for aiding communication.”

Image Credit: Alina Grubnyak on Unsplash 

Could Brain-Computer Interfaces Lead to ‘Mind Control for Good’?

Of all the advanced technologies currently under development, one of the most fascinating and frightening is brain-computer interfaces. They’re fascinating because we still have so much to learn about the human brain, yet scientists are already able to tap into certain parts of it. And they’re frightening because of the sinister possibilities that come with being able to influence, read, or hijack people’s thoughts.

But the worst-case scenarios that have been played out in science fiction are just one side of the coin, and brain-computer interfaces could also be a tremendous boon to humanity—if we create, manage, and regulate them correctly. In a panel discussion at South by Southwest this week, four experts in the neuroscience and computing field discussed how to do this.

Panelists included Ben Hersh, a staff interaction designer at Google; Anna Wexler, an assistant professor of medical ethics and health policy at the University of Pennsylvania; Afshin Mehin, the founder of Card79, a creative studio that helps companies give form to the future; and Jacob Robinson, an associate professor in electrical and computer engineering at Rice University and co-founder of Motif Neurotech, a company creating minimally invasive electronic therapies for mental health.

“This is a field that has a lot of potential for good, and there’s a lot that we don’t know yet,” Hersh said. “It’s also an area that has a lot of expectations that we’ve absorbed from science fiction.” In his opinion, “mind control for good” is not only a possibility, it’s an imperative.

The Mysterious Brain

Of all the organs in our bodies, the brain is by far the most complex—and the one we know the least about. “Two people can perceive the same stimuli and have a very different subjective experience, and there are no real rules to help us understand what translates your experience of the world into your subjective reality,” Robinson said.

But, he added, if we zoom in on the fundamental aspect of what’s happening in our brains, it is governed by physical processes. Could it be possible to control aspects of the brain and our subjective experiences with the level of precision we have in fields like physics and engineering?

“Part of why we’ve struggled with treating mental health conditions is that we don’t have a fundamental understanding of what leads to these disorders,” Robinson said. “But we know that they are network-level problems…we’re beginning to interface with the networks that are underlying these types of conditions, and help to restore them.”

BCIs Today

Elon Musk’s Neuralink has brought BCIs into the public eye more than they’ve ever been before, but there’s been a consumer neurotechnology market since the mid-2000s. Electroencephalography (EEG) uses electrodes placed on the head to record basic measures of brain wave activity. Consumer brain stimulation devices are marketed for cognitive enhancement, such as improving focus, memory, or attention.

More advanced neural interfaces are being used as assistive technology for people with conditions like ALS or paralysis, helping them communicate or move in ways they otherwise wouldn’t be able to: translating thoughts into text, movements, speech, or written sentences. One brain implant succeeded in alleviating treatment-resistant depression via small, targeted doses of electrical stimulation.

“Some of the things that are coming up are actually kind of extraordinary,” Hersh said. “People are working on therapies where electronics are implanted in the brain and can help deal with illnesses beyond the reach of modern medicine.”

Dystopian Possibilities

This sounds pretty great, so what could go wrong? Well, unfortunately, lots. The idea of someone tapping into your brain and being able to control it is terrifying, and we’re not just talking dramatic scenarios like The Matrix; what if you had a brain implant for a medical purpose, but someone was able to subtly influence your choices around products or services you purchase? What if a record of your emotional state was released to someone you didn’t want to have it, or your private thoughts were made public? (I know what you’re thinking: ‘Wait—isn’t that what Twitter’s for?’)

Even tools with a positive intent could have unwanted impacts. Mehin’s company created a series of video vignettes imagining what BCI tech could do in day-to-day life. “The scenarios we imagined were spread between horrifying—imagine having an AI chatbot living inside your head—to actually useful, like being able to share how you’re feeling with a friend so they can help you sort through a difficult time.”

He shared that upon showing the videos at a design conference where there were students in the audience, a teacher spoke up and said, “This is horrible, kids will never be able to communicate with each other.” But then a student got up and said “We already can’t communicate with each other, this would actually be really useful.”

Would you want to live in a world where we need brain implants to communicate our emotions to one another? Where you wouldn’t sit and have coffee with a friend to talk about your career stress or marital strife, you’d just let them tap straight into your thoughts?

No thanks.

BCI Utopia

A brain-computer interface utopia sounds like an oxymoron; the real utopia would be one where we’re healthy, productive, and happy without the need for invasive technology tapping into the networks that dictate our every thought, feeling, and action.

But reality is that the state of mental health in the US is far from ideal. Millions of people suffer from conditions like PTSD, ADHD, anxiety, and depression, and pharmaceuticals haven’t been able to come up with a great cure for any of these. Pills like Adderall, Xanax, or Prozac come with unwanted side effects, and for some people they don’t work at all.

“One in ten people in the US suffer from a mental health disorder that’s not effectively treated by their drugs,” said Robinson. “Our hope is that BCIs could offer a 20-minute outpatient procedure that would provide therapeutic benefit for conditions like treatment-resistant depression, PTSD, or ADHD, and could last the rest of your life.”

He envisions a future where everyone has the ability to communicate rapidly and seamlessly, regardless of any disability, and where BCIs actually let us get back some of the humanity that has been stolen by social media and smartphones. “Maybe BCIs could help us rebalance the neural circuits we need to have control over our focus and our mood,” he said. “We would feel better, do better, and everyone could communicate.”

In the near term, the technology will continue to advance most in medical applications. Robinson believes we should keep moving BCIs forward despite the risks, because they can help people.

“There’s a risk that people see that vision of the dystopian future and decide to stop building these things because something bad could happen,” he said. “My hope is that we don’t do that. We should figure out how to go forward responsibly, because there’s a moral obligation to the people who need these things.”

Image Credit: Gerd Altmann from Pixabay

AI-Powered Brain Implant Smashes Speed Record for Turning Thoughts Into Text

We speak at a rate of roughly 160 words every minute. That speed is incredibly difficult to achieve for speech brain implants.

Decades in the making, speech implants use tiny electrode arrays inserted into the brain to measure neural activity, with the goal of transforming thoughts into text or sound. They’re invaluable for people who lose their ability to speak due to paralysis, disease, or other injuries. But they’re also incredibly slow, slashing word count per minute nearly ten-fold. Like a slow-loading web page or audio file, the delay can get frustrating for everyday conversations.

A team led by Drs. Krishna Shenoy and Jaimie Henderson at Stanford University is closing that speed gap.

Published on the preprint server bioRxiv, their study helped a 67-year-old woman restore her ability to communicate with the outside world using brain implants at a record-breaking speed. Known as “T12,” the woman gradually lost her speech to amyotrophic lateral sclerosis (ALS), or Lou Gehrig’s disease, which progressively robs the brain of its ability to control muscles in the body. T12 could still vocalize sounds when trying to speak—but the words came out unintelligible.

With her implant, T12’s attempts at speech are now decoded in real time as text on a screen and spoken aloud with a computerized voice, including phrases like “it’s just tough,” or “I enjoy them coming.” The words came fast and furious at 62 per minute, over three times the speed of previous records.

It’s not just a need for speed. The study also tapped into the largest vocabulary library yet used for implant-based speech decoding—roughly 125,000 words—in a first demonstration at that scale.

To be clear, although it was a “big breakthrough” and reached “impressive new performance benchmarks” according to experts, the study hasn’t yet been peer-reviewed and the results are limited to the one participant.

That said, the underlying technology isn’t limited to ALS. The boost in speech recognition stems from a marriage between RNNs—recurrent neural networks, a machine learning algorithm previously effective at decoding neural signals—and language models. When further tested, the setup could pave the way to enable people with severe paralysis, stroke, or locked-in syndrome to casually chat with their loved ones using just their thoughts.

We’re beginning to “approach the speed of natural conversation,” the authors said.

Loss for Words

The team is no stranger to giving people back their powers of speech.

As part of BrainGate, a pioneering global collaboration for restoring communications using brain implants, the team envisioned—and then realized—the ability to restore communications using neural signals from the brain.

In 2021, they engineered a brain-computer interface (BCI) that helped a person with spinal cord injury and paralysis type with his mind. With a 96-channel microelectrode array inserted into the motor areas of the patient’s brain, the team was able to decode brain signals for different letters as he imagined the motions for writing each character, achieving a sort of “mindtexting” with over 94 percent accuracy.

The problem? The speed was roughly 90 characters per minute at most. While a large improvement from previous setups, it was still painfully slow for daily use.

So why not tap directly into the speech centers of the brain?

Regardless of language, decoding speech is a nightmare. Small and often subconscious movements of the tongue and surrounding muscles can trigger vastly different clusters of sounds—also known as phonemes. Trying to link the brain activity of every single twitch of a facial muscle or flicker of the tongue to a sound is a herculean task.

Hacking Speech

The new study, a part of the BrainGate2 Neural Interface System trial, used a clever workaround.

The team first placed four strategically located electrode microarrays into the outer layer of T12’s brain. Two were inserted into areas that control movements around the mouth’s surrounding facial muscles. The other two tapped straight into the brain’s “language center,” which is called Broca’s area.

In theory, the placement was a genius two-in-one: it captured both what the person wanted to say, and the actual execution of speech through muscle movements.

But it was also a risky proposition: we don’t yet know whether speech is limited to just a small brain area that controls muscles around the mouth and face, or if language is encoded at a more global scale inside the brain.

Enter RNNs. A type of deep learning, the algorithm has previously translated neural signals from the motor areas of the brain into text. In a first test, the team found that it easily separated different types of facial movements for speech—say, furrowing the brows, puckering the lips, or flicking the tongue—based on neural signals alone with over 92 percent accuracy.

The RNN was then taught to suggest phonemes in real time—for example, “huh,” “ah,” and “tze.” Phonemes help distinguish one word from another; in essence, they’re the basic elements of speech.

The training took work: every day, T12 attempted to speak between 260 and 480 sentences at her own pace to teach the algorithm the particular neural activity underlying her speech patterns. Overall, the RNN was trained on nearly 11,000 sentences.

With a decoder for her mind in hand, the team linked the RNN interface with two language models. One had an especially large vocabulary of 125,000 words. The other was a smaller, 50-word library used for simple everyday sentences.
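
Conceptually, combining the phoneme decoder with a language model looks something like the toy example below. The phoneme inventory, lexicon, and scores are all invented for illustration, and the alignment is far cruder than real sequence decoding.

```python
import numpy as np

PHONEMES = ["h", "ah", "t", "z"]                       # toy inventory
LEXICON = {"hot": ["h", "ah", "t"], "huh": ["h", "ah"], "at": ["ah", "t"]}
LM_LOGP = {"hot": -1.2, "huh": -2.0, "at": -1.5}       # language model priors

def score_word(word, frame_logps):
    """Sum the RNN's per-frame log-probabilities along the word's phoneme
    spelling (a crude stand-in for proper alignment methods such as CTC)."""
    idx = [PHONEMES.index(p) for p in LEXICON[word]]
    acoustic = sum(frame_logps[t, i] for t, i in enumerate(idx))
    return acoustic + LM_LOGP[word]

frame_logps = np.log(np.array([   # softmax output of the phoneme RNN, per frame
    [0.7, 0.1, 0.1, 0.1],         # frame 0: probably "h"
    [0.1, 0.7, 0.1, 0.1],         # frame 1: probably "ah"
    [0.1, 0.1, 0.7, 0.1],         # frame 2: probably "t"
]))
best = max(LEXICON, key=lambda w: score_word(w, frame_logps))
print(best)   # "hot": good acoustic fit plus a favorable language model prior
```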

After five days of attempted speaking, both language models could decode T12’s words. The system made errors: around 10 percent for the small library and nearly 24 percent for the larger one. Yet when T12 was asked to repeat sentence prompts shown on a screen, the system readily translated her neural activity into sentences three times faster than previous models.

The implant worked regardless of whether she attempted to speak aloud or just mouthed the sentences silently (she preferred the latter, as it required less energy).

Analyzing T12’s neural signals, the team found that certain regions of the brain retained neural signaling patterns to encode for vowels and other phonemes. In other words, even after years of speech paralysis, the brain still maintains a “detailed articulatory code”—that is, a dictionary of phonemes embedded inside neural signals—that can be decoded using brain implants.

Speak Your Mind

The study builds upon many others that use a brain implant to restore speech, often decades after severe injuries or slowly-spreading paralysis from neurodegenerative disorders. The hardware is well known: the Blackrock microelectrode array, consisting of 64 channels to listen in on the brain’s electrical signals.

What’s different is how it operates; that is, how the software transforms noisy neural chatter into cohesive meanings or intentions. Previous models mostly relied on decoding data directly obtained from neural recordings from the brain.

Here, the team tapped into a new resource: language models, or AI algorithms similar to the autocomplete function now widely available for Gmail or texting. The technological tag-team is especially promising with the rise of GPT-3 and other emerging large language models. Excellent at generating speech patterns from simple prompts, the tech—when combined with the patient’s own neural signals—could potentially “autocomplete” their thoughts without the need for hours of training.

The prospect, while alluring, comes with a side of caution. GPT-3 and similar AI models can generate convincing speech on their own based on previous training data. For a person with paralysis who’s unable to speak, we would need guardrails as the AI generates what the person is trying to say.

The authors agree that, for now, their work is a proof of concept. While promising, it’s “not yet a complete, clinically viable system,” for decoding speech. For one, they said, we need to train the decoder with less time and make it more flexible, letting it adapt to ever-changing brain activity. For another, the error rate of roughly 24 percent is far too high for everyday use—although increasing the number of implant channels could boost accuracy.

But for now, it moves us closer to the ultimate goal of “restoring rapid communications to people with paralysis who can no longer speak,” the authors said.

Image Credit: Miguel Á. Padriñán from Pixabay

800,000 Neurons in a Dish Learned to Play Pong in Just Five Minutes

Scientists just taught hundreds of thousands of neurons in a dish to play Pong. Using a series of strategically timed and placed electrical zaps, the neurons not only learned the game in a virtual environment, but played better over time—with longer rallies and fewer misses—showing a level of adaptation previously thought impossible.

Why? Picture literally taking a chunk of brain tissue, digesting it down to individual neurons and other brain cells, dumping them (gently) onto a plate, and now being able to teach them, outside a living host, to respond and adapt to a new task using electrical zaps alone.

It’s not just fun and games. The biological neural network joins its artificial cousin, DeepMind’s deep learning algorithms, in a growing pantheon of attempts at deconstructing, reconstructing, and one day mastering a sort of general “intelligence” based on the human brain.

The brainchild of Australian company Cortical Labs, the entire setup, dubbed DishBrain, is the “first real-time synthetic biological intelligence platform,” according to the authors of a paper published this month in Neuron. The setup, smaller than a dessert plate, is extremely sleek. It hooks up isolated neurons with chips that can both record the cells’ electrical activity and trigger precise zaps to alter those activities. Similar to brain-machine interfaces, the chips are controlled with sophisticated computer programs, without any human input.

The chips act as a bridge for neurons to link to a virtual world. As a translator for neural activity, they can unite biological electrical data with silicon bits, allowing neurons to respond to a digital game world.

DishBrain is set up to expand to further games and tests. Because the neurons can sense and adapt to the environment and output their results to a computer, they could be used as part of drug screening tests. They could also help neuroscientists better decipher how the brain organizes its activity and learns, and inspire new machine learning methods.

But the ultimate goal, explained Dr. Brett Kagan, chief scientific officer at Cortical Labs, is to help harness the inherent intelligence of living neurons for their superior computing power and low energy consumption. In other words, compared to neuromorphic hardware that mimics neural computation, why not just use the real thing?

“Theoretically, generalized SBI [synthetic biological intelligence] may arrive before artificial general intelligence (AGI) due to the inherent efficiency and evolutionary advantage of biological systems,” the authors wrote in their paper.

Meet DishBrain

The DishBrain project started with a simple idea: neurons are incredibly intelligent and adaptable computing machines. Recent studies suggest that each neuron is a supercomputer in itself, with branches once thought passive acting as independent mini-computers. Like people within a community, neurons also have an inherent ability to hook up into diverse neural networks, which dynamically shift with their environment.

This level of parallel, low-energy computation has long been the inspiration for neuromorphic chips and machine learning algorithms to mimic the natural abilities of the brain. While both have made strides, none have been able to recreate the complexity of a biological neural network.

“From worms to flies to humans, neurons are the starting block for generalized intelligence. So the question was, can we interact with neurons in a way to harness that inherent intelligence?” said Kagan.

Enter DishBrain. Despite its name, the setup isn’t a conscious brain in a dish—it’s a plate of neurons and other brain cells. As for “intelligence,” the authors define it as the ability to gather information, collate the data, and adjust firing activity—that is, how neurons process the data—in a way that helps adapt toward a goal; for example, rapidly learning to grab the handle of a piping hot pan without searing your hand on the rim.

The setup starts, true to its name, with a dish. The bottom of each one is covered with a computer chip, HD-MEA, that can record from stimulated electrical signals. Cells, either isolated from the cortex of mouse embryos or derived from human cells, are then laid on top. The dish is bathed in a nutritious fluid for the neurons to grow and thrive. As they mature, they grow from jiggly blobs into spindly shapes with vast networks of sinuous, interweaving branches.

Within two weeks, the neurons from mice self-organized into networks inside their tiny homes, bursting with spontaneous activity. Neurons from human origins—skin cells or other brain cells—took a bit longer, establishing networks in roughly a month or two.

Then came the training. Each chip was controlled by commercially available software, linking it to a computer interface. Using the system to stimulate neurons is similar to providing sensory data—like those coming from your eyes as you focus on a moving ball. Recording the neurons’ activity is the outcome—that is, how they would react to (if inside a body) you moving your hand to hit the ball. DishBrain was designed so that the two parts integrated in real time: similar to humans playing Pong, the neurons could in theory learn from past misses and adapt their behavior to hit the virtual “ball.”

Ready Player DishBrain

Here’s how Pong goes. A ball bounces rapidly across the screen, and the player can slide a tiny vertical paddle—which looks like a bold line—up and down. Here, the “ball” is represented by electrical zaps based on its location on the screen. This essentially translates visual information into electrical data for the biological neural network to process.

The authors then defined distinct regions of the chip for “sensation” and “movements.” One region, for example, captures incoming data from the virtual ball movement. A part of the “motor region” then controls the virtual paddle to move up, whereas another causes it to move down. These assignments were arbitrary, the authors explained, meaning that the neurons within needed to adjust their firings to excel at a match.

So how do they learn? If the neurons “hit” the ball—that is, showing the corresponding type of electrical activity—the team then zapped them at that location with the same frequency each time. It’s a bit like establishing a “habit” for the neurons. If they missed the ball, then they were zapped with electrical noise that disrupted the neural network.

The strategy is based on a learning theory called the free energy principle, explained Kagan. Basically, it supposes that neurons hold “beliefs” about their surroundings, and adjust and repeat their electrical activity so they can better predict the environment, either changing their “beliefs” or their behavior.
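
As a thought experiment, the closed loop can be sketched in a few lines of Python. The function names and game logic below are hypothetical stand-ins rather than Cortical Labs’ actual software; the point is the structure: sensory stimulation in, motor activity out, predictable feedback for hits and noise for misses.

```python
import random

def closed_loop(read_motor_rates, stimulate, steps=1000):
    """Sketch of the DishBrain feedback loop (hypothetical function names).

    read_motor_rates() -> (up_rate, down_rate) from the "motor" electrode regions.
    stimulate(region, value) delivers patterned zaps to the chip.
    """
    ball_y, paddle_y = 0.5, 0.5
    for _ in range(steps):
        ball_y = min(max(ball_y + random.uniform(-0.1, 0.1), 0.0), 1.0)
        stimulate(region="sensory", value=ball_y)   # encode ball position as zaps
        up, down = read_motor_rates()               # motor-region firing moves the paddle
        paddle_y = min(max(paddle_y + 0.05 * (up - down), 0.0), 1.0)
        if abs(paddle_y - ball_y) < 0.1:
            stimulate(region="feedback", value="predictable")  # hit: consistent stimulus
        else:
            stimulate(region="feedback", value="noise")        # miss: disruptive noise
```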

The theory panned out. In just five minutes, both human and mouse neurons rapidly improved their gameplay, including better rallies, fewer aces—where the paddle failed to touch the ball even once—and longer games with more than three consecutive hits. Surprisingly, mouse neurons learned faster, though eventually they were outperformed by human ones.

The stimulations were critical for learning: in separate control experiments, DishBrain setups that received no electrical feedback performed far worse.

Game On

The study is a proof of concept that neurons in a dish can form a sophisticated learning machine and even exhibit signs of sentience and intelligence, said Kagan. That’s not to say they’re conscious—rather, they have the ability to adapt to a goal when “embodied” in a virtual environment.

Cortical Labs isn’t the first to test the boundaries of the data processing power of isolated neurons. Back in 2008, Dr. Steve Potter at the Georgia Institute of Technology and team found that with even just a few dozen electrodes, they could stimulate rat neurons to exhibit signs of learning in a dish.

DishBrain has a leg up with thousands of electrodes compacted in each setup, and the company hopes to tap into its biological power to aid drug development. The system, or its future derivations, could potentially act as a micro-brain surrogate for testing neurological drugs, or gaining insights into the neurocomputation powers of different species or brain regions.

But the long-term vision is a “living” bio-silicon computer hybrid. “Integrating neurons into digital systems may enable performance infeasible with silicon alone,” the authors wrote. Kagan imagines developing “biological processing units” that weave together the best of both worlds for more efficient computation—and in the process, shed a light on the inner workings of our own minds.

“This is the start of a new frontier in understanding intelligence,” said Kagan. “It touches on the fundamental aspects of not only what it means to be human, but what it means to be alive and intelligent at all, to process information and be sentient in an ever-changing, dynamic world.”

Image Credit: Cortical Labs
