Neuroscience Archives - Singularity Hub
https://singularityhub.com/tag/neuroscience/

Neuralink Rival’s Biohybrid Implant Connects to the Brain With Living Neurons
https://singularityhub.com/2024/12/19/neuralink-rival-says-its-biohybrid-implant-connects-to-the-brain-with-living-neurons/
Thu, 19 Dec 2024

Brain implants have improved dramatically in recent years, but they’re still invasive and unreliable. A new kind of brain-machine interface using living neurons to form connections could be the future.

While companies like Neuralink have recently provided some flashy demos of what could be achieved by hooking brains up to computers, the technology still has serious limitations preventing wider use.

Non-invasive approaches like electroencephalograms (EEGs) provide only coarse readings of neural signals, limiting their functionality. Directly implanting electrodes in the brain can provide a much clearer connection, but such risky medical procedures are hard to justify for all but the most serious conditions.

California-based startup Science Corporation thinks that an implant using living neurons to connect to the brain could better balance safety and precision. In recent non-peer-reviewed research posted on bioRxiv, the group showed a prototype device could connect with the brains of mice and even let them detect simple light signals.

“The principal advantages of a biohybrid implant are that it can dramatically change the scaling laws of how many neurons you can interface with versus how much damage you do to the brain,” Alan Mardinly, director of biology at Science Corporation, told New Scientist.

Company CEO Max Hodak is a former president of Neuralink. Science Corporation also produces a retinal implant, built with more conventional electronics, that can restore vision in some patients. But the company has been experimenting with so-called “biohybrid” approaches, which Hodak thinks could provide a more viable long-term solution for brain-machine interfaces.

“Placing anything into the brain inevitably destroys some amount of brain tissue,” he wrote in a recent blog post. “Destroying 10,000 cells to record from 1,000 might be perfectly justified if you have a serious injury and those thousand neurons create a lot of value—but it really hurts as a scaling characteristic.”

Instead, the company has developed a honeycomb-like structure made of silicon featuring more than 100,000 “microwells”—cylindrical holes roughly 15 micrometers deep. Individual neurons are inserted into each of these microwells, and the array can then be surgically implanted onto the surface of the brain.

The idea is that while the neurons remain housed in the implant, their axons—long strands that carry nerve signals away from the cell body—and their dendrites—the branched structures that form synapses with other cells—will be free to integrate with the host’s brain cells.

To see if the idea works in practice, they implanted the device in mice, using neurons genetically modified to react to light. Three weeks after implantation, they carried out a series of experiments in which they trained the mice to respond whenever a light was shone on the device. The mice were able to detect when this happened, suggesting the light-sensitive neurons had merged with their native brain cells.

While it’s early days, the approach has significant benefits. A millimeter-scale chip can hold far more neurons than electrodes, and each of those neurons can form many connections. That means the potential bandwidth of a biohybrid device could be much higher than that of a conventional neural implant. The approach is also much less damaging to the patient’s brain.
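As a rough back-of-envelope illustration of that scaling argument (only the 100,000-well figure comes from the article; the electrode and synapse counts below are illustrative assumptions, not reported specifications):

```python
# Hypothetical back-of-envelope comparison of interface "bandwidth."
# Only the ~100,000 microwell figure comes from the article; the other
# numbers are illustrative assumptions, not reported specifications.

microwells = 100_000             # wells in the implant (from the article)
synapses_per_neuron = 1_000      # assumed connections formed per implanted neuron
electrodes_conventional = 1_000  # assumed channel count for a typical electrode array

biohybrid_contacts = microwells * synapses_per_neuron
print(f"Biohybrid contact points (assumed): {biohybrid_contacts:,}")
print(f"Conventional electrode channels (assumed): {electrodes_conventional:,}")
print(f"Ratio: {biohybrid_contacts / electrodes_conventional:,.0f}x")
```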

However, the lifetime of these kinds of devices could be a concern—after 21 days, only 50 percent of the neurons had survived. And the company needs to find a way to ensure the neurons don’t elicit a negative immune response in the patient.

If the approach works though, it could be an elegant and potentially safer way to merge man and machine.

Image Credit: Science Corporation

The Secret to Predicting How Your Brain Will Age May Be in Your Blood
https://singularityhub.com/2024/12/13/the-secret-to-predicting-how-your-brain-is-aging-may-be-in-your-blood/
Fri, 13 Dec 2024

Brain aging occurs in distinctive phases. Its trajectory could be hidden in our blood—paving the way for early diagnosis and intervention.

A new study published in Nature Aging analyzed brain imaging data from nearly 11,000 healthy adults, middle-aged and older, using AI to gauge their “brain age.” Roughly half of participants had their blood proteins analyzed to fish out those related to aging.

Scientists have long looked for the markers of brain aging in blood proteins, but this study had a unique twist. Rather than mapping protein profiles to a person’s chronological age—the number of years on your birthday card—they used biological brain age, which better reflects the actual working state of the brain as the clock ticks on.

Thirteen proteins popped up—eight associated with faster brain aging and five that slowed down the clock. Most alter the brain’s ability to handle inflammation or are involved in cells’ ability to form connections.

From these, three unique “signatures” emerged at 57, 70, and 78 years of age. Each showed a combination of proteins in the blood marking a distinct phase of brain aging. Those related to neuron metabolism peaked early, while others spurring inflammation were more dominant in the twilight years.

These spikes signal a change in the way the brain functions with age. They may be points of intervention, wrote the authors. Rather than relying on brain scans, which aren’t readily available to many people, the study suggests that a blood test for these proteins could one day be an easy way to track brain health as we age.

The protein markers could also help us learn to prevent age-related brain disorders, such as dementia, Alzheimer’s disease, stroke, or problems with movement. Early diagnosis is key. Although the protein “hallmarks” don’t test for the disorders directly, they offer insight into the brain’s biological age, which often—but not always—correlates with signs of aging.

The study helps bridge gaps in our understanding of how brains age, the team wrote.

Treasure Trove

Many people know folks who are far sharper than expected at their age. A dear relative of mine, now in their mid-80s, eagerly adopted ChatGPT, AI-assisted hearing aids, and “Ok Google.” Their eyes light up anytime they get to try a new technology. Meanwhile, I watched another relative—roughly the same age—rapidly lose their wit, sharp memory, and eventually, the ability to realize they were no longer logical.

My experiences are hardly unique. With the world rapidly aging, many of us will bear witness to, and experience, the brain aging process. Projections suggest that by 2050, over 1.5 billion people will be 65 or older, with many potentially experiencing age-related memory or cognitive problems.

But chronological age doesn’t reflect the brain’s actual functions. For years, scientists studying longevity have focused on “biological age” to gauge bodily functions, rather than the year on your birth certificate. This has led to the development of multiple aging clocks, with each measuring a slightly different aspect of cell aging. Hundreds of these clocks are now being tested, as clinical trials use them to gauge the efficacy of potential anti-aging treatments.

Many of the clocks were built by taking tiny samples from the body and analyzing certain gene expression patterns linked to the aging process. It’s tough to do that with the brain. Instead, scientists have largely relied on brain scans, showing structure and connectivity across regions, to build “brain clocks.” These networks gradually erode as we age.

The studies calculate the “brain age gap”—the difference between the brain’s predicted biological age and your chronological age. A ten-year gap, for example, means your brain’s networks are more similar to those of people a decade younger, or older, than you.

Most studies have had a small number of participants. The new study tapped into the UK Biobank, a comprehensive dataset of roughly half a million people with regular checkups—including brain scans and blood draws—offering up a deluge of data for analysis.

The Brain Age Gap

Using machine learning, the study first sorted through brain scans of almost 11,000 people aged 45 to 82 to calculate their biological brain age. The AI model was trained on hundreds of structural features of the brain, such as overall size, thickness of the cortex—the outermost region—and the amount and integrity of white matter.

They then calculated the brain age gap for each person. On average, the gap was roughly three years, swinging both ways, meaning some people had either a slightly “younger” or “older” brain.
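A minimal sketch of how a brain age gap like this can be computed, assuming a generic regression model and synthetic structural features rather than the study’s actual pipeline:

```python
# Minimal sketch: predict "brain age" from structural MRI features, then
# compute the brain age gap. Features and model choice are illustrative
# assumptions, not the study's actual pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_subjects = 500
# Toy structural features: e.g., total volume, cortical thickness, white matter integrity.
X = rng.normal(size=(n_subjects, 20))
chronological_age = rng.uniform(45, 82, size=n_subjects)
# Pretend brain structure drifts with age so the model has something to learn.
X[:, 0] -= 0.05 * (chronological_age - 60)

model = GradientBoostingRegressor()
predicted_brain_age = cross_val_predict(model, X, chronological_age, cv=5)

brain_age_gap = predicted_brain_age - chronological_age  # positive = "older"-looking brain
print(f"Mean absolute gap: {np.abs(brain_age_gap).mean():.1f} years")
```

A positive gap means the model judges the brain to look older than its owner's birth certificate; a negative gap, younger.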

Next, the team tried to predict the brain age gap by measuring proteins in plasma, the liquid part of blood. Longevity research in mice has uncovered many plasma proteins that age or rejuvenate the brain.

After screening nearly 3,000 plasma proteins from 4,696 people, they matched each person’s protein profile to the participant’s brain age. They found 13 proteins associated with the brain age gap, with most involved in inflammation, movement, and cognition.

Two proteins particularly stood out.

One, called brevican (BCAN), helps maintain the brain’s wiring and overall structure and supports learning and memory. The protein dwindles in Alzheimer’s disease. Higher levels, in contrast, were associated with slower brain aging and lower risk of dementia and stroke.

The other protein, growth differentiation factor 15 (GDF15), is released by the body when it senses damage. Higher levels correlated with a higher risk of age-related brain disease, likely because it sparks chronic inflammation—a “hallmark” of aging.

There was also a surprising result.

Plasma protein levels didn’t change linearly with age. Instead, changes peaked at three chronological ages—57, 70, and 78—with each stage marking a distinctive phase of brain aging.

At 57, for example, proteins related to brain metabolism and wound healing changed markedly, suggesting early molecular signs of brain aging. By 70, proteins that support the brain’s ability to rewire itself—some strongly associated with dementia and stroke—changed rapidly. Another peak, at 78, showed protein changes mostly related to inflammation and immunity.

“Our findings thus emphasize the importance and necessity of intervention and prevention at brain age 70 years to reduce the risk of multiple brain disorders,” wrote the authors.

To be clear: These are early results. The participants are largely of European ancestry, and the results may not translate to other populations. The 13 proteins also need further testing in animals before any can be validated as biomarkers. But the study paves the way.

Their results, the authors conclude, suggest the possibility of earlier, simpler diagnosis of age-related brain disorders and the development of personalized therapies to treat them.

Why Are Our Brains So Big? Because They Excel at Damage Control
https://singularityhub.com/2024/11/26/why-are-our-brains-so-big-because-they-excel-at-damage-control/
Tue, 26 Nov 2024

Compared to other primates, our brains are exceptionally large. Why?

A new study comparing neurons from different primates pinpointed several genetic changes unique to humans that buffer our brains’ ability to handle everyday wear and tear. Dubbed “evolved neuroprotection,” the findings paint a picture of how our large brains gained their size, wiring patterns, and computational efficiency.

It’s not just about looking into the past. The results could also inspire new ideas for tackling disorders tied to the gradual erosion of one type of brain cell, including schizophrenia, Parkinson’s disease, and addiction. Understanding this wiring may also spur artificial brains that learn like ours.

The results haven’t yet been reviewed by other scientists. But to Andre Sousa at the University of Wisconsin-Madison, who wasn’t involved in the work, the findings can help us understand “human brain evolution and all the potentially negative and positive things that come with it.”

Bigger Brain, Bigger Price

Six million years ago, we split from a common ancestor with our closest evolutionary relative, the chimpanzee.

Our brains rapidly exploded in size—but crucially, only in certain regions. One of these was at the front of the brain. Called the prefrontal cortex, it’s an “executive control” center that lets us reason, make difficult decisions, and exercise self-control. Another region, the striatum, buried deep in the brain, processes emotions and gives us the ability to easily move with just a thought.

The two regions are in ready communication, and their chatter may give rise to parts of our intellect and social interactions, such as theory of mind—where we can gauge another person’s emotions, beliefs, and intentions. Dopamine neurons, a type of brain cell, bridge this connection.

They may sound familiar. Dopamine, which these neurons pump out, is known as the “feel-good” molecule. But they do so much more. Dopamine neurons are spread across the entire brain and often dial the activity of certain neural networks up or down, including those regulating emotion and movement. Dopamine neurons are like light dimmers—rather than brain networks flipping on or off like a simple switch, the neurons fine-tune the level of action.

These cells “coordinate multiple aspects” of brain function, wrote study author Alex Pollen at the University of California, San Francisco and colleagues.

The puzzle? Compared to our primate relatives, we only have twice the number of dopamine neurons, a measly increase compared to the expansion of brain size. By scanning the brains of humans and macaque monkeys—which are often used in neuroscience research—the team found that our prefrontal cortex is 18 times larger, and the striatum has ballooned roughly 7 times.

In other words, each dopamine neuron must work harder to supply these larger brain regions.
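Some quick arithmetic makes the mismatch concrete. The region ratios below come from the article; the per-neuron load is simply their quotient:

```python
# Rough arithmetic on the scaling mismatch described above.
prefrontal_growth = 18      # human prefrontal cortex vs. macaque (from the article)
striatum_growth = 7         # human striatum vs. macaque (from the article)
dopamine_neuron_growth = 2  # humans have roughly twice the dopamine neurons

# Territory each dopamine neuron must cover, relative to the macaque baseline.
print(f"Prefrontal load per neuron: ~{prefrontal_growth / dopamine_neuron_growth:.0f}x")
print(f"Striatal load per neuron:   ~{striatum_growth / dopamine_neuron_growth:.1f}x")
```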

Though they have long “branches,” neurons aren’t passive wires. To connect and function normally, they require high amounts of energy. Most of this comes from the cells’ energy factories, pea-like structures called mitochondria. Even with this efficient supply, neurons degrade as we age or in cases of neurodegeneration, including Parkinson’s disease.

Dopamine neurons are also especially vulnerable to decay compared to other types of neurons because making dopamine generates toxic byproducts. Called reactive oxygen species, these chemicals are like tiny bullets that destroy the cells’ mitochondria and their outer wrappers.

Dopamine neurons have several natural methods of fighting back. They pump out antioxidants and have evolved ways to buffer toxic molecules. But eventually these defenses break down—especially in a bigger brain. In turn, the connection between the “reasoning” and “emotion” parts of the brain starts to fray.

Accumulating damage to these neural workhorses should be a nonstarter for building larger, more complex brains during evolution. Yet somehow our brains mostly skirted the trauma. The new study asked how.

Evolution in a Dish

The team grew 3D blobs made of stem cells from humans, chimpanzees, orangutans, and macaque monkeys. After a month, the hybrid mini-brains began pumping out dopamine.

It may sound like a strange strategy, but pooling cells from different species establishes a baseline for further genetic analysis. Because they’re all growing in the same environment in a single blob, any differences in a cell’s gene expression are likely due to the species it came from, rather than environmental conditions or other effects, explained the team.

The final pool included cells from eight humans, seven chimpanzees, one orangutan, and three macaque monkeys.

The cells worked well together, developing an overall pattern mimicking dopamine neurons around the striatum—ones that reach out to the frontal parts of the brain. After growing them for up to 100 days, the team captured genes from each cell to gauge which ones were turned on or off. In total, they analyzed over 105,000 cells.

Compared to other species, human stem cells seemed most versatile. They gave birth not just to dopamine neurons, but also other brain cell types. And they had another edge: Compared to chimpanzees, human dopamine neurons dialed up genes to tackle damaging reactive oxygen “bullets.”

Gene expression tests showed that human dopamine cells had far higher levels of several genes that break down the toxic chemicals than the other primates’ cells did—in turn limiting damage to the sensitive neurons.

When challenged with a pesticide that elevates reactive oxygen species, human brain cells fought off the assault with a boost of a nurturing protein called brain-derived neurotrophic factor (BDNF). The molecule has long been a neuroscience darling for its ability to spur the birth and growth of new neurons and rewire old ones. Scientists have suggested BDNF may help ketamine reverse depressive symptoms by reshaping the brain’s networks.

In contrast, chimpanzee neurons from the same mini-brains couldn’t boost the protective protein when doused with the pesticide.

Keep on Fighting

The team analyzed the hybrid mini-brains at a very early stage of their development, when there was no chance of them developing any sort of sentience.

Their goal was to understand how our brains—especially dopamine neurons—have become resilient against damage and can tolerate the energy costs that come with a larger brain.

But the results could also suggest ways to boost cellular defense systems in people with dopamine-related disorders. Mutations in protective genes found in the study, for example, may increase disease vulnerability in some people. Testing them in animal models paves the way for more targeted therapies against these disorders.

Knowing how dopamine works in the brain at a molecular level across species provides a snapshot of what sets us apart from our evolutionary cousins. This “can advance our understanding of the origins of human-enriched disorders and identify new therapeutic targets and strategies for drug development,” wrote the team.

Image Credit: Marek Pavlík on Unsplash

Groundbreaking Brain Map Reveals Fruit Fly Brain in Stunning Detail
https://singularityhub.com/2024/10/03/groundbreaking-brain-map-reveals-fruit-fly-brain-in-stunning-detail/
Thu, 03 Oct 2024

With a brain the size of a sesame seed, the lowly fruit fly is often considered a kitchen pest. But to neuroscientists, the flies are a treasure trove of information detailing how the brain’s intricate connections guide thoughts, decisions, and memories—not just for the critters, but also for us.

Mapping these connections is the first step. With over 140,000 neurons and 54 million synapses—the connections between nerve cells—packed into such a tiny space, the fruit fly’s brain, however rudimentary compared to ours, is highly complex.

This week, in a tour de force, hundreds of scientists from the FlyWire consortium published the first complete map of an adult female fruit fly’s brain. A project roughly a decade in the making, the wiring diagram will be a rich scientific resource for years to come. The same techniques used to make the map—which heavily relied on artificial intelligence—could be used to chart more complex brains, such as zebrafish, mice, and perhaps even humans.

“Flies are important model systems…since their brains solve the same problems as we do,” said Mala Murthy at Princeton University in a press conference. Murthy co-led the project with Sebastian Seung, who has long championed mapping as a way to better understand the inner workings of our brains and potentially extract algorithms to power more flexible AI.

In one of nine articles on the project published by Nature, Clay Reid at the Allen Institute for Brain Science, who was not involved in the project, called the release a “huge deal.”

“It’s something that the world has been anxiously waiting for, for a long time,” he said.

The study’s data and images are freely available for anyone to explore. To Murthy, the project exemplifies the power of open science. The consortium welcomed help from both neuroscientists and citizen scientists, who don’t have formal training but are passionate about the brain.

This “openness drove the science forward,” resulting in the “first time we’ve had a complete map of any complex brain,” said Murthy.

A Brain Atlas

Why do we think, feel, remember, and forget? How do we make decisions, rethink biases, and empathize with others? Even simpler, what neural signals make my fingers type these words?

It’s all about wiring. Neurons connect with each other at specific points called synapses. These connections form the basis of circuits that control behaviors. Like tracing electrical wiring in a house, mapping the brain’s cables can help decipher which neural circuit controls what behaviors. Together, the entire brain wiring diagram is called the connectome.
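In data terms, a connectome is a directed, weighted graph: neurons are nodes, and synapse counts are edge weights. A minimal sketch, with made-up cell names and numbers:

```python
# A connectome as a directed, weighted graph: neurons are nodes, synapses are
# edges weighted by synapse count. Names and numbers here are made up.
import networkx as nx

connectome = nx.DiGraph()
connectome.add_edge("photoreceptor_R1", "lamina_L1", synapses=42)
connectome.add_edge("lamina_L1", "medulla_Mi1", synapses=17)
connectome.add_edge("medulla_Mi1", "motor_neuron_MN5", synapses=5)

# Trace which downstream neurons a cell can reach -- the "circuit" it drives.
downstream = nx.descendants(connectome, "photoreceptor_R1")
print(f"Cells downstream of photoreceptor_R1: {sorted(downstream)}")
```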

Previously, scientists had only fully mapped the connectome of a tiny worm with just over 300 neurons. Even so, the feat launched a revolution in neuroscience by highlighting the role of neural circuits, rather than individual cells, in steering behavior.

The fruit fly brain is bigger and far more complex. It’s densely packed with hundreds of thousands of neurons, each intricately connected to others. A single faulty reconstruction could derail our understanding of the brain’s original instructions: Rather than sending a signal down one neural highway, it could be interpreted as taking another road that leads nowhere.

The project began over a decade ago, when Davi Bock and colleagues at the Janelia Research Campus imaged the entire fly brain at nanoscale resolution. They “fossilized” the brain of a female fly using a chemical soup, froze it to preserve its delicate connections, and sliced it into wafers.

Using a high-resolution microscope, the scientists took images of every slice. Overall, the project produced roughly 21 million images from over 7,000 brain slices.

This wealth of data was a triumph, but also a problem. Usually, each image had to be manually examined for potential connections—an obvious headache when analyzing millions of images.

Here’s where AI comes in. Seung has long championed using AI to untangle neural wiring from individual images and 3D recreations. With AI becoming increasingly sophisticated, it’s easier for different models to learn how to identify a synapse or the branches of a neuron.

But initial AI systems were imperfect. Overlapping neural wires from two circuits could be interpreted as one: Imagine a satellite view of a tricky highway interchange that confuses your phone’s GPS system. A giant tangle of neural connections from multiple sources could be labeled as a single source, rather than a hub directing the flow of information.

Scientists in the consortium spent years manually proofreading AI-generated results. But they had help. Seung and colleagues elicited crowd input. His earlier project, Eyewire, gamified the brain-mapping process by asking citizen scientists to detect neural connections critical for vision.

FlyWire built a similar online platform in 2022, allowing hundreds of people interested in the brain, but with no formal training, to proofread AI reconstructions and classify neurons based on their shape.

Done alone, the project would have taken a single person an estimated 33 years. By sharing data and recruiting citizen scientists, the team constructed the entire connectome in a fraction of the time. According to study author Gregory Jefferis at the University of Cambridge, the volunteers and scientists made more than three million edits to the AI’s initial results. They also annotated the maps—for example, labeling different cells—providing context for the viewer.

Throughout the process, the consortium released versions of its data so researchers could tap into the expanding dataset. Even without the entire map, scientists have already begun exploring ideas about how the fly’s brain works.

Brain Cartographer

The final map captures over 54 million synapses between roughly 140,000 neurons. It also includes over 8,000 different types of neuron—far more than anyone expected. Incredibly, nearly half were newly discovered for the species.

To Seung, each new cell type poses “a question” about how it influences brain functions.

The fly’s brain was also interconnected to a surprising degree. Neurons that allowed the fly to see also received sound and touch cues, suggesting these senses are wired together.

The connectome data is already spurring new studies and ideas. One team made a digitized fly brain from all the mapped neurons and connections. They then activated artificial neurons that can detect honey or bitter flavors. The virtual brain responded by sticking out the fly’s “tongue” when it detected sweet flavors.
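A toy version of that kind of simulation, which assumes nothing about the team’s actual model (the neuron names, weights, and firing threshold are all illustrative), propagates activity through the wiring like this:

```python
# Toy connectome simulation: inject activity into sensory neurons and let it
# spread through weighted connections. Thresholds, weights, and neuron names
# are illustrative assumptions, not the published model.
import numpy as np

neurons = ["sugar_sensor", "interneuron_A", "interneuron_B", "proboscis_motor"]
idx = {name: i for i, name in enumerate(neurons)}

# weights[i, j] = synaptic strength from neuron i to neuron j (made-up values)
weights = np.zeros((4, 4))
weights[idx["sugar_sensor"], idx["interneuron_A"]] = 0.9
weights[idx["interneuron_A"], idx["interneuron_B"]] = 0.8
weights[idx["interneuron_B"], idx["proboscis_motor"]] = 0.7

activity = np.zeros(4)
activity[idx["sugar_sensor"]] = 1.0  # the virtual fly "tastes" sugar

for step in range(3):  # propagate activity a few synaptic hops
    activity = np.maximum(activity, weights.T @ activity)

if activity[idx["proboscis_motor"]] > 0.5:
    print("Virtual fly extends its proboscis (sticks out its 'tongue').")
```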

“For decades, we haven’t known what the taste neurons in the brain are,” study author Anita Devineni at Emory University told Science. “And then, all of a sudden in a small amount of time … you can figure it out.”

Other studies using the map found neural circuits for walking, grooming, and feeding—all of which are essential to the fly’s (and our) everyday routine.

The connectome does have some limitations though. It’s based on a single female fruit fly. Brains are highly individualized in their connections, especially across sexes and ages. The decade-long effort is just a snapshot of one brain at one moment in time.

However, the map could still help researchers discover fundamental ways the brain works—for example, how wiring between certain brain regions allows them to “talk” more efficiently.

The team is already looking to expand the work to a mouse brain with roughly 500 times more neurons than the fly. Similar efforts have already charted synapses in the mouse brain, but the new study’s technology could yield comprehensive maps of neural connections across the entire brain.

“This achievement is not just remarkable, it’s outstanding,” Moritz Helmstaedter at the Max Planck Institute for Brain Research, who was not involved in the project, told Science. “In the next decade, we’ll see tremendous progress, and possibly the first full whole mammalian brain connectome.”

Image Credit: Amy Sterling, Murthy and Seung labs, Princeton University, (Baker et al., Current Biology, 2022)

A New Brain Mapping Study Reveals Depression’s Signature in the Brain
https://singularityhub.com/2024/09/06/a-brain-signature-of-depression-revealed-by-new-brain-mapping-study/
Fri, 06 Sep 2024

Depression doesn’t mean you’re always feeling low. Sure, most times it’s hard to crawl out of bed or get motivated. Once in a while, however, you feel a spark of your old self—only to get sucked back into an emotional black hole.

There’s a reason for this variability. Depression changes brain connections, even when the person is feeling okay at the moment. Scientists have long tried to map these alternate networks. But traditional brain mapping technologies average multiple brains, which doesn’t capture individual brain changes.

This week, an international team took a peek into the depressed mind. Using a brain imaging technique called precision functional mapping, they scanned the brains of 135 people with depression, following some of them for over a year and a half.

In the largest brain mapping study of the disorder to date, the results revealed a curious change in the brain’s connections in people with depression: a neural network usually involved in attention was nearly twice as large as in those without the condition. The increase remained even during periods when the person no longer felt low.

The brain signature isn’t just a neurobiological sign of depression—it could also be a predictor. In brain imaging data from nearly 12,000 children tracked from the age of nine, the expanded network predicted the onset of depression later in adolescence.

So far, brain imaging studies for depression have been “one size fits all,” in that studies compare averaged brain scans between people with or without depression, explained the team in their study.

With precision functional mapping, it’s possible to track individual brain trajectories as they change over time. In turn, this could lead to more nuanced insights into neural connections in depression and inspire more sophisticated and personalized brain implants to tackle the disorder.

To Dr. Caterina Gratton at the University of Illinois Urbana-Champaign, who was not involved in the work, the precise details from the brain scans are impressive. “Rather than reading a few pages of many books, we’re reading whole chapters,” she told Nature.

The Old Playbook

Scientists have long tried to decipher the brain networks underlying depression.

There have been successes. At the turn of this century, neurologist Dr. Helen Mayberg and colleagues spearheaded brain mapping studies that compared brains with the disorder and those without. They eventually pinpointed a region at the front of the brain that hyperactivates in people with severe depression.

Given deep brain stimulation in the region—a technique where implanted electrodes zap dysfunctional circuits with brief pulses of electricity—some patients rapidly improved. Since then, neuroscientists have identified multiple brain networks involved in the disorder. However, larger trials of deep brain stimulation yielded mixed results.

Some patients didn’t respond to the treatment. But others experienced life-altering changes. In 2021, a woman named Sarah received a personalized brain implant. She had battled severe depression for years and had tried a range of medications. None of them worked. But the implant did. Fine-tuned to her brain’s unique electrical signals, for the first time in her life, Sarah had her depression under control. “I’m finally laughing,” she said at the time.

Sarah’s case highlighted two points for tackling the brain networks involved in depression. One, the disorder affects each brain differently. And two, depression is chronic, with ebbs and flows in mood. Imaging brain connection changes at just one point in time isn’t enough—what’s needed is to follow the brain’s functional changes over time.

Precision Mapping

There are many ways to track brain function, but a popular one is functional magnetic resonance imaging (fMRI). The technology tracks changes in blood flow in the brain—a proxy for activity—and builds a map of how different regions and brain connections “talk” to each other.

But our brains are snowflakes. Although brain networks look similar on average, each person slightly deviates. Precision mapping captures these individual differences, with previous studies showing that the size, shape, and location of neural networks can markedly differ, but are generally stable for each person. In other words, we all have a unique “brainprint.”

However, depression changes these dynamics as the disorder progresses. A single fMRI brain scan—a snapshot—can’t capture the brain’s trajectory over time.

The team tackled these problems head-on. In a first small study, they repeatedly imaged the brains of six people with depression—ranging from mild to severe—across 22 sessions. Precision mapping was also used for 37 people without the disorder.

By looking at brain activation patterns, “it was immediately apparent” that a brain network changed in people with depression, even without averaging the results. Dubbed the “salience network,” it relies on multiple brain regions to help us navigate the world with purpose. The network combines outside stimulation with an internal goal—say, make a cup of coffee for an early morning jolt. As a central networking hub in the brain, it lets us decide what to pay attention to.

In people with depression, the network was roughly twice as large as in controls—that is, more parts of the brain activated to support it.

Six people hardly represent the entire spectrum of depression. To validate their findings, the team turned to three existing datasets from Weill Cornell Medicine and Stanford University. Totaling 135 people with depression, the datasets captured detailed brain images and demographic and clinical information. Almost every person with depression showed a larger salience network. They also saw a similar brain pattern in an additional dataset of nearly 300 people with depression, who didn’t respond to antidepressant drugs.

Symptoms of depression ebb and flow. Does network expansion follow the pattern? In another test, they used precision fMRI to follow people with depression roughly every week for up to a year and a half. With each scan, the participants also reported their mood based on a standardized depression scale.

Regardless of current emotions, the salience network remained roughly the same size for each person. However, the strength of connections between the network’s components changed—decreasing when the person was actively depressed. Using AI to analyze these patterns, the team was able to predict—for any of the participants—if they might experience a depressive episode the following week.
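A minimal sketch of that kind of week-ahead prediction, using a generic classifier and synthetic connectivity features rather than the study’s actual model:

```python
# Minimal sketch: classify whether the next week holds a depressive episode
# from salience-network connectivity features. Data and model are synthetic
# illustrations, not the study's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_scans = 300
# Toy features: strength of connections between salience-network nodes.
connectivity = rng.normal(size=(n_scans, 10))
# Pretend weaker connectivity precedes a depressive episode.
episode_next_week = (connectivity.mean(axis=1)
                     + rng.normal(scale=0.5, size=n_scans) < 0).astype(int)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, connectivity, episode_next_week, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```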

The results suggest that an expanded salience network, and its inner connections, isn’t just a marker for depression after symptoms have already set in. It could also be a predictor. To test the idea, they tapped into the Adolescent Brain Cognitive Development (ABCD) dataset, the largest long-term study of brain development and health for children in the US. The ambitious project tracks nearly 12,000 children across the country, from nine years old to young adulthood.

By analyzing salience network expansion, the team identified 57 kids, aged 10 to 12 years old, who eventually developed depression a few years later. Their salience networks were already far larger than those of similarly aged peers at their initial visit. If replicated in more children, the measure could help predict depression risk and allow early intervention.

For now, scientists don’t know why or how the network expands. It could be partly due to genetics, which plays a role in depression. Another possibility is that the brain dials up the network during depressive episodes, recruiting more brain cells and driving the network’s growth. But the study shows the power of precision brain mapping over time.

The results “will open new avenues for understanding cause and effect” when it comes to brain changes and depression, wrote the team, “and for designing personalized, prophylactic treatments.”

Image Credit: Andrew Neel / Unsplash

Some Brains Develop Alzheimer’s—Others Don’t. A New Cell Map Could Explain Why.
https://singularityhub.com/2024/08/30/some-brains-develop-alzheimers-others-dont-a-new-cell-map-could-explain-why/
Fri, 30 Aug 2024

Alzheimer’s disease slowly takes over the mind. Long before symptoms occur, brain cells are gradually losing their function. Eventually they wither away, eroding brain networks that store memories. With time, this robs people of their recollections, reasoning, and identity.

It’s not the type of forgetfulness that happens during normal aging. In the twilight years, our ability to soak up new learning and rapidly recall memories also nosedives. While the symptoms seem similar, normally aging brains don’t exhibit the classic signs of Alzheimer’s—toxic protein buildups inside and surrounding neurons, eventually contributing to their deaths.

These differences can only be caught by autopsies, when it’s already too late to intervene. But they can still offer insights. Studies have built a profile of Alzheimer’s brains: Shrunken in size, with toxic protein clumps spread across regions involved in reasoning, learning, and memory.

However, those results only capture the very end of the journey.

This week, an international team led by Columbia University, MIT, and Harvard sought to map the entire process. Analyzing 437 donated brains from aging people—some with Alzheimer’s, others not—they peeked into the gene expression of 1.65 million brain cells in the regions most affected by Alzheimer’s and built a comprehensive cell atlas for aging brains.

A machine learning algorithm next teased apart the trajectories that differentiate Alzheimer’s from a normally aging brain. The team found a slew of genetic changes in multiple cell types that differed between the two. Some cell types controlled immunity; others supported metabolism.

“Our study highlights that Alzheimer’s is a disease of many cells and their interactions, not just a single type of dysfunctional cell,” said study author Dr. Philip De Jager in a press release.

With these results, “we provide a cellular foundation for a new perspective” on how Alzheimer’s develops, which could inform personalized treatments by targeting different brain cell communities, the authors wrote in the study.

“We may need to modify cellular communities to preserve cognitive function,” said De Jager.

The Brainy Bunch

Our brains are a bit like a suburban community. Multiple types of neighboring cells help each other out.

Neurons are the best known. These spark with electricity and form the networks underlying our emotions, thoughts, and memories. But they don’t act alone.

Astrocytes—named for their star-like shape (pictured above)—nurture neurons with supportive molecules, especially when they need a metabolic boost. Meanwhile, microglia—the neighborhood watch committee—keep watch for signs of danger. A type of immune cell, these rapidly destroy bacteria, viruses, and other intruders. They’re also like “gardeners” for neurons, snipping away some connections to optimize neural networks as we learn.

In Alzheimer’s disease, this neighborliness breaks down. Microglia go rogue and increase inflammation. Astrocytes lose their function. Neurons wilt and die. The downward spiral happens over years, if not decades. By the time symptoms are obvious, it’s too late.

With over 400 brain samples, the new study aimed to find new treatments by charting the molecular journey of these brain cells.

Scientists have previously analyzed donated brains from people with and without Alzheimer’s. But they focused mostly on overall structure or zoomed in on molecular details. They didn’t chart the long journeys of individual cells that, together, lead to Alzheimer’s.

“Past studies have analyzed brain samples as a whole, and they lose all cellular detail,” said De Jager. “We now have tools to look at the brain in finer resolution, at the level of individual cells.”

De Jager’s team aimed to find changes in multiple types of brain cells involved in the disease. They also used autopsies to reconstruct a chain of cause and effect: that is, finding the genes that translate brain cell changes into cognitive decline, and eventually, Alzheimer’s.

Brain Bank

The study tapped into a long-running source for data. The Religious Orders Study and the Rush Memory and Aging Project (ROSMAP), which began in the 1990s, enrolled people 65 years of age and older and captured their health and mental status each year using standardized tests for up to two decades. The project also welcomed brain donations, yielding a valuable biobank.

Here, the team analyzed brain tissues from over 400 people—some with Alzheimer’s, others not. They used a popular method to gauge how individual cells work called single cell RNA sequencing. The technology has taken biology research by storm with its ability to map gene expression—that is, which genes are turned on—in individual cells.

It’s especially useful when studying the brain. Our noggins are incredibly complex, with many different cell types working together. The technology offers a way to peek into the genetic workings of each type and decipher how they all fit together in a functional “neighborhood.”

By looking at individual neurons and cognition test results from the donors, “we can reconstruct trajectories of brain aging from the earliest stages of the disease,” said De Jager.

The brain samples spanned brain aging and Alzheimer’s—roughly 60 percent showed signs of the disease—and the team captured the genetic readouts of 1.6 million brain cells of all types.

Microglia, the brain’s immune cells, were sorted into 16 distinct populations based on their sequencing results, with some previously linked to Alzheimer’s in a mouse model. Astrocytes, the brain’s supportive cells, also showed 10 distinct gene expression types.
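A minimal sketch of how cells are grouped into populations from their expression profiles, assuming a generic normalize-reduce-cluster pipeline rather than the study’s actual methods:

```python
# Minimal sketch: cluster single cells into populations by gene expression.
# The matrix, gene count, and cluster number are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_cells, n_genes = 5_000, 2_000
expression = rng.poisson(lam=1.0, size=(n_cells, n_genes)).astype(float)

# Standard steps: normalize per cell, log-transform, reduce, then cluster.
expression /= expression.sum(axis=1, keepdims=True)
expression = np.log1p(expression * 10_000)

embedding = PCA(n_components=30).fit_transform(expression)
labels = KMeans(n_clusters=16, n_init=10).fit_predict(embedding)  # e.g., 16 microglial states

print(f"Cells per cluster: {np.bincount(labels)}")
```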

The team also documented different neurons, blood vessel cells that feed the brain, and other supporting cells that help maintain the brain’s overall structure.

Algorithm to Alzheimer’s

To make sense of the data, the team developed an algorithm to link different subpopulations of cells to the disease. They focused on three main problems related to Alzheimer’s. The first two are the presence of toxic protein clumps inside and outside of neurons. The third is the rate of cognitive decline before death.

With a custom-designed algorithm called BEYOND, the team sifted through the database and found two trajectories for aging brains. One aged normally, while the other showed signs of Alzheimer’s, with increased toxic protein buildup and cognitive decline. No single brain cell type, by itself, was the villain—rather, the whole community spiraled out of control.

During the disease’s early stages, a subset of microglia ramped up. These cells increased inflammation and accumulated toxic proteins.

“We propose that two different types of microglial cells—the immune cells of the brain—begin the process of amyloid and tau accumulation that define Alzheimer’s disease,” said De Jager.

The cells then triggered an Alzheimer’s cascade. A subset of astrocytes—the brain’s supporting cells—were the first victims, frantically trying to increase the activity of protective genes. Based on the analysis, astrocytes may be key to differentiating Alzheimer’s from normal aging.

The algorithm predicted these types of cells may be a “point of convergence” for processes that lead to dementia, as opposed to normal brain aging. Knowing how individual cells contribute to Alzheimer’s—and their journey into the disease—makes it possible to target specific cellular communities with new therapies to tackle both problems.

“These are exciting new insights that can guide innovative therapeutic development for Alzheimer’s and brain aging,” said De Jager.

Image Credit: Kevin Richetin / University of Lausanne via Flickr

Newly Discovered Brain Wave Helps Lock in Memories While We Sleep
https://singularityhub.com/2024/08/19/newly-discovered-brain-wave-helps-lock-in-memories-while-we-sleep/
Mon, 19 Aug 2024

Sleep works magic on memory.

You might’ve felt these frustrations before: Trying to learn a guitar riff, shoot a free-throw, or nail a difficult phrase in a new language, but despite hours of practice, you’re just not getting it right. Then with a good night’s sleep—voilà, somehow, you’ve nailed the skills.

Neuroscientists have long known that brain waves during sleep etch learnings from the previous day into neural circuits for long-term storage. As we drift off, our brains remain hard at work. One region, a seahorse-shaped structure called the hippocampus, especially sparks with activity. This area is essential for translating what we learn into long-term memories during sleep.

Disruptions to electrical activity in the hippocampus can lead to memory problems in multiple neurological disorders, including schizophrenia and Alzheimer’s disease. But one question has always troubled neuroscientists.

Brain cells, or neurons, need to stay in a “Goldilocks zone” of activity to encode and store memories. Learning new things spikes activity in a specific set of neurons. But when they further increase their activity during sleep—like a car with a gas pedal and no brake—what’s to prevent them from hyperactivating and, in turn, destroying the brain’s ability to learn?

A new study from Cornell University suggests a way the brain balances itself during sleep. In recordings from multiple areas of the hippocampus in mice and rats, the team discovered a previously undetected brain wave that keeps brain cells in check. Dubbed BARR (for barrage of action potentials), these brain waves reset neurons so they can encode new experiences the next day, while enhancing memories during sleep.

“Sleep is not just a time for the body to rest but also for the mind to solidify memories,” wrote Drs. Xiang Mou and Daoyun Ji at Baylor College of Medicine in Houston, Texas, who were not involved in the study.

The results help explain why sleep promotes memory, and how its disruption may contribute to memory problems in schizophrenia, Alzheimer’s disease, and other neurological conditions.

“This mechanism could allow the brain to reuse the same resources, the same neurons, for new learning the next day,” said study author Dr. Azahara Oliva at Cornell University in a press release.

Under the Sea

As we drift into unconsciousness each night, the hippocampus is hard at work. Shaped like a seahorse, this brain region has long been known as a hub for memory.

Patients with damage to the hippocampus lose their ability to create new memories. And decades of research shows the area processes the day’s learnings for long-term storage in other parts of the brain—and holds the key to retrieving those memories when needed.

But the region is hardly a one-trick pony. Imagine it as a town with multiple neighborhoods and highways connecting it to other brain regions. Each neighborhood plays a slightly different role. Some encode new memories, which are then shuffled to the cortex—the outer part of the brain—for longer storage and retrieval. Others link specific memories to joy, sadness, and other feelings through wiring connected to regions of the brain associated with emotion.

Scientists have already mapped out these neighborhoods. CA1, sitting at the front, extensively connects to other parts of the brain involved in reasoning and memory. CA3 encodes memories and potentially helps separate similar ones—for example, did I get that cup of coffee yesterday at that café, or was that a memory from a few days ago?

But the role of the middle child, CA2, has always been mysterious.

Sing Me to Sleep

Every night we cycle through several stages of sleep. One stage, called non-rapid eye movement, occurs when we drift off to sleep and eventually transition from light sleep into deeper slumber.

This is when CA1 perks up. Neurons encoding memories from the day reactivate—kind of like replaying a memory on video, but at a faster rate.

These patterns, called sharp-wave ripples, help etch memories into the brain. Like waves on the ocean, they “splash” across other brain regions in electrical ebbs and flows that reconfigure neural connections. These waves help the hippocampus send learning to other regions where it can be stored in memory. But without a way to dampen the waves, neurons hyperactivate, meaning they can no longer learn or store new information.

To study how sleep changes the brain, the team implanted electrodes into multiple parts of the hippocampus in mice and rats to monitor their brain activity.

The rodents then learned several tasks, for example, figuring out if an object had been removed. A bit like finding your favorite couch wasn’t where you expected it to be, this requires memory of its location. Other tests challenged the critters to navigate a maze and have social interactions—that is, remembering whether they’d met a previous acquaintance.

As the mice fell asleep, their brain activity showed signs of sharp-wave ripples. But surprisingly, CA2, the middle child, also sparked, with long-lasting bursts of activity spreading through the hippocampus. These never-before-seen BARR brain waves flared up in the neurons that encode learning—cells that usually have higher levels of activity—tamping them down during sleep.

In a way, as we sleep, our brain is in a kind of civil war. Neurons encoding memories reactivate to consolidate learning, while BARR brain waves keep them at bay so that they don’t overactivate.

A Brainy Scale

The team focused on a type of brain cell that triggers BARR brain waves during sleep.

Using optogenetics—a way to turn neurons on or off using light—in rodents, they artificially disrupted BARR activity as the critters slept after learning several memory tasks. As a result, sharp-wave ripples, the type of brain activity usually associated with solidifying memory, lasted far longer.

Surprisingly, it made memory worse. On the surface, it doesn’t make sense: Wouldn’t more activity during sleep be better for memory? Not so much, explained the team. It’s all about balance.

“BARRs serve as a passive brake” that lowers heightened neural activity during sleep, wrote Mou and Ji. The brain resets its balance after a day of hard work. Disrupting BARR during sleep impaired the animals’ memory, likely because their neural networks were functioning abnormally.

It’s not to say BARR is behind Alzheimer’s, schizophrenia, or other neurological disorders. Many questions remain. The team hasn’t yet determined where the brain waves start in the brain. How they counteract memory-making sharp-wave ripples during sleep also remains a mystery.

But by tinkering with these mechanisms, scientists can begin to battle memory disorders. They may also explore ways to rewrite traumatic memories during sleep and help with depression, post-traumatic stress disorder, and other conditions. Future studies could reveal more insights into how sleep controls memory, and why it breaks down in a variety of brain disorders.

Image Credit: Matteo Catanese / Unsplash

A New Study Says AI Models Encode Language Like the Human Brain Does
https://singularityhub.com/2024/08/07/a-new-study-says-ai-models-encode-language-like-the-human-brain-does/
Wed, 07 Aug 2024

Language enables people to transmit thoughts to each other because each person’s brain responds similarly to the meaning of words. In newly published research, my colleagues and I developed a framework to model the brain activity of speakers as they engaged in face-to-face conversations.

We recorded the electrical activity of two people’s brains as they engaged in unscripted conversations. Previous research has shown that when two people converse, their brain activity becomes coupled, or aligned, and that the degree of neural coupling is associated with better understanding of the speaker’s message.

A neural code refers to particular patterns of brain activity associated with distinct words in their contexts. We found that the speakers’ brains are aligned on a shared neural code. Importantly, the brain’s neural code resembled the artificial neural code of large language models.

The Neural Patterns of Words

A large language model is a machine learning program that can generate text by predicting what words most likely follow others. Large language models excel at learning the structure of language, generating humanlike text, and holding conversations. They can even pass the Turing test, making it difficult for someone to discern whether they are interacting with a machine or a human. Like humans, large language models learn how to speak by reading or listening to text produced by other humans.

By giving the large language model a transcript of the conversation, we were able to extract its “neural activations,” or how it translates words into numbers, as it “reads” the script. Then, we correlated the speaker’s brain activity with both the large language model’s activations and with the listener’s brain activity. We found that the large language model’s activations could predict the speaker and listener’s shared brain activity.
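A simplified sketch of that correlation analysis, assuming word-level model activations and neural recordings are already aligned in time; the arrays below are random placeholders, not real data:

```python
# Simplified sketch: map word-by-word LLM activations onto brain activity with
# a linear encoding model, then score the correlation on held-out words.
# Embeddings and recordings here are random placeholders, not real data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_words, embed_dim, n_electrodes = 1_000, 768, 64

llm_activations = rng.normal(size=(n_words, embed_dim))    # one vector per word
brain_activity = rng.normal(size=(n_words, n_electrodes))  # neural signal per word

X_train, X_test, y_train, y_test = train_test_split(
    llm_activations, brain_activity, test_size=0.2, random_state=0)

encoder = Ridge(alpha=10.0).fit(X_train, y_train)
predicted = encoder.predict(X_test)

# Correlate predicted and recorded activity per electrode.
corr = [np.corrcoef(predicted[:, e], y_test[:, e])[0, 1] for e in range(n_electrodes)]
print(f"Mean encoding correlation across electrodes: {np.mean(corr):.2f}")
```

With real recordings, a reliably positive held-out correlation is the kind of evidence the authors describe: the model’s word representations predict shared brain activity.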

To understand each other, people have a shared agreement on the grammatical rules and the meaning of words in context. For instance, we know to use the past tense form of a verb to talk about past actions, as in the sentence: “He visited the museum yesterday.” Additionally, we intuitively understand that the same word can have different meanings in different situations. For instance, the word “cold” in the sentence “you are cold as ice” can refer either to one’s body temperature or to a personality trait, depending on the context. Due to the complexity and richness of natural language, until the recent success of large language models, we lacked a precise mathematical model to describe it.

Our study found that large language models can predict how linguistic information is encoded in the human brain, providing a new tool to interpret human brain activity. The similarity between the human brain’s and the large language model’s linguistic code has enabled us, for the first time, to track how information in the speaker’s brain is encoded into words and transferred, word by word, to the listener’s brain during face-to-face conversations. For example, we found that brain activity associated with the meaning of a word emerges in the speaker’s brain before articulating a word, and the same activity rapidly reemerges in the listener’s brain after hearing the word.

Powerful New Tool

Our study has provided insights into the neural code for language processing in the human brain and how both humans and machines can use this code to communicate. We found that large language models were better able to predict shared brain activity compared with different features of language, such as syntax, or the order in which words connect to form phrases and sentences. This is partly due to the large language models’ ability to incorporate the contextual meaning of words, as well as integrate multiple levels of the linguistic hierarchy into one model: from words to sentences to conceptual meaning. This suggests important similarities between the brain and artificial neural networks.

An important aspect of our research is using everyday recordings of natural conversations to ensure that our findings capture the brain’s processing in real life. This is called ecological validity. In contrast to experiments in which participants are told what to say, we relinquish control of the study and let the participants converse as naturally as possible. This loss of control makes it difficult to analyze the data because each conversation is unique and involves two interacting individuals who are spontaneously speaking. Our ability to model neural activity as people engage in everyday conversations attests to the power of large language models.

Other Dimensions

Now that we’ve developed a framework to assess the shared neural code between brains during everyday conversations, we’re interested in what factors drive or inhibit this coupling. For example, does linguistic coupling increase if a listener better understands the speaker’s intent? Or perhaps complex language, like jargon, may reduce neural coupling.

Another factor that can influence linguistic coupling may be the relationship between the speakers. For example, you may be able to convey a lot of information with a few words to a good friend but not to a stranger. Or you may be better neurally coupled to political allies rather than rivals. This is because differences in the way we use words across groups may make it easier to align and be coupled with people within rather than outside our social groups.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Mohamed Hassan / Pixabay

How a Mind-Controlling Parasite Could Deliver Medicine to the Brain
https://singularityhub.com/2024/07/29/how-a-mind-controlling-parasite-could-deliver-medicine-to-the-brain/
Mon, 29 Jul 2024

The brain is like a medieval castle perched on a cliff, protected on all sides by high walls, making it nearly impenetrable.

Its shield is the blood-brain barrier, a layer of tightly connected cells that only allows an extremely selective group of molecules to pass. The barrier keeps delicate brain cells safely away from harmful substances, but it also blocks therapeutic proteins, such as those that grab onto and neutralize toxic clumps in Alzheimer’s disease.

One way to smuggle proteins across? A cat parasite.

A new study in Nature Microbiology tapped into the strange world of mind-bending parasites, specifically, Toxoplasma gondii. Perhaps best known for its ability to rid infected mice of their fear of cats, the parasite naturally travels from the gut to the brain—including ours—and releases proteins that tweak behavior.

The international team hijacked T. gondii’s natural, brain-targeting impulses to engineer two delivery systems, one for a single-shot therapeutic boost and another that lasts longer.

The unconventional shuttle worked on brain cells in petri dishes and brain organoids. Often called “mini-brains,” these pea-sized blobs roughly capture the cell types and structure of a growing fetal human brain. However, they don’t usually produce a blood-brain barrier.

To show the shuttle could gain access to the brain, the team engineered a T. gondii shuttle with a therapeutic protein for Rett syndrome, a genetic disorder that leads to autism-like symptoms.

After one shot into the belly, the shuttle released the therapeutic proteins widely into the brains of lab mice within a few weeks. The proteins mostly accumulated in parts of the brain critical for perception, reasoning, and memory.

“For medicine, efficient and safe delivery of proteins could unlock a broad category of protein-based therapies,” wrote the authors.

U-Haul to the Brain

Getting protein-based drugs into the brain is a pain. Unlike gene therapy concoctions, proteins are extremely sensitive to heat and acid. They can’t be swallowed as a pill—the gut’s acid destroys them. Even injections straight into the bloodstream are problematic: immune cells, for example, may wipe out the proteins before they have a chance to reach the brain.

Thankfully, nature is a source of inspiration. All brain-targeting carriers need to bypass two “checkpoints”: the first is the blood-brain barrier; the second, the neuron’s membrane.

A popular approach uses a bio-engineered virus carrying the genetic instructions to make a protein once inside the neurons. Often employed in gene therapy, scientists make the virus relatively safe by stripping away its infectious tendencies. But like a small U-Haul van, it only has room for the genetic instructions of smaller proteins.

Another surprising carrier traces its roots to HIV. Scientists studying the virus found a small protein chunk that allows it to penetrate the blood-brain barrier and get past neuron membranes. By engineering these chunks—which aren’t infectious—into shuttles, scientists can then tag protein cargos onto them. One example (by yours truly) could tunnel into the brain after an injection into the bloodstream and protect rats’ brains from damage after a stroke.

These shuttles too are limited by size: They can only drag along very small protein snippets. Antibodies and other larger proteins are beyond reach.

T. gondii, in contrast, has a much larger capacity.

A Synthetic Fleet

A cat parasite hardly sounds like medicine. But it’s a worthy candidate.

Normally, T. gondii produces egg-like “offspring” in the guts of cats, which are then strewn into the wild when the cats poop. The parasite waits for a potential host—say, a mouse sniffing for crumbs or a human changing the litter box—infects it, and ultimately spreads to the brain. Once inside, T. gondii lingers in neurons rather than other brain cells.

It sounds terrifying, but for people with a healthy immune system, the parasite usually doesn’t cause harm. “In fact, it is estimated that a third of the world population is chronically infected with the parasite,” wrote the lab of Dr. Oded Rechavi, which led the study, in a blog post.

To transform T. gondii into a delivery tool, the team focused on two of the parasite’s secretion systems, which let it pump proteins into target cells. These are “remarkable innate abilities,” wrote the team.

They first built protein links between the two systems and their potential cargo: proteins implicated in Parkinson’s disease, gene-editing proteins, and MECP2, which is linked to Rett syndrome. The team then tethered the cargo proteins to one of the two systems and delivered them into a variety of cells in petri dishes.

Within a day, the delivered proteins were thriving inside the host cells.

In neurons lacking MECP2, a dose of T. gondii carrying a synthetic version of the protein boosted its levels to roughly 58 percent of those in normal cells, similar to previous gene therapy studies of Rett syndrome. The added MECP2 worked like its natural counterpart, turning genes on or off inside neurons as expected.

T. gondii also reliably released its payload into mature brain organoids. The protein altered genetic transcription throughout the mini-brains, changing gene expression as predicted.

The two T. gondii systems had individual strengths. One is a “kiss-and-spit”: Like a fighter jet, T. gondii swoops in on a neuron, releases its protein payload, and leaves. The other takes a longer approach, requiring T. gondii to infiltrate and establish itself inside the cell, like a sleeper agent. Once in, however, the system can deliver its cargo for a longer time and at a higher level.

Cat and Mouse Game

As a final test, the team injected the engineered T. gondii, with an MECP2 payload, into the bellies of mice—like an insulin shot for people with diabetes.

Eighteen days later, the mice’s brains showed signs of cysts—which are harmless for people without immune problems—indicating the parasite was establishing itself inside the brain. Other tissues, including the liver, lung, and spleen, had very little T. gondii roaming around for up to three months after injection. Only the brain had a boost in MECP2.

“Many proteins require controlled targeting” to a specific part of the body, or otherwise they’re “ineffective or even deleterious if delivered elsewhere,” explained the team.

Surveying multiple regions of the brain, T. gondii seemed to prefer settling inside the cortex—the outermost region of the brain involved in perception, reasoning, and making decisions. Its second choice was the “memory center,” the hippocampus. That’s good news: Both regions are favorite targets for tackling neurological disorders. And the treatment didn’t alert the body’s immune system, with the therapeutic proteins easily getting along with the brain’s usual protein brigade.

“T. gondii can be used…[for]…many of the challenges associated with protein delivery,” for both scientific research and therapeutics, wrote the team.

There’s still a long road ahead. Although T. gondii is safe for healthy people, it has been linked to side effects in the brains of the immunocompromised. The next step is to strip away its toxicity, much as is done with the viral carriers now used for gene therapy. If it works, T. gondii is set for a genetic makeover as a safe shuttle to the brain—despite its cat parasite origin story.

Image Credit: T. gondii cyst in mouse brain tissue. Jitinder P. Dubey / Wikimedia Commons

Your Brain on Mushrooms: Study Reveals What Psilocybin Does to the Brain—and for How Long https://singularityhub.com/2024/07/18/your-brain-on-mushrooms-study-reveals-what-psilocybin-does-to-the-brain-and-for-how-long/ Thu, 18 Jul 2024 21:35:34 +0000 https://singularityhub.com/?p=157997 Magic mushrooms have recently had a reputation revamp. Often considered a hippie drug, the mushrooms contain psilocybin, a main active component now being tested in a variety of clinical trials as a therapy for depression, post-traumatic stress disorder, bipolar disorder, and eating disorders.

Psilocybin joins ketamine, LSD (commonly known as acid), and MDMA (often called ecstasy or molly) as part of the psychedelic therapy renaissance. But the field has had some ups and downs.

In 2019, the FDA approved a type of ketamine for severe depression that was resistant to other therapies. Then in early June, an FDA advisory panel voted against MDMA therapy for post-traumatic stress disorder, although it has been approved for limited use in Australia. Meanwhile, healthcare practitioners in Oregon are already using psilocybin, in combination with counseling, to treat depression, although the drug hasn’t yet been federally approved.

Despite its potential, no one knows how psilocybin works in the brain, especially over longer durations.

Now, a team from Washington University School of Medicine has comprehensively documented brain-wide changes before, during, and after a single dose of psilocybin over a period of weeks. As a control, the volunteers also took Ritalin, a stimulant, at a different time to mimic parts of the psilocybin high.

An fMRI scan shows the effects of psilocybin on the brain over time. Yellows, oranges, and reds indicate an increasingly large departure from normal activity. Image Credit: Sara Moser/Washington University

In the study, psilocybin dramatically reset brain networks that hum along during active rest—say, while daydreaming or spacing out. These networks control our sense of self, time, and space. Although most effects were temporary, one connection showed changes for weeks.

In some participants, the alterations were so drastic that their brain connections resembled those of completely different people.

Normally, the brain synchronizes activity across regions. Psilocybin disrupts these connections, in turn making the brain more malleable and ready to form new networks.

This could be how magic mushrooms “contribute to persistent changes…in brain regions that are responsible for controlling a person’s sense of self, emotion, and life-narrative,” wrote Petros Petridis at the NYU Langone Center for Psychedelic Medicine, who was not involved in the study.

Magical Mystery Tour

The brain’s 100 billion neurons and trillions of connections are highly organized into local and brain-wide networks.

Local networks tackle immediate tasks such as processing vision, sound, or motor functions. Brain-wide networks integrate information from local networks to coordinate more complex tasks, such as decision-making, reasoning, or self-reflection.

Previous psilocybin studies mainly focused on local networks. In rodents, for example, the drug regrew neural connections that often wither away in people with severe depression. Scientists have also pinpointed a receptor—which psilocybin grabs onto—that triggers this growth.

But psilocybin’s effects on the whole human brain remained a mystery.

Several years back, one team sought an answer by giving people with severe depression a dose of psilocybin. Using functional MRI (fMRI), a type of imaging that captures brain activity based on changes in blood flow, they found the chemical desynchronized neural networks across the entire brain, essentially “rebooting” them out of a depressive state.

Daydream Believer

The new study used fMRI to track brain activity in seven adults without mental health struggles before, during, and for three weeks after they took psilocybin. The researchers gave participants a single dose on par with that commonly used in clinical trials for depression.

During the scans, the participants had two tasks. One sounds easy: They kept still and focused their gaze on white crosshairs on a computer screen while remaining otherwise relaxed. Even so, tripping on mushrooms inside a noisy, claustrophobic machine is hardly relaxing—heart rate skyrockets, nerves are on high alert, and anxiety rapidly builds. To control for these side effects, the participants also took Ritalin—a stimulant commonly used to manage attention deficit hyperactivity disorder—at another point during the study.

The other task required more brain power. Like an audio version of a CAPTCHA, the researchers asked volunteers to match an image and a word prompt—for example, they’d have to pick a photo of a beach after hearing the word “beach.”

Throughout the study, each person had their brain scanned roughly every other day, for an average of 18 scans in total.

Mapping brain connections over time in the same person can “minimize the effects of individual differences in brain network organization,” wrote Petridis.

The study found psilocybin immediately desynchronized a brain-wide network, generating a brain activation “fingerprint” of sorts that distinguishes the psychedelic state from a sober brain.

Dubbed the default mode network, this neural system is active when the mind is alert but wanders, like when reliving previous memories or imagining future scenarios. The network is distributed across the brain and is often studied for its role in consciousness and a sense of self. The chemical also desynchronized local networks across the cortex, the outermost layer of the brain that supports perception, reasoning, and decision-making.
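One simple way to picture the desynchronization and the connectivity “fingerprint” described above is to compare functional connectivity matrices: correlate every pair of regional time series within a scan, then measure how far a scan’s connectivity pattern drifts from a sober baseline. The sketch below does this with synthetic data; it illustrates the general idea only and is not the study’s exact metric or analysis pipeline.

```python
# Sketch: quantify whole-brain "desynchronization" from fMRI-like time series.
# Synthetic data; illustrative only, not the study's actual analysis.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints, n_regions, n_networks = 300, 100, 5

# Fixed "network structure": each region belongs to one latent network.
loadings = np.zeros((n_regions, n_networks))
loadings[np.arange(n_regions), rng.integers(0, n_networks, n_regions)] = 1.0

def simulate_scan(coupling):
    """One scan: shared network signals scaled by `coupling`, plus regional noise.
    Lower coupling means regions within a network are less synchronized."""
    latent = rng.normal(size=(n_timepoints, n_networks))
    return coupling * latent @ loadings.T + 0.5 * rng.normal(size=(n_timepoints, n_regions))

def connectivity(ts):
    """Pearson correlation between every pair of regional time series."""
    return np.corrcoef(ts.T)

def fc_distance(fc_a, fc_b):
    """1 minus the correlation between the upper triangles of two connectivity
    matrices: near 0 for matching network patterns, larger for disruption."""
    iu = np.triu_indices_from(fc_a, k=1)
    return 1.0 - np.corrcoef(fc_a[iu], fc_b[iu])[0, 1]

fc_sober_1 = connectivity(simulate_scan(coupling=1.0))  # baseline scan
fc_sober_2 = connectivity(simulate_scan(coupling=1.0))  # another sober scan
fc_drug = connectivity(simulate_scan(coupling=0.1))     # desynchronized scan

print("sober vs sober:", round(fc_distance(fc_sober_1, fc_sober_2), 3))  # small
print("sober vs drug: ", round(fc_distance(fc_sober_1, fc_drug), 3))     # much larger
```

In this toy version, repeated sober scans look alike while the weakly coupled scan produces a very different connectivity pattern, which is roughly the signature the researchers used to tell a psychedelic brain apart from a sober one.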

However, the chemical partially lost its magic when the volunteers were focused on the image-audio task, at which point the scans showed less disruption to the default mode network.

This has implications for psilocybin-assisted treatment. Clinical studies have shown that during psychedelic therapy, a challenging experience—a bad trip—can be overcome by a method called “grounding,” which reconnects the person to the outside world.

These results could explain why adding eye masks and ear plugs can enhance the therapeutic experience by blocking outside stimulation, while grounding pulls one out of a bad trip.

Psilocybin’s effects lingered for a few days, after which most brain networks returned to normal—with one exception. A link between the default mode network and a part of the brain involved in creating memories, emotions, and a sense of time, space, and self was disrupted for weeks.

In a way, psilocybin opens a window during which neural connections become more malleable and easier to rewire. People with depression or post-traumatic stress disorder often have rigid, maladaptive thought patterns that are hard to shake. Combined with therapy, psilocybin allows the brain to reorganize those networks, potentially helping people with depression escape negative rumination or helping people struggling with addiction find a new perspective on their relationship to substances.

“In other words, psilocybin could open the door to change, allowing the therapist to lead the patient through,” wrote Petridis.

Although the study offered a higher resolution image of the brain on mushrooms over a longer timeframe than ever before, it only captured scans of seven people. As the participants did not have mental health issues, their responses to psilocybin may differ from those most likely to benefit therapeutically.

Ultimately, larger studies in diverse patient populations—as in several recent MDMA trials—could offer more insights into the efficacy of psilocybin therapy. For example, the one persistent brain network disruption could be an indicator of treatment efficacy. Investigating whether other psychedelics alter the same neural connection is a worthy next step, wrote Petridis.

With the field of psychedelic therapy projected to reach over $10 billion by 2027, understanding how the drug affects the brain could bring new medications with fewer side effects.

Image Credit: Sara Moser/Washington University
