Exosomes Are Being Hyped as a ‘Silver Bullet’ Therapy. Scientists Say No.

When human stem cells were discovered at the turn of the century, it sparked a frenzy. Scientists immediately dreamed of repairing tissues damaged by aging or disease.

A few decades later, their dreams are on the brink of coming true. The US Food and Drug Administration (FDA) has approved blood stem cell transplantation for cancer and other disorders that affect the blood and immune system. More clinical trials are underway, investigating the use of stem cells from the umbilical cord to treat knee osteoarthritis—where the cartilage slowly wears down—and nerve problems from diabetes.

But the promise of stem cells came with a dark side.

Illegal stem cell clinics popped up soon after the cells’ discovery, touting stem cells’ ability to rejuvenate aging skin and joints or even treat severe brain disorders such as Parkinson’s disease. Despite FDA regulation, as of 2021 there were nearly 2,800 unlicensed clinics across the country, each advertising stem cell therapies with little scientific evidence.

“What started as a trickle became a torrent as businesses poured into this space,” wrote an expert team in the journal Cell Stem Cell in 2021.

History is now repeating itself with an up-and-coming “cure-all”: exosomes.

Exosomes are tiny bubbles made by cells to carry proteins and genetic material to other cells. While still early, research into these mysterious bubbles suggests they may be involved in aging or be responsible for cancers spreading across the body.

Multiple clinical trials are underway, ranging from exosome therapies to slow hair loss to treatments for heart attacks, strokes, and bone and cartilage loss. They have potential.

But a growing number of clinics are also advertising exosomes as their next best seller. One forecast analyzing exosomes in the skin care industry predicts a market value of over $674 million by 2030.

The problem? We don’t really know what exosomes are, what they do to the body, or what side effects they might have. In a way, these molecular packages are like Christmas “mystery boxes,” each containing a different mix of biological surprises that could alter cellular functions, like turning genes on or off in unexpected ways.

There have already been reports of serious complications. “There is an urgent need to develop regulations to protect patients from serious risks associated with interventions based on little or no scientific evidence,” a team recently wrote in Stem Cell Reports.

Cellular Space Shuttles

In 1996, Graça Raposo, a molecular scientist in the Netherlands, noticed something strange: The immune cells she was studying seemed to send messages to each other in tiny bubbles. Under the microscope, she saw that when treated with a “toxin” of sorts, the cells slurped up the molecules, planted them on the surfaces of tiny bubbles inside the cell, and released the bubbles into the vast wilderness of the cell’s surroundings.

She collected the bubbles and squirted them onto other immune cells. Surprisingly, they triggered a similar immune response in the cells—as if directly exposed to the toxin. In other words, the bubbles seemed to shuttle information between cells.

Scientists previously thought these bubbles, dubbed exosomes, were the cell’s garbage collectors—gathering waste molecules and spewing them outside the cell. But two years later, Raposo and colleagues found that exosomes harvested from cells that naturally fight off tumors could be used as a therapy to suppress tumors in mice.

Interest in these mysterious blobs exploded.

Scientists soon found that most cells pump out exosome “spaceships,” and they can contain both proteins and types of RNA that turn genes on or off. But despite decades of research, we’re only scratching the surface of what cargo they can carry and their biological function.

It’s still unclear what exosomes do. Some could be messengers of a dying cell, warning neighbors to shore up defenses. They could also be co-opted by tumor cells to bamboozle nearby cells into supporting cancer growth and spread. In Alzheimer’s disease, they could potentially shuttle twisted protein clumps to other cells, spreading the disease across the brain.

They’re tough to study, in part, because they’re so small and unpredictable. About one-hundredth the size of a red blood cell, exosomes are hard to capture even with modern microscopy. Each type of cell seems to have a different release schedule, with some spewing many in one shot and others taking the slow-and-steady route. Until recently, scientists didn’t even agree on how to define exosomes.

Over several years, the International Society for Extracellular Vesicles—the umbrella term for exosomes and related cell-released bubbles—has begun uniting the field with naming conventions and standardized methods for preparing exosomes.

The Wild West

While scientists are rapidly coming together to cautiously make exosome-based treatment a reality, uncertified clinics have popped up across the globe. Their first pitch to the public was tackling Covid. One analysis found 60 clinics in the US advertising exosome-based therapy as a way to prevent or treat the virus—with zero scientific support. Another trending use has been in skin care or hair growth, garnering attention in the US, UK, and Japan.

Exosomes are regulated by the FDA in the US and the European Medicines Agency (EMA) in the EU as biological medicinal products, meaning they require approval from the agencies. That did not stop clinics from marketing them, with tragic consequences. In 2019, patients in Nebraska treated with unapproved exosomes became septic—a life-threatening condition caused by infection across the whole body—leading the FDA to issue a warning.

Clinics that offer unregulated exosomes “deceive patients with unsubstantiated claims about the potential for these products to prevent, treat, or cure various diseases or conditions,” the agency wrote.

Japan is struggling to catch up. Exosomes are not regulated under Japanese law, and nearly 670 clinics have already popped up—a far larger market than in the US or EU. Most services have been marketed for skin care, anti-aging, hair growth, and battling fatigue, wrote the authors. More rarely, some touted their ability to battle cancers.

The rogue clinics have already led to tragedies. In one case, “a well-known private cosmetic surgery clinic administered exosomes…to at least four patients, including relatives of staff members with stage IV lung cancer, and found that the cancer rapidly worsened after administration,” wrote the authors.

Because the clinics operate on the down-low, it’s tough to gauge the extent of harm, including potential deaths.

The worry isn’t that exosomes are harmful by themselves. How they’re obtained plays a huge role in safety. In unregulated settings, there’s a large chance the bubbles are contaminated by endotoxins—which trigger dangerous inflammatory responses—or by bacteria that linger and grow.

For now, “from a very basic point of view, we don’t really know what they’re doing, good or bad… I wouldn’t take them, let’s put it that way,” James Edgar, an exosome researcher from the University of Cambridge, told MIT Technology Review.

Unregulated clinics don’t just harm patients. They could also set a promising field back.

Scientific advances may seem to move at a snail’s pace, but it’s to ensure safety and efficacy despite the glitz and glamor of a potential new panacea. Scientists are still forging ahead using exosomes for multiple health problems—while bearing in mind there’s much we still need to understand about these cellular spaceships.

Image Credit: Steve Johnson on Unsplash

AI That Can Design Life’s Machinery From Scratch Had a Big Year. Here’s What Happens Next.

Proteins are biology’s molecular machines. They’re our bodies’ construction workers—making muscle, bone, and brain; regulators—keeping systems in check; and local internet—responsible for the transmission of information between cells and regions. In a word, proteins are crucial to our survival. When they work, we’re healthy. When they don’t, we aren’t.

Which is why recent leaps in our understanding of protein structure—and the emerging ability to design entirely new proteins from scratch with AI—are such a huge development. It’s why three scientists won the Nobel Prize in Chemistry this year for their work in the field.

Things are by no means standing still. 2024 was another winning year for AI protein design.

Earlier this year, scientists expanded AI’s ability to model how proteins bind to other biomolecules, such as DNA, RNA, and the small molecules that regulate their shape and function. The study broadened the scope of RoseTTAFold, a popular AI tool for protein design, so that it could map out complex protein-based molecular machines at the atomic level—in turn, paving the way for more sophisticated therapies.

DeepMind soon followed with the release of AlphaFold3, an AI model that also predicts protein interactions with other molecules. Now available to researchers, the sophisticated AI tool will likely lead to a flood of innovations, therapeutics, and insights into biological processes.

Meanwhile, protein design went flexible this year. AI models generated “effector” proteins that could shape-shift in the presence of a molecular switch. This flip-flop structure altered their biological impact on cells. A subset of these morphed into a variety of arrangements, including cage-like structures that could encapsulate and deliver medicines like tiny spaceships.

They’re novel, but do any AI-designed proteins actually work? Yes, according to several studies.

One used AI to dream up a universe of potential CRISPR gene editors. Inspired by large language models—like those that gave birth to ChatGPT—the AI model in the study eventually designed a gene editing system as accurate as existing CRISPR-based tools when tested on cells. Another AI designed circle-shaped proteins that reliably turned stem cells into different blood vessel cell types. Other AI-generated proteins directed protein “junk” into the lysosome, a waste treatment blob filled with acid inside cells that keeps them neat and tidy.

Outside of medicine, AI designed mineral-forming proteins that, if integrated into aquatic microbes, could potentially soak up excess carbon and transform it into limestone. While still early, the technology could tackle climate change with a carbon sink that lasts millions of years.

It seems imagination is the only limit to AI-based protein design. But there are still a few cases that AI can’t yet fully handle. Nature has a comprehensive list, but these stand out.

Back to Basics: Binders

When proteins interact with each other, binder molecules can increase or break apart those interactions. These molecules initially caught the eyes of protein designers because they can serve as drugs that block damaging cellular responses or boost useful ones.

There have been successes. Generative AI models, such as RFdiffusion, can readily model binders, especially for free-floating proteins inside cells. These proteins coordinate much of the cell’s internal signaling, including signals that trigger senescence or cancer. Binders that break the chain of communication could potentially halt the processes. They can also be developed into diagnostic tools. In one example, scientists engineered a glow-in-the-dark tag to monitor a cell’s status, detecting the presence of a hormone when the binder grabbed onto it.

But binders remain hard to develop. They need to interact with key regions on proteins. But because proteins are dynamic 3D structures that twist and turn, it’s often tough to nail down which regions are crucial for binders to latch onto.

Then there’s the data problem. Thanks to the hundreds of thousands of protein structures available in public databases, generative AI models can learn to predict protein-protein interactions. Data on binders, by contrast, is often kept secret by pharmaceutical companies—each organization maintains an in-house database cataloging how small molecules interact with proteins.

Several teams are now using AI to design simple binders for research. But experts stress these need to be tested in living organisms. AI can’t yet predict the biological consequences of a binder—it could either boost a process or shut it down. Then there’s the problem of hallucination, where an AI model dreams up binders that are completely unrealistic.

From here, the goal is to gather more and better data on how proteins grab onto molecules, and perhaps add a dose of their underlying biophysics.

Designing New Enzymes

Enzymes are proteins that catalyze life. They break down or construct new molecules, allowing us to digest food, build up our bodies, and maintain healthy brains. Synthetic enzymes can do even more, like sucking carbon dioxide from the atmosphere or breaking down plastic waste.

But designer enzymes are still tough to build. Most models are trained on natural enzymes, but structure doesn’t map neatly onto function: Enzymes that look vastly different can perform similar chemical reactions. AI evaluates structure, not function—meaning we’ll need to better understand how one leads to the other.

Like binders, enzymes also have “hotspots.” Scientists are racing to hunt these down with machine learning. There are early signs AI can design hotspots on new enzymes, but they still need to be heavily vetted. An active hotspot usually requires a good bit of scaffolding to work properly—without which it may not be able to grab its target or, if it does, let it go.

Enzymes are a tough nut to crack, especially because they’re in motion. For now, AI struggles to model their transformations. This, as it turns out, is a challenge for the field at large.

Shape-Shifting Headaches

AI models are trained on static protein structures. These snapshots have been hard won with decades of work, in which scientists freeze a protein in time to image its structure. But these images only capture a protein’s most stable shape, rather than its shape in motion—like when a protein grabs onto a binder or when an enzyme twists to fit into a protein nook.

For AI to truly “understand” proteins, researchers will have to train models on the changing structures as proteins shapeshift. Biophysics can help model a protein’s twists and turns, but it’s extremely difficult. Scientists are now generating libraries of synthetic and natural proteins and gradually mutating each to see how simple changes alter their structures and flexibility.

Adding a bit of “randomness” to how an AI model generates new structures could also help. AF-Cluster, built on AlphaFold2, injected bits of uncertainty into its neural network when predicting a known shape-shifting protein and performed well across multiple of its structures.

Protein prediction is a competitive race. But teams will likely need to work together too. Building a collaborative infrastructure for the rapid sharing of data could speed efforts. Adding so-called “negative data,” such as cases where AI-designed proteins or binders prove toxic in cells, could also guide other protein designers. A harder problem is that verifying AI-designed proteins can take years—by which time the underlying algorithm has already been updated.

Regardless, there’s no doubt AI is speeding protein design. Let’s see what next year has to offer.

Image Credit: Baker Lab

Textbook Depictions of Neurons May Be Wrong, According to Controversial Study

In the late 1800s, Spanish neuroscientist Santiago Ramón y Cajal drew hundreds of images of neurons. His exquisite work influenced our understanding of what they look like: Cells with a bulbous center, a forest of tree-like branches on one end, and a long, smooth tail on the other.

More than a century later, these images remain textbook. But a controversial study now suggests Ramón y Cajal, and neuroscientists since, might have missed a crucial detail.

A team from Johns Hopkins University found tiny “bubbles” dotted along the long tail—called the axon. Normally depicted as a mostly smooth, cylindrical cable, axons may instead look like “pearls on a string.”

Why care? Axons transmit electrical signals connecting the neural networks that give rise to our thoughts, memories, and emotions. Small changes in their shape could alter these signals and potentially the brain’s output—that is, our behavior.

“Understanding the structure of axons is important for understanding brain cell signaling,” Shigeki Watanabe at the Johns Hopkins University School of Medicine, who led the study, said in a press release.

The work took advantage of a type of microscopy that better preserves neuron structure. In three types of mouse neurons—some grown in petri dishes, others from adult mice and mouse embryos—the team consistently saw the nanopearls, suggesting they’re part of an axon’s normal shape.

“These findings challenge a century of understanding about axon structure,” said Watanabe.

The nanopearls weren’t static. Adding sugar to the neurons’ liquid environment or stripping cholesterol from their membranes—the fatty protective outer layer—altered the nanopearls’ size and distribution, along with the speed at which signals traveled down axons.

Reactions to the study were split. Some scientists welcomed the findings. Over the last 70 years, scientists have extensively studied axon shape and recognized its complex structure. With improving microscope technologies, discovering new structures isn’t surprising—but it is rather exciting.

Others are more skeptical. Speaking to Science, Christophe Leterrier of Aix-Marseille University, who was not involved in the study, said: “I think it’s true that [the axon is] not a perfect tube, but it’s not also just this kind of accordion that they show.”

Cable With a Chance of Stress Balls

Axons stretch inches across the brain, with diameters roughly 100 times smaller than a human hair’s. Although mostly tubular in shape, they’re dotted with occasional bubbles, called synaptic varicosities, that contain chemicals for transmitting information to neighboring neurons. These long branches mainly come in two types: Some are wrapped in fatty sheaths and others are “bare,” without the cushioning.

Although often compared to tree branches, axons are shapeshifters. A brief burst of electrical signaling, for example, causes synaptic varicosities to temporarily expand by 20 percent. The axons also grow slightly wider for a longer period, before settling back to their normal size.

These tiny changes have large impacts on brain computation. Like an electrical cable that can change its properties, they fine-tune signal strength between networks, and in turn, the overall function of neurons.

Axons have another trick up their sleeves: They shrink into “stress balls” after injury, such as an unexpected blow to the head during sports, or in Alzheimer’s or Parkinson’s disease. Stress balls are relatively large compared to synaptic varicosities. But they’re transient: The structures eventually loosen and regain a tubular shape. Rather than being harmful, they likely protect the brain by limiting damage to smaller regions and nurturing axons during recovery.

But axons’ shape-shifting prowess is temporary and often only under duress. What do axons look like in a healthy brain?

Pearls on a String

Roughly a decade ago, Watanabe noticed tiny bubbles in the axons of roundworms while developing a new microscopy technique. Although the structures were much smaller and more tightly packed than stress balls, he banked the results as a curiosity but didn’t investigate further. Years later, the University of Bergen’s Pawel Burkhardt also noticed pearly axons in comb jellies, tiny marine invertebrates.

In the new study, Watanabe and colleagues revisited the head-scratching findings, armed with a newer microscopy technique: High-pressure freezing. To image fine details in the brain, scientists usually dose it with multiple chemicals to set neurons in place. The treated brains are then sliced extremely thin, and the pieces are individually scanned with a microscope.

The procedure takes days. Without care, it can distort a neuron’s membrane and damage or even shred delicate axons. In contrast, high-pressure freezing better locks in the cell’s shape.

Using an electron microscope—which outlines a cell’s structure by shooting beams of electrons at it—the team studied “bare” axons from three sources: mouse neurons grown in a lab dish and those from thin slices of adult and embryonic mouse brains.

All axons had the peculiar pearl-like blobs along their entire length. Roughly 200 nanometers across, the nanopearls are far smaller than stress balls, and they’re spaced closer together. The beads likely form due to biophysics. Recent studies show that under tension, sections of a long tube crumple into beads—a phenomenon dubbed “membrane-driven instability.” Why this happens and its impact on brain function remains mostly mysterious, but the team has ideas.

Seeing Is Believing?

Using mathematical simulations, they modeled how changes in the surrounding environment impact an axon’s pearling and its electrical transmission.

Axons are surrounded by a goopy, protective protein gel, like a bubble suit. But they still experience physical forces—like when we rapidly snap our heads. Simulations found that physical tension surrounding neurons is a key player in managing axon pearling.

In another test, the team stripped cholesterol from the neurons—a component in their membranes—to make them more flexible and fluid-like. The tweak lessened pearling in simulations and slowed electrical signals as they passed through the simulated axon.

Recording electrical signals from living mouse neurons led to similar results. Smaller and more compactly packed nanopearls slowed signals down, whereas axons with larger and widely spaced ones led to faster transmission.

The results suggest an “intriguing idea” that changing biophysical forces could directly alter the speed of the brain’s electrical signaling, wrote the authors.

Not everyone is convinced.

Some scientists think the nanopearls are an artifact stemming from the preparation process. “While quick freezing is an extremely rapid process, something may happen during the manipulation of the sample” to cause beading, Pietro De Camilli at the Yale School of Medicine, who was not involved in the study, told Science. Others question if—like a stress ball—the nanopearls form during stress and will eventually unfold. We don’t yet know: Microscopy is a snapshot in time, rather than a movie.

Despite pushback, the team is turning to human axons. Healthy human brain tissue is hard to come by. They plan to look for signs of nanopearls in brain tissue removed during epilepsy surgery and from people who passed away due to neurodegenerative diseases. Brain organoids, or “mini-brains,” derived from healthy people could also help decipher axon shape.

Regardless, the study spurs the question: When it comes to brain anatomy, what else have we missed?

Image Credit: Bioscience Image Library by Fayette Reynolds on Unsplash

How to Be Healthy at 100: Centenarian Stem Cells Could Hold the Key

When Jeanne Calment died at the age of 122, her longevity had researchers scratching their heads. Although physically active for most of her life, she was also a regular smoker and enjoyed wine—lifestyle choices that are generally thought to decrease healthy lifespan.

Teasing apart the intricacies of human longevity is complicated. Diet, exercise, and other habits can change the trajectory of a person’s health as they grow older. Genetics also plays a role—especially during the twilight years. But experiments to test these ideas are difficult, in part because of our relatively long lifespan. Following a large population of people as they age is prohibitively expensive, and results could take decades. So, most studies have turned to animal aging models—including flies, rodents, and dogs—with far shorter lives.

But what if we could model human “aging in a dish” using cells derived from people with exceptionally long lives?

A new study, published in Aging Cell, did just that. Leveraging blood draws from the New England Centenarian Study—the largest and most comprehensive database of centenarians—the team transformed blood cells into induced pluripotent stem cells (iPSCs).

These cells contain their donor’s genetic blueprint. In essence, the team created a biobank of cells that could aid researchers in their search for longevity-related genes.

“Models of human aging, longevity, and resistance to and/or resilience against disease that allow for the functional testing of potential interventions are virtually non-existent,” wrote the team.

They’ve already shared these “super-aging” stem cells with the rest of the longevity community to advance understanding of the genes and other factors contributing to a healthier, longer life.

“This bank is really exciting,” Chiara Herzog, a longevity researcher at Kings College London, who was not involved in the study, told Nature.

Precious Resource

Centenarians are rare. According to the Pew Research Center, based on data from the US Census Bureau, they make up only 0.03 percent of the country’s population. Across the globe, roughly 722,000 people have celebrated their 100th birthday—a tiny fraction of the over eight billion people currently on Earth.

Centenarians don’t just live longer. They’re also healthier, even in extreme old age, and less likely to suffer age-related diseases, such as dementia, Type 2 diabetes, cancer, or stroke. Some evade these dangerous health problems altogether until the very end.

What makes them special? In the last decade, several studies have begun digging into their genes to see which are active (or not) and how this relates to healthy aging. Others have developed aging clocks, which use myriad biomarkers to determine a person’s biological age—that is, how well their bodies are working. Centenarians frequently stood out, with a genetic landscape and bodily functions resembling people far younger than expected for their chronological age.

Realizing the potential for studying human aging, the New England Centenarian Study launched in 1995. Now based at Boston University and led by Tom Perls and Stacy Andersen, both authors of the new study, the project has recruited centenarians through a variety of methods—voter registries, news articles, or mail to elderly care facilities.

Because longevity may have a genetic basis, their children were also invited to join, with spouses serving as controls. All participants reported on their socioeconomic status and medical history. Researchers assessed their cognition on video calls and screened for potential mental health problems. Finally, some participants had blood samples taken. Despite their age, many centenarians remained sharp and could take care of themselves.

Super-Ager Stem Cells

The team first tested participants with a variety of aging clocks. These measured methylation, which shuts genes down without changing their DNA sequences. Matching previous results, centenarians were, on average, six and a half years younger than their chronological age.
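Clocks like these are typically linear models: a weighted sum of methylation levels at specific CpG sites, plus an intercept, gives an age estimate. Here’s a minimal sketch of the idea in Python—the site names, weights, and sample profile are hypothetical stand-ins contrived for illustration, not coefficients from any clock used in the study:

```python
# A hypothetical three-site clock. Real clocks (e.g., Horvath's) use
# hundreds of CpG sites, with coefficients fit by penalized regression.
weights = {"cg0001": 60.0, "cg0002": -20.0, "cg0003": 35.0}
intercept = 20.0

def methylation_age(beta_values):
    """Weighted sum of methylation beta values (each 0-1) plus an intercept."""
    return intercept + sum(weights[site] * beta_values[site] for site in weights)

# Contrived profile of a 100-year-old whose clock reads "younger":
sample = {"cg0001": 0.8, "cg0002": 0.3, "cg0003": 0.9}
biological, chronological = methylation_age(sample), 100.0
print(f"biological age: {biological:.1f}")            # 93.5
print(f"age gap: {biological - chronological:+.1f}")  # -6.5 years
```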

The anti-aging boost wasn’t as prominent in their children. Some had higher biological ages and others lower. This could be because of variation in who inherited a genetic “signature” associated with longevity, wrote the team.

They then transformed blood cells from 45 centenarians into iPSCs. The people they chose were “at the extremes of health and functionality,” the team wrote. Because of their age, they initially expected that turning back the clock might not work on old blood cells.

Luckily, they were wrong. Several proteins showed the iPSCs were healthy and capable of making other cells. They also mostly maintained their genomic integrity—although surprisingly, cells from three male centenarians showed a slight loss of the Y chromosome.

Previous studies have found a similar deletion pattern in blood cells from males over 70 years of age. It could be a marker for aging and a potential risk factor for age-related conditions such as cancer and heart disease. Women, on average, live longer than men. The findings “allow for interesting research opportunities” to better understand why Y chromosome loss happens.

Unraveling Aging

Turning blood cells into stem cells erases signs of aging, especially those related to the cells’ epigenetic state. This controls whether genes are turned on or off, and it changes with age. But the underlying genetic code remains the same.

If the secrets to longevity are, even only partially, hidden in the genes, these super-aging stem cells could help researchers figure out what’s protective or damaging, in turn prompting new ideas that slow the ticking of the clock.

In one example, the team nudged the stem cells to become cortical neurons. These neurons form the outermost part of the brain responsible for sensing and reasoning. They’re also the first to decay in dementia or Alzheimer’s disease. Those derived from centenarians better fought off damage, such as rapidly limiting the spread of toxic proteins that accumulate with age.

Researchers are also using the cells to test for resilience against Alzheimer’s. Another experiment observed cell cultures made of healthy neurons, immune cells, and astrocytes. The latter, supporting cells that help keep brains healthy, were created using centenarian stem cells. Astrocytes have increasingly been implicated in Alzheimer’s, but their role has been hard to study in humans. Those derived from centenarian stem cells offer a way forward.

Each line of centenarian stem cells is linked to its donor—their demographics and their cognitive and physical state. This additional information could guide researchers in choosing the best centenarian cell line for their investigations into different aspects of aging. And because the cells can be transformed into a wide variety of tissues that decline with age—muscle, heart, or immune cells—they offer a new way to explore how aging affects different organs, and at what pace.

“The result of this work is a one-of-a-kind resource for studies of human longevity and resilience that can fuel the discovery and validation of novel therapeutics for aging-related disease,” wrote the authors.

Image Credit: Danie Franco on Unsplash

Study Suggests an mRNA Shot Could Reverse This Deadly Pregnancy Condition

With a single shot, scientists protected pregnant mice from a deadly complication called pre-eclampsia. The shot, inspired by mRNA vaccines, contains mRNA instructions to make a protein that reverses damage to the placenta—which occurs in the condition—protecting both mother and growing fetus.

Pre-eclampsia causes 75,000 maternal deaths and 500,000 fetal and newborn deaths every year around the globe. Trademark signs of the condition are extremely high blood pressure, reduced blood flow to the placenta, and sometimes seizures. Existing drugs, such as those that lower blood pressure, manage the symptoms but not the underlying causes.

“There aren’t any therapeutics that address the underlying problem, which is in the placenta,” study author Kelsey Swingle at the University of Pennsylvania told Nature.

Thanks to previous studies in mice, scientists already have an idea of what triggers pre-eclampsia: The placenta struggles to produce enough of a protein crucial to maintaining its structure and growth, called vascular endothelial growth factor (VEGF). The condition inhibits the protein’s activity, interfering with the maternal blood vessels that support placental health.

Restoring the protein could treat the condition at its core. The challenge is delivering it.

The team developed a lipid-nanoparticle system that directly targets the placenta. Like in Covid vaccines, these fatty “bubbles” are loaded with mRNA molecules that instruct cells to make the missing protein. But compared to standard lipid nanoparticles used in mRNA vaccines, the new bubbles—dubbed LNP 55—were 150 times more likely to home in on their target.

In two mouse models of pre-eclampsia, a single shot of the treatment boosted VEGF levels in the placenta, spurred growth of healthy blood vessels, and prevented symptoms. The treatment didn’t harm the fetuses. Rather, it helped them grow, and the newborn mouse pups were closer to a healthy weight.

The new approach is “an innovative method,” wrote Ravi Thadhani at Emory University and Ananth Karumanchi at the Cedars-Sinai Medical Center, who were not involved in the study.

A Surprising Start

The team didn’t originally focus on treating pre-eclampsia.

“We’re a drug delivery lab,” study author Michael Mitchell told Nature. But his interest was piqued when he started receiving emails from pregnant mothers, asking whether Covid-19 mRNA vaccines were safe for fetuses.

A quick recap: Covid vaccines contain two parts.

One is a strand of mRNA encoding the spike protein attached to the surface of the virus. Once in the body, the cell’s machinery processes the mRNA, makes the protein, and this triggers an immune response—so the body recognizes the actual virus after infection.

The other part is a lipid nanoparticle to deliver the mRNA cargo. These fatty bubbles are bioengineering wonders with multiple components. Some of these grab onto the mRNA; others stabilize the overall structure. A bit of cholesterol and other modified lipids lower the chance of immune attack.

Previously, scientists found that most lipid nanoparticles zoom towards the liver and release their cargo. But “being able to deliver lipid nanoparticles to parts of the body other than the liver is desirable, because it would allow designer therapeutics to be targeted specifically to the organ or tissue of interest,” wrote Thadhani and Karumanchi.

Inspired by the emails, the team first engineered a custom lipid nanoparticle that targets the placenta. They designed nearly 100 delivery bubbles—each with a slightly different lipid recipe—injected them into the bloodstream of pregnant mice, and tracked where they went.

One candidate, called LNP 55, especially stood out. The particles collected in the placenta, without going into the fetus. This is “ideal because the fetus is an ‘innocent bystander’ in pre-eclampsia” and likely not involved in triggering the complication, wrote Thadhani and Karumanchi. It could also lower any potential side effects to the fetus.

Compared to standard lipid nanoparticles, LNP 55 was 150 times more likely to move into multiple placental cell types, rather than the liver. The results got the team wondering: Can we use LNP 55 to treat pregnancy conditions?

Load It Up

The next step was finding the right cargo to tackle pre-eclampsia. The team decided on VEGF mRNA, which can fortify blood vessels in the placenta.

In two mouse models of pre-eclampsia, a single injection in the middle of pregnancy reduced high blood pressure almost immediately, and blood pressure remained stable until delivery of the pups. The treatment also lowered “toxins” secreted by the damaged placenta.

“This is a really exciting outcome, and it suggests that perhaps we’re remolding the vasculature [blood vessel structure] to kind of see a really sustained therapeutic effect,” said Swingle.

The treatment also benefited the developing pups. Moms with pre-eclampsia often give birth to babies that weigh less. This is partly because doctors induce early delivery as a mother’s health declines. But an unhealthy placenta also contributes. Standard care for the condition can manage the mother’s symptoms, but it doesn’t change birth weight. The fetuses look almost “shriveled up” because of poor nutrition and lack of oxygen, said Mitchell.

Pups from moms treated with VEGF mRNA were far larger and healthier, looking almost exactly the same as normal mice born without pre-eclampsia.

A Long Road Ahead

Though promising, there are a few roadblocks before the treatment can help pregnant humans.

Our placentas are vastly different from those of mice, especially in their cellular makeup. The team is considering guinea pigs—which, surprisingly, have placentas more like humans’—for future testing. Higher doses of VEGF may also trigger side effects, such as making blood vessels leakier, although the problem wasn’t seen in this study.

Dosing schedule is another problem. Mice are pregnant for roughly 20 days, a sliver of time compared to a human’s 40 weeks. While a single dose worked in mice, the effects may not last for longer pregnancies.

Then there’s timing. In humans, pre-eclampsia begins early when the placenta is just taking shape. Starting the treatment earlier, rather than in the middle of a pregnancy, could have different results.

Regardless, the study is welcome. Research into pregnancy complications has lagged behind that into cancer, heart conditions, metabolic disorders, and even some rare diseases. Limited funding aside, developing drugs for pregnancy is far more difficult because of stringent regulations in place to protect mother and fetus from unexpected and potentially catastrophic side effects.

The new work “offers a promising opportunity to tackle pre-eclampsia, one of the most common and devastating medical complications in pregnancy, and one that is in dire need of intervention,” wrote Thadhani and Karumanchi.

Image Credit: Isaac Quesada on Unsplash

The Secret to Predicting How Your Brain Will Age May Be in Your Blood

Brain aging occurs in distinctive phases. Its trajectory could be hidden in our blood—paving the way for early diagnosis and intervention.

A new study published in Nature Aging analyzed brain imaging data from nearly 11,000 healthy adults, middle-aged and older, using AI to gauge their “brain age.” Roughly half of participants had their blood proteins analyzed to fish out those related to aging.

Scientists have long looked for the markers of brain aging in blood proteins, but this study had a unique twist. Rather than mapping protein profiles to a person’s chronological age—the number of years on your birthday card—they used biological brain age, which better reflects the actual working state of the brain as the clock ticks on.

Thirteen proteins popped up—eight associated with faster brain aging and five that slowed down the clock. Most alter the brain’s ability to handle inflammation or are involved in cells’ ability to form connections.

From these, three unique “signatures” emerged at 57, 70, and 78 years of age. Each showed a combination of proteins in the blood marking a distinct phase of brain aging. Those related to neuron metabolism peaked early, while others spurring inflammation were more dominant in the twilight years.

These spikes signal a change in the way the brain functions with age. They may be points of intervention, wrote the authors. Rather than relying on brain scans, which aren’t readily available to many people, the study suggests that a blood test for these proteins could one day be an easy way to track brain health as we age.

The protein markers could also help us learn to prevent age-related brain disorders, such as dementia, Alzheimer’s disease, stroke, or problems with movement. Early diagnosis is key. Although the protein “hallmarks” don’t test for the disorders directly, they offer insight into the brain’s biological age, which often—but not always—correlates with signs of aging.

The study helps bridge gaps in our understanding of how brains age, the team wrote.

Treasure Trove

Many people know folks who are far sharper than expected at their age. A dear relative of mine, now in their mid-80s, eagerly adopted ChatGPT, AI-assisted hearing aids, and “Ok Google.” Their eyes light up anytime they get to try a new technology. Meanwhile, I watched another relative—roughly the same age—rapidly lose their wit, sharp memory, and eventually, the ability to realize they were no longer logical.

My experiences are hardly unique. With the world rapidly aging, many of us will bear witness to, and experience, the brain aging process. Projections suggest that by 2050, over 1.5 billion people will be 65 or older, with many potentially experiencing age-related memory or cognitive problems.

But chronological age doesn’t reflect the brain’s actual functions. For years, scientists studying longevity have focused on “biological age” to gauge bodily functions, rather than the year on your birth certificate. This has led to the development of multiple aging clocks, with each measuring a slightly different aspect of cell aging. Hundreds of these clocks are now being tested, as clinical trials use them to gauge the efficacy of potential anti-aging treatments.

Many of the clocks were built by taking tiny samples from the body and analyzing certain gene expression patterns linked to the aging process. It’s tough to do that with the brain. Instead, scientists have largely relied on brain scans, showing structure and connectivity across regions, to build “brain clocks.” These networks gradually erode as we age.

These studies calculate the “brain age gap”—the difference between the brain’s biological age, estimated from its structure, and a person’s chronological age. A ten-year gap, for example, means your brain’s networks are more similar to those of people a decade younger, or older, than you.

Most studies have had a small number of participants. The new study tapped into the UK Biobank, a comprehensive dataset of roughly half a million people with regular checkups—including brain scans and blood draws—offering up a deluge of data for analysis.

The Brain Age Gap

Using machine learning, the study first sorted through brain scans of almost 11,000 people aged 45 to 82 to calculate their biological brain age. The AI model was trained on hundreds of structural features of the brain, such as overall size, thickness of the cortex—the outermost region—and the amount and integrity of white matter.

They then calculated the brain age gap for each person. On average, the gap was roughly three years, swinging both ways, meaning some people had either a slightly “younger” or “older” brain.
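The underlying recipe is simple to sketch: train a regression model to predict chronological age from structural brain features in healthy adults, then read the residual as the gap. The toy example below uses synthetic data and an off-the-shelf scikit-learn regressor—not the study’s actual model or features:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for MRI-derived features (cortical thickness,
# volumes, white-matter integrity) across 1,000 healthy adults aged 45-82.
n_people, n_features = 1000, 20
age = 45 + 37 * rng.random(n_people)
features = rng.normal(size=(n_people, n_features))
features[:, 0] += 0.05 * age  # plant a weak age signal in two features
features[:, 1] -= 0.03 * age

model = GradientBoostingRegressor().fit(features, age)
predicted_brain_age = model.predict(features)  # in practice, use held-out scans

# Positive gap: the brain "looks" older than the birth certificate says.
brain_age_gap = predicted_brain_age - age
print(f"mean absolute gap: {np.abs(brain_age_gap).mean():.1f} years")
```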

Next, the team tried to predict the brain age gap by measuring proteins in plasma, the liquid part of blood. Longevity research in mice has uncovered many plasma proteins that age or rejuvenate the brain.

After screening nearly 3,000 plasma proteins from 4,696 people, they matched each person’s protein profile to the participant’s brain age. They found 13 proteins associated with the brain age gap, with most involved in inflammation, movement, and cognition.
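Conceptually, a screen like this tests each protein against the brain age gap and then corrects for the thousands of tests performed. A simplified sketch of that logic, using simulated numbers rather than the study’s measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated stand-in for the screen: ~3,000 plasma proteins measured
# in 4,696 people, each tested against the brain age gap.
n_people, n_proteins = 4696, 3000
proteins = rng.normal(size=(n_people, n_proteins))
gap = rng.normal(size=n_people) + 0.2 * proteins[:, 0]  # plant one true signal

pvals = np.array(
    [stats.pearsonr(proteins[:, j], gap)[1] for j in range(n_proteins)]
)

# Benjamini-Hochberg procedure: control the false discovery rate at 5%.
order = np.argsort(pvals)
thresholds = 0.05 * np.arange(1, n_proteins + 1) / n_proteins
passed = pvals[order] <= thresholds
n_hits = passed.nonzero()[0].max() + 1 if passed.any() else 0
print(f"{n_hits} proteins pass FDR correction")
```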

Two proteins particularly stood out.

One, called brevican or BCAN, helps maintain the brain’s wiring and overall structure and supports learning and memory. The protein dwindles in Alzheimer’s disease. Higher levels, in contrast, were associated with slower brain aging and a lower risk of dementia and stroke.

The other protein, growth differentiation factor 15 (GDF15), is released by the body when it senses damage. Higher levels correlated with a higher risk of age-related brain disease, likely because it sparks chronic inflammation—a “hallmark” of aging.

There was also a surprising result.

Plasma protein levels didn’t change linearly with age. Instead, changes peaked at three chronological ages—57, 70, and 78—with each stage marking a distinctive phase of brain aging.

At 57, for example, proteins related to brain metabolism and wound healing changed markedly, suggesting early molecular signs of brain aging. By 70, proteins that support the brain’s ability to rewire itself—some strongly associated with dementia and stroke—changed rapidly. Another peak, at 78, showed protein changes mostly related to inflammation and immunity.

“Our findings thus emphasize the importance and necessity of intervention and prevention at brain age 70 years to reduce the risk of multiple brain disorders,” wrote the authors.

To be clear: These are early results. The participants are largely of European ancestry, and the results may not translate to other populations. The 13 proteins also need further testing in animals before any can be validated as biomarkers. But the study paves the way.

Their results, the authors conclude, suggest the possibility of earlier, simpler diagnosis of age-related brain disorders and the development of personalized therapies to treat them.

Thousands of Undiscovered Genes May Be Hidden in DNA ‘Dark Matter’

Thousands of new genes are hidden inside the “dark matter” of our genome.

Previously thought to be noise left over from evolution, a new study found that some of these tiny DNA snippets can make miniproteins—potentially opening a new universe of treatments, from vaccines to immunotherapies for deadly brain cancers.

The preprint, not yet peer-reviewed, is the latest from a global consortium that hunts down potential new genes. Ever since the Human Genome Project completed its first draft at the turn of the century, scientists have tried to decipher the genetic book of life. Buried within the four genetic letters—A, T, C, and G—and the proteins they encode is a wealth of information that could help tackle our most frustrating medical foes, such as cancer.

The Human Genome Project’s initial findings came as a surprise. Scientists found fewer than 30,000 genes that build our bodies and keep them running—roughly a third of the number previously predicted. Now, roughly 20 years later, as the technologies that sequence our DNA or map proteins have become increasingly sophisticated, scientists are asking: “What have we missed?”

The new study filled the gap by digging into relatively unexplored portions of the genome. Called “non-coding,” these parts haven’t yet been linked to any proteins. Combining several existing datasets, the team zeroed in on thousands of potential new genes that make roughly 3,000 miniproteins.

Whether these proteins are functional remains to be tested, but initial studies suggest some are involved in a deadly childhood brain cancer. The team is releasing their tools and results to the wider scientific community for further exploration. The platform isn’t just limited to deciphering the human genome; it can delve into the genetic blueprint of other animals and plants as well.

Even though mysteries remain, the results “help provide a more complete picture of the coding portion of the genome,” Ami Bhatt at Stanford University told Science.

What’s in a Gene?

A genome is like a book without punctuation. Sequencing one is relatively easy today, thanks to cheaper costs and higher efficiency. Making sense of it is another matter.

Ever since the Human Genome Project, scientists have searched our genetic blueprint to find the “words,” or genes, that make proteins. These DNA words are further broken down into three-letter codons, each one encoding a specific amino acid—the building block of a protein.

A gene, when turned on, is transcribed into messenger RNA. These molecules shuttle genetic information from DNA to the cell’s protein-making factory, called the ribosome. Picture it as a sliced bun, with an RNA molecule running through it like a piece of bacon.

When first defining a gene, scientists focus on open reading frames. These are made of specific DNA sequences that dictate where a gene starts and stops. Like a search function, the framework scans the genome for potential genes, which are then validated with lab experiments based on myriad criteria. These include whether they can make proteins of a certain size—more than 100 amino acids. Sequences that meet the mark are compiled into GENCODE, an international database of officially recognized genes.
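At its core, that “search function” can be surprisingly simple. Below is a bare-bones ORF scanner for the forward strand of a DNA string; real annotation pipelines, like those behind GENCODE, layer splice sites, homology, and expression evidence on top of this kind of scan:

```python
# Find stretches that start with ATG and end at an in-frame stop codon,
# keeping only those long enough to encode 100+ amino acids. A real
# scanner would also check the reverse-complement strand.
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(dna, min_codons=100):
    """Yield (start, end) positions of open reading frames, forward strand."""
    for frame in range(3):
        start = None
        for i in range(frame, len(dna) - 2, 3):
            codon = dna[i:i + 3]
            if start is None and codon == "ATG":
                start = i
            elif start is not None and codon in STOPS:
                if (i - start) // 3 >= min_codons:
                    yield start, i + 3
                start = None

genome_chunk = "ATG" + "GCT" * 120 + "TAA"  # toy sequence: 121 codons, then stop
print(list(find_orfs(genome_chunk)))        # [(0, 366)]
```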

Genes that encode proteins have attracted the most attention because they aid our understanding of disease and inspire ways to treat it. But much of our genome is “non-coding,” in that large sections of it don’t make any known proteins.

For years, these chunks of DNA were considered junk—the defunct remains of our evolutionary past. Recent studies, however, have begun revealing hidden value. Some bits regulate when genes turn on or off. Others, such as telomeres, protect against the degradation of DNA as it replicates during cell division and ward off aging.

Still, the dogma was that these sequences don’t make proteins.

A New Lens

Recent evidence is piling up that non-coding areas do have protein-making segments that affect health.

One study found that a small missing section in supposedly non-coding areas caused inherited bowel troubles in infants. In mice genetically engineered to mimic the same problem, restoring the DNA snippet—not yet defined as a gene—reduced their symptoms. The results highlight the need to go beyond known protein-coding genes to explain clinical findings, the authors wrote.

Dubbed non-canonical open reading frames (ncORFs), or “maybe-genes,” these snippets have popped up across human cell types and diseases, suggesting they have physiological roles.

In 2022, the consortium behind the new study began peeking into potential functions, hoping to broaden our genetic vocabulary. Rather than sequencing the genome, they looked at datasets that sequenced RNA as it was being turned into proteins in the ribosome.

The method captures the actual output of the genome—even extremely short amino acid chains normally thought too small to make proteins. Their search produced a catalog of over 7,000 human “maybe-genes,” some of which made microproteins that were eventually detected inside cancer and heart cells.

But overall, at that time “we did not focus on the questions of protein expression or functionality,” wrote the team. So, they broadened their collaboration in the new study, welcoming specialists in protein science from over 20 institutions across the globe to make sense of the “maybe-genes.”

They also included several resources that provide protein databases from various experiments—such as the Human Proteome Organization and the PeptideAtlas—and added data from published experiments that use the human immune system to detect protein fragments.

In all, the team analyzed over 7,000 “maybe-genes” from a variety of cells: healthy cells, cancerous cells, and immortalized cell lines grown in the lab. At least a quarter of these “maybe-genes” translated into over 3,000 miniproteins. These are far smaller than normal proteins and have a unique amino acid makeup. They also seem to be more attuned to parts of the immune system—meaning they could potentially help scientists develop vaccines, autoimmune treatments, or immunotherapies.

Some of these newly found miniproteins may not have a biological role at all. But the study gives scientists a new way to interpret potential functions. For quality control, the team organized each miniprotein into a different tier, based on the amount of evidence from experiments, and integrated them into an existing database for others to explore.

We’re just beginning to probe our genome’s dark matter. Many questions remain.

“A unique capacity of our multi-consortium collaboration is the ability to develop consensus on the key challenges” that we feel need answers, wrote the team.

For example, some experiments used cancer cells, meaning that certain “maybe-genes” might only be active in those cells—but not in normal ones. Should they be called genes?

From here, deep learning and other AI methods may help speed up analysis. Although annotating genes is “historically rooted in manual inspection” of the data, wrote the authors, AI can churn through multiple datasets far faster, if only as a first pass to find new genes.

How many might scientists discover? “50,000 is in the realm of possibility,” study author Thomas Martinez told Science.

Image Credit: Miroslaw Miras from Pixabay

Google DeepMind’s New AI Weatherman Tops World’s Most Reliable System

This was another year of rollercoaster weather. Heat domes broiled the US southwest. California experienced a “second summer” in October, with multiple cities breaking heat records. Hurricane Helene—and just a few weeks later, Hurricane Milton—pummeled the Gulf Coast, unleashing torrential rainfall and severe flooding. What shocked even seasoned meteorologists was how fast the hurricanes intensified, with one choking up as he said, “this is just horrific.”

When bracing for extreme weather, every second counts. But planning measures rely on accurate predictions. Here’s where AI comes in.

This week, Google DeepMind unveiled an AI that predicts weather 15 days in advance in minutes, rather than the hours usually needed with traditional models. In a head-to-head with the European Centre for Medium-Range Weather Forecasts’ ensemble model (ENS)—the best “medium-range” weather forecaster today—the AI won over 90 percent of the time.

Dubbed GenCast, the algorithm is DeepMind’s latest foray into weather prediction. Last year, the lab unleashed a version with strikingly accurate 10-day forecasts. GenCast differs in its machine learning architecture. True to its name, it’s a generative AI model, roughly similar to those that power ChatGPT and Gemini or that generate images and videos from a text prompt.

The setup gives GenCast an edge over previous models, which usually provide a single weather path prediction. GenCast, in contrast, pumps out 50 or more predictions—each representing a potential weather trajectory, while assigning their likelihood.

In other words, the AI “imagines” a multiverse of future weather possibilities and picks the one with the largest chance of occurring.

GenCast didn’t just excel at day-to-day weather prediction. It also beat ENS at predicting extreme weather—heat, cold, and high wind speeds. Challenged with data from Typhoon Hagibis—the deadliest tropical cyclone to strike Japan in decades—GenCast visualized possible routes seven days before landfall.

“As climate change drives more extreme weather events, accurate and trustworthy forecasts are more essential than ever,” wrote study authors Ilan Price and Matthew Wilson in a DeepMind blog post.

Embracing Uncertainty

Predicting weather is notoriously difficult, largely because weather is a chaotic system. You might have heard of the “butterfly effect”—a butterfly flaps its wings, stirring a tiny change in the atmosphere that triggers storms and other weather disasters a world away. Although just a metaphor, it highlights that small changes in initial weather conditions can rapidly spread across large regions, changing weather outcomes.

For decades, scientists have tried to emulate these processes using physical simulations of the Earth’s atmosphere. By gathering data from weather stations across the globe and satellites, they’ve written equations mapping current estimates of the weather and forecasting how they’ll change over time.

The problem? The deluge of data takes hours, if not days, to crunch on supercomputers, and consumes a huge amount of energy.

AI may be able to help. Rather than mimicking the physics of atmospheric shifts or the swirls of our oceans, these systems slurp up decades of data to find weather patterns. GraphCast, released in 2023, captured more than a million points across our planet’s surface to predict 10-day weather in less than a minute. Others in the race to improve weather forecasting are Huawei’s Pangu-Weather and NowcastNet, both developed in China. The latter gauges the chance of rain with high accuracy—one of the toughest aspects of weather prediction.

But weather is finicky, and GraphCast and similar weather-prediction AI models are deterministic: they forecast only a single weather trajectory. The weather community is now increasingly embracing “ensemble models,” which predict a range of possible scenarios.

“Such ensemble forecasts are more useful than relying on a single forecast, as they provide decision makers with a fuller picture of possible weather conditions in the coming days and weeks and how likely each scenario is,” wrote the team.

Cloudy With a Chance of Rain

GenCast tackles the weather’s uncertainty head-on. The AI mainly relies on a diffusion model, a type of generative AI. Overall, it incorporates 12 metrics about the Earth’s surface and atmosphere—such as temperature, wind speed, humidity, and atmospheric pressure—traditionally used to gauge weather.

The team trained the AI on 40 years of historical weather data from a publicly available database up to 2018. Rather than asking for one prediction, they had GenCast spew out a number of forecasts, each one starting with a slightly different weather condition—a different “butterfly,” so to speak. The results were then combined into an ensemble forecast, which also predicted the chance of each weather pattern actually occurring.
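As a hypothetical illustration of that recipe, perturb the starting conditions, run the model from each, then summarize the resulting ensemble. Continuing in the same toy spirit as the earlier sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_forecast(start: float, days: int = 15) -> np.ndarray:
    """Stand-in forecast model: a drifting random walk from an initial state."""
    return start + np.cumsum(rng.normal(0.1, 0.3, size=days))

analysis = 20.0  # best estimate of today's state (say, temperature in Celsius)
# Each ensemble member starts from a slightly different "butterfly":
ensemble = np.stack(
    [toy_forecast(analysis + rng.normal(0.0, 0.05)) for _ in range(50)]
)

mean_forecast = ensemble.mean(axis=0)  # the central trajectory
spread = ensemble.std(axis=0)          # uncertainty, which grows with lead time
print(f"Day 1 spread: {spread[0]:.2f}, day 15 spread: {spread[-1]:.2f}")
```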

When tested with weather data from 2019, which GenCast had never seen, the AI outperformed the current leader, ENS—especially for longer-term forecasting up to 15 days. Checked against recorded data, it beat ENS 97 percent of the time across 1,300 measures of weather prediction.
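Probabilistic forecasts like these are typically scored with metrics such as the continuous ranked probability score (CRPS), which rewards ensembles that are both accurate and well-calibrated. A minimal sample-based estimator, with made-up numbers for illustration:

```python
import numpy as np

def crps_ensemble(samples: np.ndarray, observed: float) -> float:
    """Sample-based CRPS: E|X - y| - 0.5 * E|X - X'|. Lower is better."""
    accuracy = np.mean(np.abs(samples - observed))
    sharpness = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return accuracy - sharpness

rng = np.random.default_rng(2)
confident = rng.normal(20.0, 1.0, size=50)  # accurate, tight ensemble
hedging = rng.normal(22.0, 4.0, size=50)    # biased, diffuse ensemble
truth = 20.3
print(crps_ensemble(confident, truth))  # smaller (better) score
print(crps_ensemble(hedging, truth))    # larger (worse) score
```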

GenCast’s predictions are also blazingly fast. Compared to the hours on supercomputers usually needed to generate results, the AI churned out predictions in roughly eight minutes. If adopted, the system could add valuable time for emergency notices.

All for One

Although GenCast wasn’t explicitly trained to forecast severe weather patterns, it was able to predict the path of Typhoon Hagibis before landfall in central Japan. One of the deadliest storms in decades, the typhoon flooded neighborhoods up to the rooftops as water broke through levees and took out much of the region’s electrical power.

GenCast’s ensemble prediction was like a movie. It began with a relatively wide range of possible paths for Typhoon Hagibis seven days before landfall. As the storm edged closer, however, the AI got more accurate, narrowing its predictive path. Although not perfect, GenCast painted an overall trajectory of the devastating cyclone that closely matched recorded data.

Given a week of lead time, “GenCast can provide substantial value in decisions about when and how to prepare for tropical cyclones,” wrote the authors.

Accurate and longer predictions don’t just help prepare for future climate challenges. They could also help optimize renewable energy planning. Take wind power. Predicting where, when, and how strong wind is likely to blow could increase the power source’s reliability—reducing costs and potentially upping adoption of the technology. In a proof-of-concept analysis, GenCast was more accurate than ENS at predicting total wind power generated by over 5,000 wind power plants across the globe, opening the possibility of building wind farms based on data.
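Here’s a back-of-envelope sketch of how an ensemble forecast feeds into that kind of planning. The turbine power curve below uses invented numbers, not figures from the study:

```python
import numpy as np

def power_curve(wind: np.ndarray) -> np.ndarray:
    """Toy turbine power curve with assumed cut-in/rated/cut-out speeds.

    Output grows roughly with the cube of wind speed, saturates at
    rated power, and drops to zero in dangerously high winds.
    """
    cut_in, rated, cut_out = 3.0, 12.0, 25.0  # meters per second, assumed
    frac = np.clip((wind / rated) ** 3, 0.0, 1.0)
    frac[(wind < cut_in) | (wind > cut_out)] = 0.0
    return frac

rng = np.random.default_rng(3)
wind_ensemble = rng.gamma(shape=4.0, scale=2.0, size=50)  # 50 forecast wind speeds
expected_output = power_curve(wind_ensemble).mean()       # average over scenarios
print(f"Expected output: {expected_output:.0%} of rated power")
```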

GenCast isn’t the only AI weatherman. Nvidia’s FourCastNet also uses deep learning to predict weather at a lower energy cost than traditional methods. Google Research has also engineered myriad weather-predicting algorithms, including NeuralGCM and SEEDS. Some are being integrated into Google search and maps, including rain forecasts, wildfires, flooding, and heat alerts. Microsoft joined the race with ClimaX, a flexible AI that can be tailored to generate predictions from hours to months ahead (with varying accuracies).

All this is not to say AI will be taking jobs from meteorologists. The DeepMind team stresses that GenCast wouldn’t be possible without foundational work from climate scientists and physics-based models. To give back, they’re releasing aspects of GenCast to the wider weather community to gain further insights and feedback.

Image Credit: NASA

Most Supposedly ‘Open’ AI Systems Are Actually Closed—and That’s a Problem https://singularityhub.com/2024/11/30/most-supposedly-open-ai-systems-are-actually-closed-and-thats-a-problem/ Sat, 30 Nov 2024 15:00:17 +0000

“Open” AI models have a lot to give. The practice of sharing source code with the public spurs innovation and democratizes AI as a tool.

Or so the story goes. A new analysis in Nature puts a twist on the narrative: Most supposedly “open” AI models, such as Meta’s Llama 3, are hardly that.

Rather than encouraging or benefiting small startups, the “rhetoric of openness is frequently wielded in ways that…exacerbate the concentration of power” in large tech companies, wrote David Widder at Cornell University, Meredith Whittaker at Signal Foundation, and Sarah West at AI Now Institute.

Why care? Debating AI openness may seem purely academic. But with the growing use of ChatGPT and other large language models, policymakers are scrambling to catch up. Can the models be allowed in schools or companies? What guardrails should be in place to protect against misuse?

And perhaps most importantly, most AI models are controlled by Google, Meta, and other tech giants, which have the infrastructure and financial means to either develop or license the technology—and in turn, guide the evolution of AI to meet their financial incentives.

Lawmakers around the globe have taken note. This year, the European Union adopted the AI Act, the world’s first comprehensive legislation to ensure the AI systems in use are “safe, transparent, non-discriminatory, and environmentally friendly.” As of September, there were over 120 AI bills in Congress covering privacy, accountability, and transparency.

In theory, open AI models can deliver those needs. But “when policy is being shaped, definitions matter,” wrote the team.

In the new analysis, they broke down the concept of “openness” in AI models across the entire development cycle and pinpointed how the term can be misused.

What Is ‘Openness,’ Anyway?

The term “open source” is nearly as old as software itself.

At the turn of the century, small groups of computing rebels released code for free software that anyone could download and use in defiance of corporate control. They had a vision: Open-source software, such as freely available word processors similar to Microsoft’s, could level the playing field for the little guys and allow access for people who couldn’t afford the technology. The code also became a playground, where eager software engineers fiddled around to discover flaws in need of fixing—resulting in more usable and secure software.

With AI, the story’s different. Large language models are built with numerous layers of interconnected artificial “neurons.” Similar to their biological counterparts, the structure of those connections heavily influences a model’s performance in a specific task.

Models are trained by scraping the internet for text, images, and increasingly, videos. As this training data flows through their neural networks, they adjust the strengths of their artificial neurons’ connections—dubbed “weights”—so that they generate desired outputs. Most systems are then evaluated by people to judge the accuracy and quality of the results.
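In miniature, that training loop looks like the sketch below: nudge the weights in whatever direction shrinks the gap between the model’s outputs and the desired ones. Real language models do this over billions of weights and trillions of examples; this toy fits a single linear layer:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=(100, 3))            # toy training inputs
y = x @ np.array([2.0, -1.0, 0.5])       # desired outputs

w = np.zeros(3)                          # the model's "weights," initially naive
for _ in range(500):
    error = x @ w - y                    # how far outputs are from targets
    gradient = 2 * x.T @ error / len(x)  # direction that increases the error
    w -= 0.1 * gradient                  # so step the opposite way
print(w)                                 # converges near [2.0, -1.0, 0.5]
```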

The problem? Understanding these systems’ internal processes isn’t straightforward. Unlike traditional software, sharing only the weights and code of an AI model, without the underlying training data, makes it difficult for other people to detect potential bugs or security threats.

This means previous concepts from open-source software are being applied in “ill-fitting ways to AI systems,” wrote the team, leading to confusion about the term.

Openwashing

Current “open” AI models span a range of openness, but overall, they have three main characteristics.

One is transparency, or how much detail about an AI model’s setup its creator publishes. EleutherAI’s Pythia series, for example, allows anyone to download the source code, underlying training data, and full documentation. The team also licenses the AI model for wide reuse, meeting the definition of “open source” from the Open Source Initiative, a non-profit that has defined the term as it has evolved over nearly three decades. In contrast, Meta’s Llama 3, although described as open, only allows people to build on the AI through an API—a sort of interface that lets different software communicate, without sharing the underlying code—or to download just the model’s weights to tinker with, subject to restrictions on usage.
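The difference shows up in code. With a genuinely open release like Pythia, anyone can pull the full weights and run the model locally. A minimal sketch using the Hugging Face transformers library, assuming it and PyTorch are installed:

```python
# pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Pythia publishes weights, code, training data, and docs, so the
# whole model can be downloaded and inspected, not just queried.
name = "EleutherAI/pythia-70m"  # smallest model in the series
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("Open-source software began as", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=12)
print(tokenizer.decode(output[0]))
```

An API-only model, by contrast, accepts a prompt over the network and returns text; the weights never leave the provider’s servers.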

“This is ‘openwashing’ systems that are better understood as closed,” wrote the authors.

A second characteristic is reusability, meaning the openly licensed data and details of an AI model can be reused by other people (although often only through a cloud service—more on that later). The third characteristic, extensibility, lets people fine-tune existing models for their specific needs.

“[This] is a key feature championed particularly by corporate actors invested in open AI,” wrote the team. There’s a reason: Training AI models requires massive computing power and resources often only available to large tech companies. Llama 3, for example, was trained on 15 trillion tokens—a unit for processing data, such as words or characters. These choke points make it hard for startups to build AI systems from scratch. Instead, they often retrain “open” systems to adapt them to a new task or run more efficiently. Stanford’s Alpaca model, based on Llama, for example, gained interest because it could run on a laptop.
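For a feel of what a “token” is, a tokenizer chops text into words and word-fragments. Counts vary by tokenizer (Llama uses its own), but Pythia’s, reused from the earlier sketch, illustrates the idea:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
pieces = tok.tokenize("Training large language models takes trillions of tokens.")
print(len(pieces))  # a handful of tokens for one short sentence
print(pieces)       # words and word-fragments, the model's unit of text
```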

There’s no doubt that many people and companies have benefited from open AI models. But to the authors, they may also be a barrier to the democratization of AI.

The Dark Side

Many large-scale open AI systems today are trained on cloud servers, the authors note. The UAE’s Technology Innovation Institute developed Falcon 40B and trained it on Amazon’s AWS servers. MosaicML’s AI is “tied to Microsoft’s Azure.” Even OpenAI has partnered with Microsoft to offer its new AI models at a price.

While cloud computing is extremely useful, it limits who can actually run AI models to a handful of large companies—and their servers. Stanford’s Alpaca eventually shut down partially due to a lack of financial resources.

Secrecy around training data is another concern. “Many large-scale AI models described as open neglect to provide even basic information about the underlying data used to train the system,” wrote the authors.

Large language models process huge amounts of data scraped from the internet, some of which is copyrighted, resulting in a number of ongoing lawsuits. When datasets aren’t readily made available, or when they’re incredibly large, it’s tough to fact-check a model’s reported performance or to tell whether the datasets “launder others’ intellectual property,” according to the authors.

The problem can compound in development frameworks, often built by large tech companies to save developers from “[reinventing] the wheel.” These pre-written pieces of code, workflows, and evaluation tools help developers quickly build on an AI system. However, most tweaks don’t change the underlying model. In other words, whatever problems or biases exist inside the models can also propagate to downstream applications.

An AI Ecosystem

To the authors, developing AI that’s more open isn’t about evaluating one model at a time. Rather, it’s about taking the whole ecosystem into account.

Most debates on AI openness miss the larger picture. As AI advances, “the pursuit of openness on its own will be unlikely to yield much benefit,” wrote the team. Instead, the entire cycle of AI development—from setting up, training, and running AI systems to their practical uses and financial incentives—has to be considered when building open AI policies.

“Pinning our hopes on ‘open’ AI in isolation will not lead us to that world,” wrote the team.


Why Are Our Brains So Big? Because They Excel at Damage Control https://singularityhub.com/2024/11/26/why-are-our-brains-so-big-because-they-excel-at-damage-control/ Tue, 26 Nov 2024 15:00:39 +0000

Compared to other primates, our brains are exceptionally large. Why?

A new study comparing neurons from different primates pinpointed several genetic changes unique to humans that buffer our brains’ ability to handle everyday wear and tear. Dubbed “evolved neuroprotection,” the findings paint a picture of how our large brains gained their size, wiring patterns, and computational efficiency.

It’s not just about looking into the past. The results could also inspire new ways to tackle schizophrenia, Parkinson’s disease, and addiction, all linked to the gradual erosion of one type of brain cell. Understanding this wiring may also spur artificial brains that learn like ours.

The results haven’t yet been reviewed by other scientists. But to Andre Sousa at the University of Wisconsin-Madison, who wasn’t involved in the work, the findings can help us understand “human brain evolution and all the potentially negative and positive things that come with it.”

Bigger Brain, Bigger Price

Six million years ago, we split from a common ancestor with our closest evolutionary relative, the chimpanzee.

Our brains rapidly exploded in size—but crucially, only in certain regions. One of these was at the front of the brain. Called the prefrontal cortex, it’s an “executive control” center that lets us reason, make difficult decisions, and exercise self-control. Another region, the striatum, buried deep in the brain, processes emotions and gives us the ability to easily move with just a thought.

The two regions are in ready communication, and their chatter may give rise to parts of our intellect and social interactions, such as theory of mind—where we can gauge another person’s emotions, beliefs, and intentions. Dopamine neurons, a type of brain cell, bridge this connection.

They may sound familiar. Dopamine, which these neurons pump out, is known as the “feel-good” molecule. But the neurons do much more. Spread across the entire brain, dopamine neurons dial the activity of certain neural networks up or down, including those regulating emotion and movement. They’re like light dimmers: rather than flipping brain networks on or off like a simple switch, they fine-tune the level of activity.

These cells “coordinate multiple aspects” of brain function, wrote study author Alex Pollen at the University of California, San Francisco and colleagues.

The puzzle? Compared to our primate relatives, we have only twice the number of dopamine neurons, a measly increase given the expansion in brain size. By scanning the brains of humans and macaque monkeys—which are often used in neuroscience research—the team found that our prefrontal cortex is 18 times larger than a macaque’s, and our striatum has ballooned roughly 7 times.

In other words, each dopamine neuron must work harder to supply these larger brain regions.
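The back-of-envelope arithmetic, assuming innervation demand scales with the size of a target region and that neurons split the territory evenly:

```python
# Ratios reported in the study (human vs. macaque):
prefrontal_growth = 18  # prefrontal cortex is ~18x larger
striatum_growth = 7     # striatum is ~7x larger
dopamine_growth = 2     # but only ~2x the dopamine neurons

# Rough territory per dopamine neuron, relative to a macaque's:
print(prefrontal_growth / dopamine_growth)  # ~9x more prefrontal tissue each
print(striatum_growth / dopamine_growth)    # ~3.5x more striatal tissue each
```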

Though they have long “branches,” neurons aren’t passive wires. To connect and function normally, they require large amounts of energy. Most of this comes from the cells’ energy factories, pea-like structures called mitochondria. Neurons are highly efficient, but they degrade as we age and in neurodegenerative diseases such as Parkinson’s.

Dopamine neurons are also especially vulnerable to decay compared to other types of neurons because making dopamine generates toxic byproducts. Called reactive oxygen species, these chemicals are like tiny bullets that destroy the cells’ mitochondria and their outer wrappers.

Dopamine neurons have several natural methods of fighting back. They pump out antioxidants and have evolved ways to buffer toxic molecules. But eventually these defenses break down—especially in a bigger brain. In turn, the connection between the “reasoning” and “emotion” parts of the brain starts to fray.

Accumulating damage to these neural workhorses should be a nonstarter for building larger, more complex brains during evolution. Yet somehow our brains mostly skirted the trauma. The new study asked how.

Evolution in a Dish

The team grew 3D blobs made of stem cells from humans, chimpanzees, orangutans, and macaque monkeys. After a month, the hybrid mini-brains began pumping out dopamine.

It may sound like a strange strategy, but pooling cells from different species establishes a baseline for further genetic analysis. Because they’re all growing in the same environment in a single blob, any differences in a cell’s gene expression are likely due to the species it came from, rather than environmental conditions or other effects, explained the team.

The final pool included cells from eight humans, seven chimpanzees, one orangutan, and three macaque monkeys.

The cells worked well together, developing an overall pattern mimicking dopamine neurons around the striatum—ones that reach out to the frontal parts of the brain. After growing them for up to 100 days, the team measured gene activity in each cell to gauge which genes were turned on or off. In total, they analyzed over 105,000 cells.

Compared to other species, human stem cells seemed most versatile. They gave birth not just to dopamine neurons, but also other brain cell types. And they had another edge: Compared to chimpanzees, human dopamine neurons dialed up genes to tackle damaging reactive oxygen “bullets.”

Gene expression tests showed that human dopamine cells had far higher levels of several genes that break down the toxic chemicals than did cells from the non-human primates—in turn limiting damage to the sensitive neurons.
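Comparisons like this usually boil down to fold changes in average expression between species. A minimal sketch with hypothetical counts (the real analysis spans roughly 105,000 cells and many genes):

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical per-cell expression counts for one detox gene:
human_counts = rng.poisson(lam=8.0, size=1000)
chimp_counts = rng.poisson(lam=2.0, size=1000)

# Standard summary: log2 fold change of mean expression, with a
# pseudocount of 1 to avoid dividing by zero.
log2_fc = np.log2((human_counts.mean() + 1.0) / (chimp_counts.mean() + 1.0))
print(f"log2 fold change, human vs. chimp: {log2_fc:.2f}")  # ~1.6 here
```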

When challenged with a pesticide that elevates reactive oxygen species, human brain cells fought off the assault with a boost of a nurturing protein called brain-derived neurotrophic factor (BDNF). The molecule has long been a neuroscience darling for its ability to spur the birth and growth of new neurons and rewire old ones. Scientists have suggested BDNF may help ketamine reverse depressive symptoms by reshaping the brain’s networks.

In contrast, chimpanzee neurons from the same mini-brains couldn’t boost the protective protein when doused with the pesticide.

Keep on Fighting

The team analyzed the hybrid mini-brains at a very early stage of their development, when there was no chance of them developing any sort of sentience.

Their goal was to understand how our brains—especially dopamine neurons—have become resilient against damage and can tolerate the energy costs that come with a larger brain.

But the results could also point to ways of boosting cellular defense systems in people with dopamine-related disorders. Mutations in the protective genes found in the study, for example, may increase disease vulnerability in some people. Testing these genes in animal models paves the way for more targeted therapies against these disorders.

Knowing how dopamine works in the brain at a molecular level across species provides a snapshot of what sets us apart from our evolutionary cousins. This “can advance our understanding of the origins of human-enriched disorders and identify new therapeutic targets and strategies for drug development,” wrote the team.

Image Credit: Marek Pavlík on Unsplash
