Science Archives - Singularity Hub
https://singularityhub.com/tag/science/

ET May Look Nothing Like Life on Earth. Scientists Want a Universal Theory of Life to Describe It.
https://singularityhub.com/2024/12/24/et-may-look-nothing-like-life-on-earth-scientists-want-a-universal-theory-of-life-to-describe-it/ (December 24, 2024)

We have only one example of biology forming in the universe—life on Earth. But what if life can form in other ways? How do you look for alien life when you don’t know what alien life might look like?

These questions are preoccupying astrobiologists—scientists who look for life beyond Earth. Astrobiologists have attempted to come up with universal rules that govern the emergence of complex physical and biological systems both on Earth and beyond.

I’m an astronomer who has written extensively about astrobiology. Through my research, I’ve learned that the most abundant form of extraterrestrial life is likely to be microbial, since single cells can form more readily than large organisms. But just in case there’s advanced alien life out there, I’m on the international advisory council for the group designing messages to send to those civilizations.

Detecting Life Beyond Earth

Since the first discovery of an exoplanet in 1995, over 5,000 exoplanets, or planets orbiting other stars, have been found.

Many of these exoplanets are small and rocky, like Earth, and sit in the habitable zones of their stars. The habitable zone is the range of orbital distances from a star within which a planet could hold liquid water on its surface and thus support life as we on Earth know it.
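To make the scaling concrete, here is a minimal sketch (an illustration, not from the article) of how astronomers estimate habitable-zone boundaries: the zone shifts outward roughly with the square root of a star's luminosity. The solar boundary values of about 0.95 and 1.67 astronomical units are commonly cited conservative estimates and are assumptions here.

```python
# Rough habitable-zone estimate: distance scales as sqrt(stellar luminosity),
# because a planet's equilibrium temperature depends on L / d^2.
# The solar-system boundary values (~0.95-1.67 AU) are assumed, commonly cited
# conservative figures, used purely for illustration.
import math

def habitable_zone_au(luminosity_solar: float) -> tuple[float, float]:
    """Return (inner, outer) habitable-zone edges in AU for a star of given luminosity (solar units)."""
    inner_sun, outer_sun = 0.95, 1.67
    scale = math.sqrt(luminosity_solar)
    return inner_sun * scale, outer_sun * scale

print(habitable_zone_au(1.0))    # Sun-like star: roughly 0.95-1.67 AU
print(habitable_zone_au(0.02))   # dim red dwarf: the zone hugs the star
```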

Extrapolating from the exoplanets detected so far, astronomers estimate there are 300 million potential biological experiments in our galaxy—300 million places, including exoplanets and other bodies such as moons, with conditions suitable for biology to arise.

The uncertainty for researchers starts with the definition of life. It feels like defining life should be easy, since we know life when we see it, whether it’s a flying bird or a microbe moving in a drop of water. But scientists don’t agree on a definition, and some think a comprehensive definition might not be possible.

NASA defines life as a “self-sustaining chemical system capable of Darwinian evolution.” That means organisms with a complex chemical system that evolve by adapting to their environment. Darwinian evolution says that the survival of an organism depends on its fitness in its environment.

The evolution of life on Earth has progressed over billions of years from single-celled organisms to large animals and other species, including humans.

Exoplanets are remote and hundreds of millions of times fainter than their parent stars, so studying them is challenging. Astronomers can inspect the atmospheres and surfaces of Earth-like exoplanets using a method called spectroscopy to look for chemical signatures of life.

Spectroscopy might detect signatures of oxygen in a planet’s atmosphere—a gas that microbes known as cyanobacteria (blue-green algae) began producing via photosynthesis on Earth several billion years ago—or chlorophyll signatures, which would indicate plant life.

NASA’s definition of life leads to some important but unanswered questions. Is Darwinian evolution universal? What chemical reactions can lead to biology off Earth?

Evolution and Complexity

All life on Earth, from a fungal spore to a blue whale, evolved from a microbial last common ancestor about four billion years ago.

The same chemical processes are seen in all living organisms on Earth, and those processes might be universal. They also may be radically different elsewhere.

In October 2024, a diverse group of scientists gathered to think outside the box on evolution. They wanted to step back and explore what sorts of processes created order in the universe—biological or not—to figure out how to study the emergence of life totally unlike life on Earth.

Two researchers present argued that complex systems of chemicals or minerals, when in environments that allow some configurations to persist better than others, evolve to store larger amounts of information. As time goes by, the system will grow more diverse and complex, gaining the functions needed for survival, through a kind of natural selection.

Minerals are an example of a nonliving system that has increased in diversity and complexity over billions of years. Image Credit: Doug Bowman, CC BY

They speculated that there might be a law to describe the evolution of a wide variety of physical systems. Biological evolution through natural selection would be just one example of this broader law.

In biology, information refers to the instructions stored in the sequence of nucleotides on a DNA molecule, which collectively make up an organism’s genome and dictate what the organism looks like and how it functions.

If you define complexity in terms of information theory, natural selection will cause a genome to grow more complex as it stores more information about its environment.

Complexity might be useful in measuring the boundary between life and non-life.

However, it’s wrong to conclude that animals are more complex than microbes. Biological information increases with genome size, but evolutionary information density drops. Evolutionary information density is the fraction of functional genes within the genome, or the fraction of the total genetic material that expresses fitness for the environment.

Organisms that people think of as primitive, such as bacteria, have genomes with high information density and so appear better designed than the genomes of plants or animals.
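As a toy illustration of this “information density” idea, the fraction of a genome that actually codes for protein can be compared across organisms. The sketch below uses rough, commonly cited figures for E. coli and humans; they are assumptions for illustration, not numbers from the article.

```python
# Toy comparison of coding (information) density: the fraction of the genome
# that encodes protein. Figures are rough, commonly cited approximations.
genomes = {
    "E. coli (bacterium)": (4.6e6, 4.0e6),  # (total base pairs, protein-coding base pairs)
    "human":               (3.2e9, 5.0e7),
}

for name, (total_bp, coding_bp) in genomes.items():
    print(f"{name}: coding fraction ~ {coding_bp / total_bp:.1%}")

# A bacterial genome is mostly coding; the human genome is mostly non-coding,
# even though it stores far more information overall.
```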

A universal theory of life is still elusive. Such a theory would include the concepts of complexity and information storage, but it would not be tied to DNA or the particular kinds of cells we find in terrestrial biology.

Implications for the Search for Extraterrestrial Life

Researchers have explored alternatives to terrestrial biochemistry. All known living organisms, from bacteria to humans, contain water—the solvent essential for life on Earth. A solvent is a liquid medium that facilitates the chemical reactions from which life could emerge. But life could potentially emerge in other solvents, too.

Astrobiologists William Bains and Sara Seager have explored thousands of molecules that might be associated with life. Plausible solvents include sulfuric acid, ammonia, liquid carbon dioxide, and even liquid sulfur.

Alien life might not be based on carbon, which forms the backbone of all life’s essential molecules—at least here on Earth. It might not even need a planet to survive.

Advanced forms of life on alien planets could be so strange that they’re unrecognizable. As astrobiologists try to detect life off Earth, they’ll need to be creative.

One strategy is to measure mineral signatures on the rocky surfaces of exoplanets, since mineral diversity tracks terrestrial biological evolution. As life evolved on Earth, it used and created minerals for exoskeletons and habitats. The hundred minerals present when life first formed have grown to about 5,000 today.

For example, zircons are simple silicate crystals that date back to the time before life started. A zircon found in Australia is the oldest known piece of Earth’s crust. But other minerals, such as apatite, a complex calcium phosphate mineral, are created by biology. Apatite is a primary ingredient in bones, teeth, and fish scales.

Another strategy for finding life unlike that on Earth is to detect evidence of a civilization, such as artificial lights, or the industrial pollutant nitrogen dioxide in the atmosphere. These are examples of tracers of intelligent life called technosignatures.

It’s unclear how and when a first detection of life beyond Earth will happen. It might be within the solar system, or by sniffing exoplanet atmospheres, or by detecting artificial radio signals from a distant civilization.

The search is a twisting road, not a straightforward path. And that’s for life as we know it—for life as we don’t know it, all bets are off.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: NASA’s Goddard Space Flight Center/Francis Reddy

The Secret to Predicting How Your Brain Will Age May Be in Your Blood
https://singularityhub.com/2024/12/13/the-secret-to-predicting-how-your-brain-is-aging-may-be-in-your-blood/ (December 13, 2024)

Brain aging occurs in distinctive phases. Its trajectory could be hidden in our blood—paving the way for early diagnosis and intervention.

A new study published in Nature Aging analyzed brain imaging data from nearly 11,000 healthy adults, middle-aged and older, using AI to gauge their “brain age.” Roughly half of participants had their blood proteins analyzed to fish out those related to aging.

Scientists have long looked for the markers of brain aging in blood proteins, but this study had a unique twist. Rather than mapping protein profiles to a person’s chronological age—the number of years on your birthday card—they used biological brain age, which better reflects the actual working state of the brain as the clock ticks on.

Thirteen proteins popped up—eight associated with faster brain aging and five that slowed down the clock. Most alter the brain’s ability to handle inflammation or are involved in cells’ ability to form connections.

From these, three unique “signatures” emerged at 57, 70, and 78 years of age. Each showed a combination of proteins in the blood marking a distinct phase of brain aging. Those related to neuron metabolism peaked early, while others spurring inflammation were more dominant in the twilight years.

These spikes signal a change in the way the brain functions with age. They may be points of intervention, wrote the authors. Rather than relying on brain scans, which aren’t often available to many people, the study suggests that a blood test for these proteins could one day be an easy way to track brain health as we age.

The protein markers could also help us learn to prevent age-related brain disorders, such as dementia, Alzheimer’s disease, stroke, or problems with movement. Early diagnosis is key. Although the protein “hallmarks” don’t test for the disorders directly, they offer insight into the brain’s biological age, which often—but not always—correlates with signs of aging.

The study helps bridge gaps in our understanding of how brains age, the team wrote.

Treasure Trove

Many people know folks who are far sharper than expected at their age. A dear relative of mine, now in their mid-80s, eagerly adopted ChatGPT, AI-assisted hearing aids, and “Ok Google.” Their eyes light up anytime they get to try a new technology. Meanwhile, I watched another relative—roughly the same age—rapidly lose their wit, sharp memory, and eventually, the ability to realize they were no longer logical.

My experiences are hardly unique. With the world rapidly aging, many of us will bear witness to, and experience, the brain aging process. Projections suggest that by 2050, over 1.5 billion people will be 65 or older, with many potentially experiencing age-related memory or cognitive problems.

But chronological age doesn’t reflect the brain’s actual functions. For years, scientists studying longevity have focused on “biological age” to gauge bodily functions, rather than the year on your birth certificate. This has led to the development of multiple aging clocks, with each measuring a slightly different aspect of cell aging. Hundreds of these clocks are now being tested, as clinical trials use them to gauge the efficacy of potential anti-aging treatments.

Many of the clocks were built by taking tiny samples from the body and analyzing certain gene expression patterns linked to the aging process. It’s tough to do that with the brain. Instead, scientists have largely relied on brain scans, showing structure and connectivity across regions, to build “brain clocks.” These networks gradually erode as we age.

The studies calculate the “brain age gap”— the difference between the brain’s structural integrity and your actual age. A ten-year gap, for example, means your brain’s networks are more similar to those of people a decade younger, or older, than you.

Most studies have had a small number of participants. The new study tapped into the UK Biobank, a comprehensive dataset of roughly half a million people with regular checkups—including brain scans and blood draws—offering up a deluge of data for analysis.

The Brain Age Gap

Using machine learning, the study first sorted through brain scans of almost 11,000 people aged 45 to 82 to calculate their biological brain age. The AI model was trained on hundreds of structural features of the brain, such as overall size, thickness of the cortex—the outermost region—and the amount and integrity of white matter.

They then calculated the brain age gap for each person. On average, the gap was roughly three years, swinging both ways, meaning some people had either a slightly “younger” or “older” brain.
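For readers who want to see the mechanics, below is a minimal sketch of the general “brain age” approach: train a regression model on structural brain features, then subtract chronological age from the prediction to get the gap. The synthetic data, feature count, and choice of model are assumptions for illustration; this is not the study’s actual pipeline.

```python
# Minimal illustration of a "brain age" model: predict age from structural brain
# features, then take (predicted brain age - chronological age) as the gap.
# Data here are synthetic; features, model, and sizes are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_people, n_features = 1000, 200
age = rng.uniform(45, 82, n_people)                  # chronological ages, matching the study's range
X = rng.normal(size=(n_people, n_features))          # stand-ins for cortical thickness, volumes, etc.
X[:, :20] += 0.05 * (age[:, None] - age.mean())      # let some features drift with age

X_train, X_test, age_train, age_test = train_test_split(X, age, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, age_train)

brain_age_gap = model.predict(X_test) - age_test     # positive gap = "older"-looking brain
print(f"mean absolute gap: {np.abs(brain_age_gap).mean():.1f} years")
```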

Next, the team tried to predict the brain age gap by measuring proteins in plasma, the liquid part of blood. Longevity research in mice has uncovered many plasma proteins that age or rejuvenate the brain.

After screening nearly 3,000 plasma proteins from 4,696 people, they matched each person’s protein profile to the participant’s brain age. They found 13 proteins associated with the brain age gap, with most involved in inflammation, movement, and cognition.

Two proteins particularly stood out.

One called Brevican, or BCAN, helps maintain the brain’s wiring and overall structure and supports learning and memory. The protein dwindles in Alzheimer’s disease. Higher levels, in contrast, were associated with slower brain aging and lower risk of dementia and stroke.

The other protein, growth differentiation factor 15 (GDF15), is released by the body when it senses damage. Higher levels correlated with a higher risk of age-related brain disease, likely because it sparks chronic inflammation—a “hallmark” of aging.

There was also a surprising result.

Plasma protein levels didn’t change linearly with age. Instead, changes peaked at three chronological ages—57, 70, and 78—with each stage marking a distinctive phase of brain aging.

At 57, for example, proteins related to brain metabolism and wound healing changed markedly, suggesting early molecular signs of brain aging. By 70, proteins that support the brain’s ability to rewire itself—some strongly associated with dementia and stroke—changed rapidly. Another peak, at 78, showed protein changes mostly related to inflammation and immunity.

“Our findings thus emphasize the importance and necessity of intervention and prevention at brain age 70 years to reduce the risk of multiple brain disorders,” wrote the authors.

To be clear: These are early results. The participants are largely of European ancestry, and the results may not translate to other populations. The 13 proteins also need further testing in animals before any can be validated as biomarkers. But the study paves the way.

Their results, the authors conclude, suggest the possibility of earlier, simpler diagnosis of age-related brain disorders and the development of personalized therapies to treat them.

Thousands of Undiscovered Genes May Be Hidden in DNA ‘Dark Matter’
https://singularityhub.com/2024/12/09/thousands-of-new-genes-may-be-hidden-in-dna-dark-matter/ (December 9, 2024)

Thousands of new genes are hidden inside the “dark matter” of our genome.

Previously thought to be noise left over from evolution, a new study found that some of these tiny DNA snippets can make miniproteins—potentially opening a new universe of treatments, from vaccines to immunotherapies for deadly brain cancers.

The preprint, not yet peer-reviewed, is the latest from a global consortium that hunts down potential new genes. Ever since the Human Genome Project completed its first draft at the turn of the century, scientists have tried to decipher the genetic book of life. Buried within the four genetic letters—A, T, C, and G—and the proteins they encode is a wealth of information that could help tackle our most frustrating medical foes, such as cancer.

The Human Genome Project’s initial findings came as a surprise. Scientists found fewer than 30,000 genes that build our bodies and keep them running—roughly a third of the number previously predicted. Now, roughly 20 years later, as the technologies that sequence our DNA or map proteins have become increasingly sophisticated, scientists are asking: “What have we missed?”

The new study filled the gap by digging into relatively unexplored portions of the genome. Called “non-coding,” these parts haven’t yet been linked to any proteins. Combining several existing datasets, the team zeroed in on thousands of potential new genes that make roughly 3,000 miniproteins.

Whether these proteins are functional remains to be tested, but initial studies suggest some are involved in a deadly childhood brain cancer. The team is releasing their tools and results to the wider scientific community for further exploration. The platform isn’t just limited to deciphering the human genome; it can delve into the genetic blueprint of other animals and plants as well.

Even though mysteries remain, the results “help provide a more complete picture of the coding portion of the genome,” Ami Bhatt at Stanford University told Science.

What’s in a Gene?

A genome is like a book without punctuation. Sequencing one is relatively easy today, thanks to cheaper costs and higher efficiency. Making sense of it is another matter.

Ever since the Human Genome Project, scientists have searched our genetic blueprint to find the “words,” or genes, that make proteins. These DNA words are further broken down into three-letter codons, each one encoding a specific amino acid—the building block of a protein.

A gene, when turned on, is transcribed into messenger RNA. These molecules shuttle genetic information from DNA to the cell’s protein-making factory, called the ribosome. Picture it as a sliced bun, with an RNA molecule running through it like a piece of bacon.

When first defining a gene, scientists focus on open reading frames. These are made of specific DNA sequences that dictate where a gene starts and stops. Like a search function, the framework scans the genome for potential genes, which are then validated with lab experiments based on myriad criteria. These include whether they can make proteins of a certain size—more than 100 amino acids. Sequences that meet the mark are compiled into GENCODE, an international database of officially recognized genes.
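As a concrete illustration of that scanning step, here is a small, self-contained sketch of an open-reading-frame finder. It scans all six reading frames for a start codon (ATG) followed by an in-frame stop codon and keeps frames encoding at least 100 amino acids; the cutoff and toy sequence are illustrative, and real gene annotation applies many more criteria.

```python
# Toy open-reading-frame (ORF) scanner: look in all six reading frames for a
# start codon (ATG) followed by an in-frame stop codon, and keep ORFs that
# encode at least `min_aa` amino acids. Real annotation pipelines use many
# additional criteria; this only illustrates the basic scan.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def reverse_complement(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_orfs(seq: str, min_aa: int = 100) -> list[tuple[int, int, str]]:
    """Return (start, end, strand) for ORFs of at least min_aa amino acids.

    Coordinates are relative to the scanned strand.
    """
    orfs = []
    for strand, s in (("+", seq), ("-", reverse_complement(seq))):
        for frame in range(3):
            i = frame
            while i + 3 <= len(s):
                if s[i:i + 3] == "ATG":                    # candidate start codon
                    for j in range(i + 3, len(s) - 2, 3):
                        if s[j:j + 3] in STOP_CODONS:      # first in-frame stop
                            if (j - i) // 3 >= min_aa:     # codons from start up to the stop
                                orfs.append((i, j + 3, strand))
                            i = j                          # resume scanning after this stop
                            break
                i += 3
    return orfs

# Usage with a toy sequence: one ORF of 121 codons (ATG + 120 x GCT) before a stop.
print(find_orfs("ATG" + "GCT" * 120 + "TAA"))
```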

Genes that encode proteins have attracted the most attention because they aid our understanding of disease and inspire ways to treat it. But much of our genome is “non-coding,” in that large sections of it don’t make any known proteins.

For years, these chunks of DNA were considered junk—the defunct remains of our evolutionary past. Recent studies, however, have begun revealing hidden value. Some bits regulate when genes turn on or off. Others, such as telomeres, protect against the degradation of DNA as it replicates during cell division and ward off aging.

Still, the dogma was that these sequences don’t make proteins.

A New Lens

Recent evidence is piling up that non-coding areas do have protein-making segments that affect health.

One study found that a small missing section in supposedly non-coding areas caused inherited bowel troubles in infants. In mice genetically engineered to mimic the same problem, restoring the DNA snippet—not yet defined as a gene—reduced their symptoms. The results highlight the need to go beyond known protein-coding genes to explain clinical findings, the authors wrote.

Dubbed non-canonical open reading frames (ncORFs), or “maybe-genes,” these snippets have popped up across human cell types and diseases, suggesting they have physiological roles.

In 2022, the consortium behind the new study began peeking into potential functions, hoping to broaden our genetic vocabulary. Rather than sequencing the genome, they looked at datasets that sequenced RNA as it was being turned into proteins in the ribosome.

The method captures the actual output of the genome—even extremely short amino acid chains normally thought too small to make proteins. Their search produced a catalog of over 7,000 human “maybe-genes,” some of which made microproteins that were eventually detected inside cancer and heart cells.

But overall, at that time “we did not focus on the questions of protein expression or functionality,” wrote the team. So, they broadened their collaboration in the new study, welcoming specialists in protein science from over 20 institutions across the globe to make sense of the “maybe-genes.”

They also included several resources that provide protein databases from various experiments—such as the Human Proteome Organization and the PeptideAtlas—and added data from published experiments that use the human immune system to detect protein fragments.

In all, the team analyzed over 7,000 “maybe-genes” from a variety of cells: Healthy, cancerous, and also immortal cell lines grown in the lab. At least a quarter of these “maybe-genes” translated into over 3,000 miniproteins. These are far smaller than normal proteins and have a unique amino acid makeup. They also seem to be more attuned to parts of the immune system—meaning they could potentially help scientists develop vaccines, autoimmune treatments, or immunotherapies.

Some of these newly found miniproteins may not have a biological role at all. But the study gives scientists a new way to interpret potential functions. For quality control, the team organized each miniprotein into a different tier, based on the amount of evidence from experiments, and integrated them into an existing database for others to explore.

We’re just beginning to probe our genome’s dark matter. Many questions remain.

“A unique capacity of our multi-consortium collaboration is the ability to develop consensus on the key challenges” that we feel need answers, wrote the team.

For example, some experiments used cancer cells, meaning that certain “maybe-genes” might only be active in those cells—but not in normal ones. Should they be called genes?

From here, deep learning and other AI methods may help speed up analysis. Although annotating genes is “historically rooted in manual inspection” of the data, wrote the authors, AI can churn through multiple datasets far faster, if only as a first pass to find new genes.

How many might scientists discover? “50,000 is in the realm of possibility,” study author Thomas Martinez told Science.

Image Credit: Miroslaw Miras from Pixabay

Why Are Our Brains So Big? Because They Excel at Damage Control
https://singularityhub.com/2024/11/26/why-are-our-brains-so-big-because-they-excel-at-damage-control/ (November 26, 2024)

Compared to other primates, our brains are exceptionally large. Why?

A new study comparing neurons from different primates pinpointed several genetic changes unique to humans that buffer our brains’ ability to handle everyday wear and tear. Dubbed “evolved neuroprotection,” the findings paint a picture of how our large brains gained their size, wiring patterns, and computational efficiency.

It’s not just about looking into the past. The results could also inspire new ideas to tackle schizophrenia, Parkinson’s disease, and addiction—disorders linked to the gradual erosion of one type of brain cell. Understanding these wirings may also spur artificial brains that learn like ours.

The results haven’t yet been reviewed by other scientists. But to Andre Sousa at the University of Wisconsin-Madison, who wasn’t involved in the work, the findings can help us understand “human brain evolution and all the potentially negative and positive things that come with it.”

Bigger Brain, Bigger Price

Six million years ago, we split from a common ancestor with our closest evolutionary relative, the chimpanzee.

Our brains rapidly exploded in size—but crucially, only in certain regions. One of these was at the front of the brain. Called the prefrontal cortex, it’s an “executive control” center that lets us reason, make difficult decisions, and exercise self-control. Another region, the striatum, buried deep in the brain, processes emotions and gives us the ability to move easily with just a thought.

The two regions are in ready communication, and their chatter may give rise to parts of our intellect and social interactions, such as theory of mind—where we can gauge another person’s emotions, beliefs, and intentions. Dopamine neurons, a type of brain cell, bridge this connection.

They may sound familiar. Dopamine, which these neurons pump out, is known as the “feel-good” molecule. But they do so much more. Dopamine neurons are spread across the entire brain and often dial the activity of certain neural networks up or down, including those regulating emotion and movement. Dopamine neurons are like light dimmers—rather than brain networks flipping on or off like a simple switch, the neurons fine-tune the level of action.

These cells “coordinate multiple aspects” of brain function, wrote study author Alex Pollen at the University of California, San Francisco and colleagues.

The puzzle? Compared to our primate relatives, we only have twice the number of dopamine neurons, a measly increase compared to the expansion of brain size. By scanning the brains of humans and macaque monkeys—which are often used in neuroscience research—the team found that our prefrontal cortex is 18 times larger, and the striatum has ballooned roughly 7 times.

In other words, each dopamine neuron must work harder to supply these larger brain regions.
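A quick back-of-the-envelope calculation with the study’s own figures makes the mismatch concrete:

```python
# Using the expansion factors quoted above (human vs. macaque):
prefrontal_expansion      = 18   # prefrontal cortex grew ~18-fold
striatum_expansion        = 7    # striatum grew ~7-fold
dopamine_neuron_expansion = 2    # dopamine neuron count only roughly doubled

print(prefrontal_expansion / dopamine_neuron_expansion)  # ~9x more prefrontal tissue per dopamine neuron
print(striatum_expansion / dopamine_neuron_expansion)    # ~3.5x more striatum per dopamine neuron
```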

Though they have long “branches,” neurons aren’t passive wires. To connect and function normally, they require high amounts of energy. Most of this comes from the cells’ energy factories, pea-like structures called mitochondria. While highly efficient, neurons degrade as we age or in cases of neurodegeneration, including Parkinson’s disease.

Dopamine neurons are also especially vulnerable to decay compared to other types of neurons because making dopamine generates toxic byproducts. Called reactive oxygen species, these chemicals are like tiny bullets that destroy the cells’ mitochondria and their outer wrappers.

Dopamine neurons have several natural methods of fighting back. They pump out antioxidants and have evolved ways to buffer toxic molecules. But eventually these defenses break down—especially in a bigger brain. In turn, the connection between the “reasoning” and “emotion” parts of the brain starts to fray.

Accumulating damage to these neural workhorses should be a nonstarter for building larger, more complex brains during evolution. Yet somehow our brains mostly skirted the trauma. The new study asked how.

Evolution in a Dish

The team grew 3D blobs made of stem cells from human, chimpanzee, orangutan, and macaque monkeys. After a month, the hybrid mini-brains began pumping out dopamine.

It may sound like a strange strategy, but pooling cells from different species establishes a baseline for further genetic analysis. Because they’re all growing in the same environment in a single blob, any differences in a cell’s gene expression are likely due to the species it came from, rather than environmental conditions or other effects, explained the team.

The final pool included cells from eight humans, seven chimpanzees, one orangutan, and three macaque monkeys.

The cells worked well together, developing an overall pattern mimicking dopamine neurons around the striatum—ones that reach out to the frontal parts of the brain. After growing them for up to 100 days, the team captured genes from each cell to gauge which ones were turned on or off. In total, they analyzed over 105,000 cells.

Compared to other species, human stem cells seemed most versatile. They gave birth not just to dopamine neurons, but also other brain cell types. And they had another edge: Compared to chimpanzees, human dopamine neurons dialed up genes to tackle damaging reactive oxygen “bullets.”

Gene expression tests showed that human dopamine cells had far higher levels of several genes that break down the toxic chemicals compared to other non-human primates—in turn limiting their damage to the sensitive neurons.

When challenged with a pesticide that elevates reactive oxygen species, human brain cells fought off the assault with a boost of a nurturing protein called brain-derived neurotrophic factor (BDNF). The molecule has long been a neuroscience darling for its ability to spur the birth and growth of new neurons and rewire old ones. Scientists have suggested BDNF may help ketamine reverse depressive symptoms by reshaping the brain’s networks.

In contrast, chimpanzee neurons from the same mini-brains couldn’t boost the protective protein when doused with the pesticide.

Keep on Fighting

The team analyzed the hybrid mini-brains at a very early stage of their development, when there was no chance of them developing any sort of sentience.

Their goal was to understand how our brains—especially dopamine neurons—have become resilient against damage and can tolerate the energy costs that come with a larger brain.

But the results could also boost cellular defense systems in people with dopamine-related disorders. Mutations in protective genes found in the study, for example, may increase disease vulnerability in some people. Testing them in animal models paves the way for more targeted therapies against these disorders.

Knowing how dopamine works in the brain at a molecular level across species provides a snapshot of what sets us apart from our evolutionary cousins. This “can advance our understanding of the origins of human-enriched disorders and identify new therapeutic targets and strategies for drug development,” wrote the team.

Image Credit: Marek Pavlík on Unsplash

A 4.45-Billion-Year-Old Crystal From Mars Reveals the Planet Had Water From the Beginning
https://singularityhub.com/2024/11/25/a-4-45-billion-year-old-crystal-from-mars-reveals-the-planet-had-water-from-the-beginning/ (November 25, 2024)

Water is ubiquitous on Earth—about 70 percent of Earth’s surface is covered by the stuff. Water is in the air, on the surface, and inside rocks. Geologic evidence suggests water has been stable on Earth since about 4.3 billion years ago.

The history of water on early Mars is less certain. Determining when water first appeared, where, and for how long, are all burning questions that drive Mars exploration. If Mars was once habitable, some amount of water was required.

My colleagues and I studied the mineral zircon in a meteorite from Mars and found evidence that water was present when the zircon crystal formed 4.45 billion years ago. Our results, published in the journal Science Advances, may represent the oldest evidence for water on Mars.

A Wet Red Planet

Water has long been recognized to have played an important role in early Martian history. To place our results in a broader context, let’s first consider what “early Mars” means in terms of the Martian geological timescale and then consider the different ways to look for water on Mars.

Like Earth, Mars formed about 4.5 billion years ago. The history of Mars has four geological periods. These are the Amazonian (from today back to 3 billion years), the Hesperian (3 billion to 3.7 billion years ago), the Noachian (3.7 billion to 4.1 billion years ago) and the Pre-Noachian (4.1 billion to about 4.5 billion years ago).

Chart: The Conversation | Created with Datawrapper

Evidence for water on Mars was first reported in the 1970s when NASA’s Mariner 9 spacecraft captured images of river valleys on the Martian surface. Later orbital missions, including Mars Global Surveyor and Mars Express, detected the widespread presence of hydrated clay minerals on the surface. These would have needed water.

The Martian river valleys and clay minerals are mainly found in Noachian terrains, which cover about 45 percent of Mars. In addition, orbiters also found large flood channels—called outflow channels—in Hesperian terrains. These suggest the short-lived presence of water on the surface, perhaps from groundwater release.

Most reports of water on Mars are in materials or terrains older than 3 billion years. More recent than that, there isn’t much evidence for stable liquid water on Mars.

But what about during the Pre-Noachian? When did water first show up on Mars?

Kasei Valles is the largest outflow channel on Mars. Image Credit: NASA/JPL/Arizona State University, R. Luk

A Window to Pre-Noachian Mars

There are three ways to hunt for water on Mars. The first is using observations of the surface made by orbiting spacecraft. The second is using ground-based observations such as those taken by Mars rovers.

The third way is to study Martian meteorites that have landed on Earth, which is what we did.

In fact, the only Pre-Noachian material we have available to study directly is found in meteorites from Mars. A small number of all meteorites that have landed on Earth have come from our neighboring planet.

An even smaller subset of those meteorites, believed to have been ejected from Mars during a single asteroid impact, contain Pre-Noachian material.

The “poster child” of this group is an extraordinary rock called NWA7034, or Black Beauty.

Black Beauty is a famous Martian meteorite made of broken-up surface material, or regolith. In addition to rock fragments, it contains zircons that formed from 4.48 billion to 4.43 billion years ago. These are the oldest pieces of Mars known.

While studying trace elements in one of these ancient zircons, we found evidence of hydrothermal processes—meaning it was exposed to hot water when it formed in the distant past.

Trace Elements, Water, and a Connection to Ore Deposits

The zircon we studied is 4.45 billion years old. Within it, iron, aluminum, and sodium are preserved in abundance patterns like concentric layers, similar to an onion.

This pattern, called oscillatory zoning, indicates that incorporation of these elements into the zircon occurred during its igneous history, in magma.

Iron elemental zoning in the 4.45-billion-year-old Martian zircon. Darker blue areas indicate the highest iron abundances. Image Credit: Aaron Cavosie and Jack Gillespie

The problem is that iron, aluminum, and sodium aren’t normally found in crystalline igneous zircon—so how did these elements end up in the Martian zircon?

The answer is hot water.

In Earth rocks, finding zircon with growth zoning patterns for elements like iron, aluminum, and sodium is rare. One of the only places where it has been described is from Olympic Dam in South Australia, a giant copper, uranium, and gold deposit.

The metals in places like Olympic Dam were concentrated by hydrothermal (hot water) systems moving through rocks during magmatism.

Hydrothermal systems form anywhere that hot water, heated by volcanic plumbing systems, moves through rocks. Spectacular geysers at places like Yellowstone National Park in the United States form when hydrothermal water erupts at Earth’s surface.

Finding a hydrothermal Martian zircon raises the intriguing possibility of ore deposits forming on early Mars.

Previous studies have proposed a wet Pre-Noachian Mars. Unusual oxygen isotope ratios in a 4.43-billion-year-old Martian zircon were previously interpreted as evidence for an early hydrosphere. It has even been suggested that Mars may have had an early global ocean 4.45 billion years ago.

The big picture from our study is that magmatic hydrothermal systems were active during the early formation of Mars’ crust 4.45 billion years ago.

It’s not clear whether this means surface water was stable at this time, but we think it’s possible. What is clear is that the crust of Mars, like Earth, had water shortly after it formed—a necessary ingredient for habitability.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: JPL-Caltech/NASA

This Ambitious Project Wants to Sequence the DNA of All Complex Life on Earth
https://singularityhub.com/2024/11/11/this-ambitious-project-wants-to-sequence-the-dna-of-all-complex-life-on-earth/ (November 11, 2024)

“We’re only just beginning to understand the full majesty of life on Earth,” wrote the founding members of the Earth BioGenome Project in 2018. The ambitious project raised eyebrows when first announced. It seeks to genetically profile over a million plants, animals, and fungi. Documenting these genomes is the first step to building an atlas of complex life on Earth.

Many living species remain mysterious to science. A database resulting from the project would be a precious resource for monitoring biodiversity. It could also shed light on the genetic “dark matter” of complex life to inspire new biomaterials, medicines, or spark ideas for synthetic biology. Other insights could tailor agricultural practices to ramp up food production and feed a growing global population.

In other words, digging into living creatures’ genetic data is set to unveil “unimaginable biological secrets,” wrote the team.

The problem? A hefty price tag. With an estimated cost of $4.7 billion, even the founders of the project called it a moonshot. However, against all odds, the project has made progress, with 3,000 genomes already sequenced and 10,000 more species expected by 2026.

While lagging its original goal of sequencing roughly 1.7 million genomes in a decade, the project still hopes to hit this goal by 2032—later than the original goalpost, but with a much lower price tag thanks to more efficient DNA sequencing technologies.

Meanwhile, the international team has also built infrastructure to share gene sequencing data, and machine learning methods are further helping the consortium analyze thousands of datasets—helping characterize new species and monitor DNA data for endangered ones.

Expanding the Scope

Genetic material is everywhere. It’s an abundant resource for making sense of life on Earth. As genetic sequencing becomes faster, cheaper, and more reliable, recent studies have begun digging into information represented by DNA from species across the globe.

One method, dubbed metagenomics, captures and analyzes all the microbial DNA gathered from a particular environment—from city sewers to boiling hot springs—to paint a broad genetic picture of the bacteria living there. Rather than bacteria, the Earth BioGenome Project, or EBP, is aiming to sequence the genomes of individual eukaryotic creatures—basically, those that keep most of their DNA in a nut-like structure, or nucleus, inside each cell.

Humans, plants, fungi, and other animals all fall into this group. In one estimate, there are roughly 10 to 15 million eukaryotic species on our planet. But just a little over two million have been documented.

Sequencing DNA from eukaryotic cells could vastly expand our knowledge of Earth’s genetic diversity. Such a database could also be a treasure trove for synthetic biology. Scientists have already tinkered with the genetic blueprints of life in bacteria and yeast cells. Deciphering—and then reprogramming—their genes has led to advances such as coaxing bacteria cells to pump out biofuels, degradable materials, and medicines such as insulin.

Charting eukaryotes’ genomes could further inspire new materials or medicines. For example, cytarabine, a chemotherapy drug, was initially isolated from a sponge-like sea creature and approved by the FDA to treat blood cancers that spread to the brain. Other plant-derived medications are already being used to tackle viral infections or to control pain. From nearly 400,000 different plant species, hundreds of medicines have already been approved and are on the market. Similarly, deciphering plant genetics has galvanized ideas for new biodegradable materials and biofuels.

Genetic sequences from complex organisms can “provide the raw materials for genome engineering and synthetic biology to produce valuable bioproducts at industrial scale,” wrote the team.

Medical and industrial uses aside, the effort also documents biodiversity. Creating a DNA digital library of all known eukaryotic life can pinpoint which species are most at risk—including species not yet fully characterized—providing data for earlier intervention.

“For the first time in history, it is possible to efficiently sequence the genomes of all known species and to use genomics to help discover the remaining 80 to 90 percent of species that are currently hidden from science,” wrote the team.

Soldiering On

The project has three phases.

Phase one lays the groundwork. It establishes the species to be sequenced, builds digital infrastructure for data sharing, and develops an analysis toolkit. The most important goal is to build a reference DNA sequence for species similar in genetic makeup—that is, those in a “family.”

Reference genomes are incredibly important for genetic studies. True to their name, scientists rely on them as a baseline when comparing genetic variants—for example, to track down genes related to inherited diseases in humans or sugar content in different variants of crops.

Phase two of the project will begin analyzing the sequencing data and form strategies to maintain biodiversity. The last phase integrates all previous work to potentially revise how different species fit into our evolutionary tree. Scientists will also integrate climate data into this phase and tease out the impacts of climate change on biodiversity.

The international project began in 2018 and included the US, UK, Denmark, and China, with most DNA specimens sequenced at facilities in China and the UK. Today, 28 countries spanning six continents have signed on. Most DNA material isolated from individual species is directly sequenced on site, reducing the cost of transportation while increasing fidelity.

Not all participants have easy access to DNA sequencing facilities. One institution, Wellcome Sanger, developed a portable DNA sequencing lab that could help scientists working in rural areas to capture the genetic blueprints of exotic plants and animals. The device sequenced the DNA of a type of sunflower with potential medicinal properties in Africa, among other specimens from exotic locations.

EBP follows in the footsteps of other global projects aiming to sequence the Earth’s microbes, such as the National Microbiome Initiative or the Earth Microbiome Project. Once also considered moonshots, these have secured funding from government agencies and private investments.

Despite the enthusiasm of its participants, EBP is still billions of dollars short of what it needs for full completion. But the project’s price tag—originally estimated at $4.7 billion—may turn out to be far lower.

Thanks to more efficient and cheaper genetic sequencing methods, the current cost of phase one is expected to be half the original estimate—around $265 million.

It’s still a hefty sum, but for participants, the resulting database and methods are worth it. “We now have a common forum to learn together about how to produce genomes with the highest possible quality,” Alexandre Aleixo at the Vale Institute of Technology, who participated in the project, told Science.

Given the influence bacterial genetics has already had on biomedicine and biofuels, it’s likely that deciphering eukaryote DNA can spur further inspiration. In the end, the project relies on a global collaboration to benefit humanity.

“The far-reaching potential benefits of creating an open digital repository of genomic information for life on Earth can be realized only by a coordinated international effort,” wrote the team.

Image Credit: M. Richter on Pixabay

The First Cells May Have Formed From Simple Fatty Bubbles Like These Ones
https://singularityhub.com/2024/11/07/the-first-cells-may-have-formed-from-simple-fatty-bubbles-like-these-ones/ (November 7, 2024)

The first spark of cellular life on Earth likely needed gift packaging.

Let me explain. With the holidays around the corner, we’re all beginning to order presents. Each is carefully packaged inside a box or bubble-wrapped envelope and addressed for shipping. Without packaging, items would tumble together in a chaotic mess and miss their destination.

Life’s early chemicals were, in a way, like these “presents.” They floated around in a primordial soup, eventually forming the longer molecules that make up life as we know it. But without a “wrapper” encapsulating them in individual packages, different molecules bumped into each other but eventually drifted away, missing the necessary connections to spark life.

In other words, cellular “wrappers,” or cell membranes, are key to packaging the molecular machinery of life together. Made of fatty molecules, these wrappers are the foundation of our cells and the basis of multicellular life. They keep bacteria and other pathogens at bay while triggering the biological mechanisms that power normal cellular functions.

Scientists have long debated how the first cell membranes formed. Their building blocks, long-chain lipids, were hard to find on early Earth. Shorter fatty molecules, on the other hand, were abundant. Now, a new study in Nature Chemistry offers a bridge between these short fatty molecules and the first primordial cells.

Led by Neal Devaraj at the University of California, San Diego, the team coaxed short fatty molecules into bubbles that can encapsulate biological molecules. The team then added modern RNA molecules to drive chemical reactions inside the bubbles—and watched the reactions work, similar to those in a functional cell.

The engineered cell membranes also resisted high concentrations of substances abundant in early Earth puddles that could damage their integrity, shielding molecular carriers of genetic information and allowing them to work normally.

The resulting protocells are the latest to probe the origins of life. To be clear, they only mimic parts of normal living cells. They don’t have the molecular machinery to replicate, and their wrappers are rudimentary compared to ours.

But the “fascinating” result “opens up a new avenue” for understanding how the first cells appeared, Sheref Mansy at the University of Trento, who was not involved in the study, told Science.

At the Beginning

The origins of life’s molecules are highly debated. But most scientists agree that life stemmed from three main ones: DNA, RNA, and amino acids (the building blocks of proteins).

Today, in most organisms, DNA stores the genetic blueprint, and RNA carries this genetic information to the cell’s protein-making factories. But many viruses store genes only in RNA, and studies of early life suggest RNA may have been the first carrier of inheritance. RNA can also spur chemical reactions—including ones that glue amino acids into different types of proteins.

But regardless of which molecule came first, “all life on Earth requires lipid membranes,” the authors of the new paper write.

Made of a double layer of fatty molecules, the modern cell membrane is a work of art. It’s the first defense against bacterial and viral invaders. It’s also dotted with protein “tunnels” that tweak the functions of cells—for example, helping brain cells encode memories or heart cells beat in sync. These living cellular walls also act as scaffolds for biochemical reactions that often dictate the fate of cells—if they live, die, or turn into “zombie cells” that contribute to aging.

Since they’re so important for biology, scientists have long wondered how the first cell membranes came about. What made up “the very first, primordial cell membrane-like structure on Earth before the emergence of life?” asked the authors.

Our cell membranes are built on long chains of lipids, but these have complex chemical structures and require multiple steps to synthesize—likely beyond what was possible on early Earth. In contrast, the first protocell membranes were likely formed from molecules already present, including short fatty acids that self-organized.

Back to the Future

Previously, the team found an amino acid that “staples” fatty acids together. Called cysteine, the molecule was likely prevalent in our planet’s primordial soup. In a computer simulation, adding cysteine to short fatty acids caused them to form synthetic membranes.

The new study built on those results in the lab.

The team added cysteine to two types of short lipids and watched as the amino acid gathered the lipids into bubbles within 30 minutes. The lipids were similar in length to those likely present on early Earth, and the molecular concentrations also mimicked those during the period.

The team next took a closer look with an electron microscope. The generated membranes were about as thick as those in normal cells and highly stable. Finally, the team simulated a hypothetical early-Earth scenario where RNA serves as the first genetic material.

“The RNA world hypothesis is accepted as one of the most plausible scenarios of the origin of life,” wrote the authors. This is partly because RNA can also act as an enzyme. These enzymes, dubbed ribozymes, can spark different chemical reactions—for example, ones that might stitch amino acids and lipids into bubbles. However, they need a duo of minerals—calcium and magnesium—to work. While these minerals were likely highly abundant on early Earth, in some cases they can damage artificial cell membranes.

But in several tests, the lab-grown protocells easily withstood the mineral onslaught. Meanwhile, the protocells showed they could generate chemical reactions using RNA, suggesting that short fatty molecules can build cell membranes in the primordial soup.

To Claudia Bonfio at the University of Cambridge, the study was “really, really cool and very well done.” But the mystery of life remains. Most fatty acids generated in the protocell aren’t found in modern cell membranes. A next step would be to show that the protocells can act more like normal ones—growing and dividing with a healthy metabolism.

But for now, the team is focused on deciphering the beginnings of cellular life. The work shows that reactions between simple chemicals in water can “assemble into giant” blobs, expanding the ways that protocell membranes can form, they wrote.

Image Credit: Max Kleinen on Unsplash

Did the Early Cosmos Inflate Like a Balloon? A Mirror Universe Going Backwards in Time May Be a Simpler Explanation
https://singularityhub.com/2024/10/29/did-the-early-cosmos-inflate-like-a-balloon-a-mirror-universe-going-backwards-in-time-may-be-a-simpler-explanation/ (October 29, 2024)

We live in a golden age for learning about the universe. Our most powerful telescopes have revealed that the cosmos is surprisingly simple on the largest visible scales. Likewise, our most powerful “microscope,” the Large Hadron Collider, has found no deviations from known physics on the tiniest scales.

These findings were not what most theorists expected. Today, the dominant theoretical approach combines string theory, a powerful mathematical framework with no successful physical predictions as yet, and “cosmic inflation”—the idea that, at a very early stage, the universe ballooned wildly in size. In combination, string theory and inflation predict the cosmos to be incredibly complex on tiny scales and completely chaotic on very large scales.

The nature of the expected complexity could take a bewildering variety of forms. On this basis, and despite the absence of observational evidence, many theorists promote the idea of a “multiverse”: an uncontrolled and unpredictable cosmos consisting of many universes, each with totally different physical properties and laws.

So far, the observations indicate exactly the opposite. What should we make of the discrepancy? One possibility is that the apparent simplicity of the universe is merely an accident of the limited range of scales we can probe today, and that when observations and experiments reach small enough or large enough scales, the asserted complexity will be revealed.

The other possibility is that the universe really is very simple and predictable on both the largest and smallest scales. I believe this possibility should be taken far more seriously. For, if it is true, we may be closer than we imagined to understanding the universe’s most basic puzzles. And some of the answers may already be staring us in the face.

The Trouble With String Theory and Inflation

The current orthodoxy is the culmination of decades of effort by thousands of serious theorists. According to string theory, the basic building blocks of the universe are minuscule, vibrating loops and pieces of sub-atomic string. As currently understood, the theory only works if there are more dimensions of space than the three we experience. So, string theorists assume that the reason we don’t detect them is that they are tiny and curled up.

Unfortunately, this makes string theory hard to test, since there are an almost unimaginable number of ways in which the small dimensions can be curled up, with each giving a different set of physical laws in the remaining, large dimensions.

Meanwhile, cosmic inflation is a scenario proposed in the 1980s to explain why the universe is so smooth and flat on the largest scales we can see. The idea is that the infant universe was small and lumpy, but an extreme burst of ultra-rapid expansion blew it up vastly in size, smoothing it out and flattening it to be consistent with what we see today.

Inflation is also popular because it potentially explains why the energy density in the early universe varied slightly from place to place. This is important because the denser regions would have later collapsed under their own gravity, seeding the formation of galaxies.

Over the past three decades, the density variations have been measured more and more accurately both by mapping the cosmic microwave background—the radiation from the big bang—and by mapping the three-dimensional distribution of galaxies.

In most models of inflation, the early extreme burst of expansion which smoothed and flattened the universe also generated long-wavelength gravitational waves—ripples in the fabric of space-time. Such waves, if observed, would be a “smoking gun” signal confirming that inflation actually took place. However, so far the observations have failed to detect any such signal. Instead, as the experiments have steadily improved, more and more models of inflation have been ruled out.

Furthermore, during inflation, different regions of space can experience very different amounts of expansion. On very large scales, this produces a multiverse of post-inflationary universes, each with different physical properties.

The history of the universe according to the model of cosmic inflation. Image Credit: Wikipedia, CC BY-SA

The inflation scenario is based on assumptions about the forms of energy present and the initial conditions. While these assumptions solve some puzzles, they create others. String and inflation theorists hope that somewhere in the vast inflationary multiverse, a region of space and time exists with just the right properties to match the universe we see.

However, even if this is true (and not one such model has yet been found), a fair comparison of theories should include an “Occam factor,” quantifying Occam’s razor, which penalizes theories with many parameters and possibilities over simpler and more predictive ones. Ignoring the Occam factor amounts to assuming that there is no alternative to the complex, unpredictive hypothesis—a claim I believe has little foundation.
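For readers who want the quantitative version, the Occam factor has a standard form in Bayesian model comparison; the schematic expression below uses textbook notation and is not taken from the article itself.

```latex
% Bayesian evidence for a model M given data D: the fit is averaged over the
% model's prior parameter space, which automatically penalizes models that
% spread their predictions over many parameters and possibilities.
\[
P(D \mid M) \;=\; \int P(D \mid \theta, M)\, P(\theta \mid M)\, \mathrm{d}\theta
\;\approx\;
\underbrace{P(D \mid \hat{\theta}, M)}_{\text{best-fit likelihood}}
\times
\underbrace{\frac{\Delta\theta_{\text{posterior}}}{\Delta\theta_{\text{prior}}}}_{\text{Occam factor}\;\le\;1}
\]
```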

Over the past several decades, there have been many opportunities for experiments and observations to reveal specific signals of string theory or inflation. But none have been seen. Again and again, the observations turned out simpler and more minimal than anticipated.

It is high time, I believe, to acknowledge and learn from these failures and to start looking seriously for better alternatives.

A Simpler Alternative

Recently, my colleague Latham Boyle and I have tried to build simpler and more testable theories that do away with inflation and string theory. Taking our cue from the observations, we have attempted to tackle some of the most profound cosmic puzzles with a bare minimum of theoretical assumptions.

Our first attempts succeeded beyond our most optimistic hopes. Time will tell whether they survive further scrutiny. However, the progress we have already made convinces me that, in all likelihood, there are alternatives to the standard orthodoxy—which has become a straitjacket we need to break out of.

I hope our experience encourages others, especially younger researchers, to explore novel approaches guided strongly by the simplicity of the observations—and to be more skeptical about their elders’ preconceptions. Ultimately, we must learn from the universe and adapt our theories to it rather than vice versa.

Boyle and I started out by tackling one of cosmology’s greatest paradoxes. If we follow the expanding universe backward in time, using Einstein’s theory of gravity and the known laws of physics, space shrinks away to a single point, the “initial singularity.”

In trying to make sense of this infinitely dense, hot beginning, theorists including Nobel laureate Roger Penrose pointed to a deep symmetry in the basic laws governing light and massless particles. This symmetry, called “conformal” symmetry, means that neither light nor massless particles actually experience the shrinking away of space at the big bang.

By exploiting this symmetry, one can follow light and particles all the way back to the beginning. Doing so, Boyle and I found we could describe the initial singularity as a “mirror”: a reflecting boundary in time (with time moving forward on one side, and backward on the other).

Picturing the big bang as a mirror neatly explains many features of the universe which might otherwise appear to conflict with the most basic laws of physics. For example, for every physical process, quantum theory allows a “mirror” process in which space is inverted, time is reversed, and every particle is replaced with its anti-particle (a particle similar to it in almost all respects, but with the opposite electric charge).

According to this powerful symmetry, called CPT symmetry, the “mirror” process should occur at precisely the same rate as the original one. One of the most basic puzzles about the universe is that it appears to violate CPT symmetry because time always runs forward and there are more particles than anti-particles.
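
Written out schematically (this is the textbook statement of the symmetry, not anything specific to the mirror proposal), the three operations and the claim about rates are:

\mathrm{C}: \text{particle} \leftrightarrow \text{antiparticle}, \qquad \mathrm{P}: \vec{x} \to -\vec{x}, \qquad \mathrm{T}: t \to -t

\Gamma(\text{process}) = \Gamma(\text{CPT-transformed process})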

Our mirror hypothesis restores the symmetry of the universe. When you look in a mirror, you see your mirror image behind it: if you are left-handed, the image is right-handed and vice versa. The combination of you and your mirror image is more symmetrical than you are alone.

Likewise, when Boyle and I extrapolated our universe back through the big bang, we found its mirror image, a pre-bang universe in which (relative to us) time runs backward and antiparticles outnumber particles. For this picture to be true, we don't need the mirror universe to be real in the classical sense (just as your image in a mirror isn't real). Quantum theory, which rules the microcosmos of atoms and particles, challenges our intuition, so at this point the best we can do is to think of the mirror universe as a mathematical device that ensures the initial condition for the universe does not violate CPT symmetry.

Surprisingly, this new picture provided an important clue to the nature of the unknown cosmic substance called dark matter. Neutrinos are very light, ghostly particles which, typically, move at close to the speed of light and which spin as they move along, like tiny tops. If you point the thumb of your left hand in the direction the neutrino moves, then your four fingers indicate the direction in which it spins. The observed, light neutrinos are called “left-handed” neutrinos.

Heavy “right-handed” neutrinos have never been seen directly, but their existence has been inferred from the observed properties of light, left-handed neutrinos. Stable, right-handed neutrinos would be the perfect candidate for dark matter because they don’t couple to any of the known forces except gravity. Before our work, it was unknown how they might have been produced in the hot early universe.

Our mirror hypothesis allowed us to calculate exactly how many would form and to show they could explain the cosmic dark matter.

A testable prediction followed: If the dark matter consists of stable, right-handed neutrinos, then one of the three light neutrinos we know of must be exactly massless. Remarkably, this prediction is now being tested using observations of the gravitational clustering of matter made by large-scale galaxy surveys.

The Entropy of Universes

Encouraged by this result, we set about tackling another big puzzle: Why is the universe so uniform and spatially flat, not curved, on the largest visible scales? The cosmic inflation scenario was, after all, invented by theorists to solve this problem.

Entropy is a concept which quantifies the number of different ways a physical system can be arranged. For example, if we put some air molecules in a box, the most likely configurations are those which maximize the entropy—with the molecules more or less smoothly spread throughout space and sharing the total energy more or less equally. These kinds of arguments are used in statistical physics, the field which underlies our understanding of heat, work, and thermodynamics.
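
As a toy version of that counting argument (my own sketch, not a calculation from the paper): split N gas molecules between the two halves of a box and count the arrangements. The entropy, the logarithm of that count, peaks at the even, smoothed-out split.

from math import lgamma

def entropy(n_total, n_left):
    # S = ln(number of ways to choose which n_left molecules sit in the left half)
    #   = ln C(n_total, n_left), computed via log-gamma to avoid enormous factorials
    return lgamma(n_total + 1) - lgamma(n_left + 1) - lgamma(n_total - n_left + 1)

N = 100
best_split = max(range(N + 1), key=lambda k: entropy(N, k))
print(best_split)        # 50: the molecules spread evenly between the halves
print(entropy(N, 50))    # about 66.8, versus 0 for all molecules crammed on one side

The same logic, applied to gravity and whole universes instead of molecules in a box, is what the next step relies on.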

The late physicist Stephen Hawking and collaborators famously generalized statistical physics to include gravity. Using an elegant argument, they calculated the temperature and the entropy of black holes. Using our “mirror” hypothesis, Boyle and I managed to extend their arguments to cosmology and to calculate the entropy of entire universes.

To our surprise, the universe with the highest entropy (meaning it is the most likely, just like the molecules spread out evenly in the box) is flat and expands at an accelerated rate, just like the real one. So statistical arguments explain why the universe is flat and smooth and has a small, positive accelerating expansion, with no need for cosmic inflation.

How would the primordial density variations, usually attributed to inflation, have been generated in our symmetrical mirror universe? Recently, we showed that a specific type of quantum field (a dimension-zero field) generates exactly the type of density variations we observe, without inflation. Importantly, these density variations aren't accompanied by the long-wavelength gravitational waves which inflation predicts—and which haven't been seen.

These results are very encouraging. But more work is needed to show that our new theory is both mathematically sound and physically realistic.

Even if our new theory fails, it has taught us a valuable lesson. There may well be simpler, more powerful and more testable explanations for the basic properties of the universe than those the standard orthodoxy provides.

By facing up to cosmology’s deep puzzles, guided by the observations and exploring directions as yet unexplored, we may be able to lay more secure foundations for both fundamental physics and our understanding of the universe.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: The mirror universe, with the big bang at the center / Neil Turok, CC BY-SA

Witness 1.8 Billion Years of Earth’s Tectonic Dance in a New Animation https://singularityhub.com/2024/10/08/witness-1-8-billion-years-of-earths-tectonic-dance-in-a-new-animation/ Tue, 08 Oct 2024 14:00:19 +0000 https://singularityhub.com/?p=159052

Using information from inside the rocks on Earth’s surface, my colleagues and I have reconstructed the plate tectonics of the planet over the last 1.8 billion years.

It is the first time Earth’s geological record has been used like this, looking so far back in time. This has enabled us to make an attempt at mapping the planet over the last 40 percent of its history, which you can see in the animation below.

The work, led by Xianzhi Cao from the Ocean University of China, was recently published in the open-access journal Geoscience Frontiers.

A Beautiful Dance

Mapping our planet through its long history creates a beautiful continental dance—mesmerizing in itself and a work of natural art.

It starts with the map of the world familiar to everyone. Then India rapidly moves south, followed by parts of Southeast Asia as the past continent of Gondwana forms in the Southern Hemisphere.

Around 200 million years ago (Ma or mega-annum in the reconstruction), when the dinosaurs walked the Earth, Gondwana linked with North America, Europe, and northern Asia to form a large supercontinent called Pangaea.

Then, the reconstruction carries on back through time. Pangaea and Gondwana were themselves formed from older plate collisions. As time rolls back, an earlier supercontinent called Rodinia appears. It doesn't stop there. Rodinia, in turn, formed from the fragments of an even older supercontinent called Nuna, which broke apart about 1.35 billion years ago.

Why Map Earth’s Past?

Among the planets in the solar system, Earth is unique for having plate tectonics. Its rocky surface is split into fragments (plates) that grind into each other and create mountains or split away and form chasms that are then filled with oceans.

Apart from causing earthquakes and volcanoes, plate tectonics also pushes up rocks from the deep earth into the heights of mountain ranges. This way, elements which were far underground can erode from the rocks and wash into rivers and oceans. From there, living things can make use of these elements.

Among these essential elements is phosphorus, which forms the framework of DNA molecules, and molybdenum, which is used by organisms to strip nitrogen out of the atmosphere and make proteins and amino acids—building blocks of life.

Plate tectonics also exposes rocks that react with carbon dioxide in the atmosphere. Rocks locking up carbon dioxide is the main control on Earth’s climate over long time scales—much, much longer than the tumultuous climate change we are responsible for today.

A Tool for Understanding Deep Time

Mapping the past plate tectonics of the planet is the first stage in being able to build a complete digital model of Earth through its history.

Such a model will allow us to test hypotheses about Earth’s past. For example, why Earth’s climate has gone through extreme “Snowball Earth” fluctuations or why oxygen built up in the atmosphere when it did.

Indeed, it will allow us to much better understand the feedback between the deep planet and the surface systems of Earth that support life as we know it.

So Much More to Learn

Modeling our planet’s past is essential if we’re to understand how nutrients became available to power evolution. The first evidence for complex cells with nuclei—like all animal and plant cells—dates to 1.65 billion years ago.

This is near the start of this reconstruction and close to the time the supercontinent Nuna formed. We aim to test whether the mountains that grew at the time of Nuna formation may have provided the elements to power complex cell evolution.

Much of Earth's life photosynthesizes and liberates oxygen. This links plate tectonics with the chemistry of the atmosphere, and some of that oxygen dissolves into the oceans. In turn, a number of critical metals—like copper and cobalt—are more soluble in oxygen-rich water. In certain conditions, these metals then precipitate out of solution: in short, they form ore deposits.

Many metals form in the roots of volcanoes that occur along plate margins. By reconstructing where ancient plate boundaries lay through time, we can better understand the tectonic geography of the world and assist mineral explorers in finding ancient metal-rich rocks now buried under much younger mountains.

In this time of exploration of other worlds in the solar system and beyond, it is worth remembering there’s so much about our own planet we are only just beginning to glimpse.

There are 4.6 billion years of it to investigate, and the rocks we walk on contain the evidence for how Earth has changed over this time.

This first attempt at mapping the last 1.8 billion years of Earth’s history is a leap forward in the scientific grand challenge to map our world. But it is just that—a first attempt. The next years will see considerable improvement from the starting point we have now made.

The author would like to acknowledge this research was largely done by Xianzhi Cao, Sergei Pisarevsky, Nicolas Flament, Derrick Hasterok, Dietmar Muller and Sanzhong Li; as a co-author, he is just one cog in the research network. The author also acknowledges the many students and researchers from the Tectonics and Earth Systems Group at The University of Adelaide and national and international colleagues who did the fundamental geological work this research is based on.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: NASA/Goddard Space Flight Center Scientific Visualization Studio

DeepMind and BioNTech Bet AI Lab Assistants Will Accelerate Science https://singularityhub.com/2024/10/07/deepmind-and-biontech-bet-ai-lab-assistants-will-accelerate-science/ Mon, 07 Oct 2024 14:00:16 +0000 https://singularityhub.com/?p=159120 There has long been hope that AI could help accelerate scientific progress. Now, companies are betting the latest generation of chatbots could make useful research assistants.

Most efforts to accelerate scientific progress using AI have focused on solving fundamental conceptual problems, such as protein folding or the physics of weather modeling. But a big chunk of the scientific process is considerably more prosaic—deciding what experiments to do, coming up with experimental protocols, and analyzing data.

This can suck up an enormous amount of an academic's time, distracting them from higher-value work. That's why both Google DeepMind and BioNTech are currently developing tools designed to automate many of these more mundane jobs, according to the Financial Times.

At a recent event, DeepMind CEO Demis Hassabis said his company was working on a science-focused large language model that could act as a research assistant, helping design experiments to tackle specific hypotheses and even predict the outcome. BioNTech also announced at an AI innovation day last week that it had used Meta’s open-source Llama 3.1 model to create an AI assistant called Laila with a “detailed knowledge of biology.”

“We see AI agents like Laila as a productivity accelerator that’s going to allow the scientists, the technicians, to spend their limited time on what really matters,” Karim Beguir, chief executive of the company's AI subsidiary InstaDeep, told the Financial Times.

The bot showed off its capabilities in a live demonstration, where scientists used it to automate the analysis of DNA sequences and visualize results. According to Constellation Research, the model comes in various sizes and is integrated with InstaDeep’s DeepChain platform, which hosts various other AI models specializing in things like protein design or analyzing DNA sequences.

BioNTech and DeepMind aren’t the first to try turning the latest AI tech into an extra pair of helping hands around the lab. Last year, researchers showed that combining OpenAI’s GPT-4 model with tools for searching the web, executing code, and manipulating laboratory automation equipment could create a “Coscientist” that could design, plan, and execute complex chemistry experiments.
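
The pattern underneath such systems is a simple loop: the language model proposes an action, a tool carries it out, and the result is fed back in until the model decides it is done. Below is a deliberately stripped-down sketch of that loop; the tool stubs and the scripted call_model function are hypothetical stand-ins, not the actual Coscientist code.

TOOLS = {
    "search": lambda query: f"(stub) top web results for: {query}",
    "run_code": lambda source: "(stub) output of executing the code",
}

def call_model(history):
    # Stand-in for a real LLM call; it scripts two turns so the loop runs end to end:
    # first request a web search, then declare the task finished.
    if not any(line.startswith("OBSERVATION") for line in history):
        return {"tool": "search", "input": "typical conditions for a Suzuki coupling"}
    return {"tool": None, "answer": "Draft protocol written from the search results."}

def agent(task, max_steps=5):
    history = [f"TASK: {task}"]
    for _ in range(max_steps):
        step = call_model(history)
        if step["tool"] is None:  # the model says it is finished
            return step["answer"]
        observation = TOOLS[step["tool"]](step["input"])
        history.append(f"OBSERVATION: {observation}")
    return "Stopped: step limit reached."

print(agent("Plan a simple cross-coupling experiment"))

A real system swaps call_model for an actual model API call and the stubs for genuine search, code-execution, and lab-automation tools; the loop itself barely changes.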

There's also evidence that AI could help decide what research direction to take. Scientists used Anthropic's Claude 3.5 model to generate thousands of new research ideas, which the model then ranked on originality. When human reviewers assessed the ideas on criteria like novelty, feasibility, and expected effectiveness, they found the AI-generated ideas were on average more original and exciting than those dreamed up by human participants.

However, there are likely limits to how much AI can contribute to the scientific process. A collaboration between academics and Tokyo-based startup Sakana AI made waves with an “AI scientist” focused on machine learning research. It was able to conduct literature reviews, formulate hypotheses, carry out experiments, and write up a paper. But the research produced was judged incremental at best, and other researchers suggested the output was likely unreliable due to the nature of large language models.

This highlights a central problem for using AI to accelerate science—simply churning out papers or research results is of little use if they’re not any good. As a case in point, when researchers dug into a collection of two million AI-generated crystals produced by DeepMind, they found almost none met the important criteria of “novelty, credibility, and utility.”

Academia is already blighted by paper mills that churn out large quantities of low-quality research, Karin Verspoor of the Royal Melbourne Institute of Technology in Australia writes in The Conversation. Without careful oversight, new AI tools could turbocharge this trend.

However, it would be unwise to ignore the potential of AI to improve the scientific process. The ability to automate much of science's grunt work could prove invaluable, and as long as these tools are deployed in ways that augment humans rather than replace them, their contribution could be significant.

Image Credit: Shrinath / Unsplash
