While companies like Neuralink have recently provided some flashy demos of what could be achieved by hooking brains up to computers, the technology still has serious limitations preventing wider use.
Non-invasive approaches like electroencephalograms (EEGs) provide only coarse readings of neural signals, limiting their functionality. Directly implanting electrodes in the brain can provide a much clearer connection, but such risky medical procedures are hard to justify for all but the most serious conditions.
California-based startup Science Corporation thinks that an implant using living neurons to connect to the brain could better balance safety and precision. In recent non-peer-reviewed research posted on bioRxiv, the group showed a prototype device could connect with the brains of mice and even let them detect simple light signals.
“The principal advantages of a biohybrid implant are that it can dramatically change the scaling laws of how many neurons you can interface with versus how much damage you do to the brain,” Alan Mardinly, director of biology at Science Corporation, told New Scientist.
The company’s CEO, Max Hodak, is a former president of Neuralink, and Science Corporation also produces a retinal implant using more conventional electronics that can restore vision in some patients. But the company has been experimenting with so-called “biohybrid” approaches, which Hodak thinks could provide a more viable long-term solution for brain-machine interfaces.
“Placing anything into the brain inevitably destroys some amount of brain tissue,” he wrote in a recent blog post. “Destroying 10,000 cells to record from 1,000 might be perfectly justified if you have a serious injury and those thousand neurons create a lot of value—but it really hurts as a scaling characteristic.”
Instead, the company has developed a honeycomb-like structure made of silicon featuring more than 100,000 “microwells”—cylindrical holes roughly 15 micrometers deep. Individual neurons are inserted into each of these microwells, and the array can then be surgically implanted onto the surface of the brain.
The idea is that while the neurons remain housed in the implant, their axons—long strands that carry nerve signals away from the cell body—and their dendrites—the branched structures that form synapses with other cells—will be free to integrate with the host’s brain cells.
To see if the idea works in practice, they implanted the device in mice, using neurons genetically modified to react to light. Three weeks after implantation, they carried out a series of experiments in which they trained the mice to respond whenever a light was shone on the device. The mice were able to detect when this happened, suggesting the light-sensitive neurons had integrated with their native brain cells.
While it’s early days, the approach has significant benefits. You can squeeze many more neurons than electrodes into a millimeter-scale chip, and each of those neurons can form many connections. That means the potential bandwidth of a biohybrid device could be much higher than that of a conventional neural implant. The approach is also much less damaging to the patient’s brain.
However, the lifetime of these kinds of devices could be a concern—after 21 days, only 50 percent of the neurons had survived. And the company needs to find a way to ensure the neurons don’t elicit a negative immune response in the patient.
If the approach works, though, it could be an elegant and potentially safer way to merge man and machine.
Image Credit: Science Corporation
Quantum computing promises to solve problems that are beyond classical computers by harnessing the strange effects of quantum mechanics. But to do so we’ll need processors made up of hundreds of thousands, if not millions, of qubits (the quantum equivalent of bits).
Having just crossed the 1,000-qubit mark, today’s devices are a long way off, but more importantly, their qubits are incredibly unreliable. The devices are highly susceptible to errors, which can derail any attempt to carry out calculations long before an algorithm has run its course.
That’s why error correction has been a major focus for quantum computing companies in recent years. Now, Google’s new Willow quantum processor, unveiled Monday, has crossed a critical threshold suggesting that as the company’s devices get larger, their ability to suppress errors will improve exponentially.
“This is the most convincing prototype for a scalable logical qubit built to date,” Hartmut Neven, founder and lead of Google Quantum AI, wrote in a blog post. “It’s a strong sign that useful, very large quantum computers can indeed be built.”
Quantum error-correction schemes typically work by spreading the information needed to carry out calculations across multiple qubits. This introduces redundancy to the systems, so that even if one of the underlying qubits experiences an error, the information can be recovered. Using this approach, many “physical qubits” can be combined to create a single “logical qubit.”
In general, the more physical qubits you use to create each logical qubit, the more resistant it is to errors. But this is only true if the error rate of the individual qubits is below a certain threshold. Otherwise, the increased chance of an error from adding more faulty qubits outweighs the benefits of redundancy.
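To make the threshold idea concrete, here is a toy calculation using a simple repetition code, where a logical bit is copied across n physical bits and read out by majority vote. It is far simpler than the surface code Google uses, and the numbers are purely illustrative, but it shows the same qualitative behavior: below the threshold, adding physical qubits suppresses logical errors, while above it, they make things worse.

```python
# Toy illustration of the error-correction threshold using a repetition
# code (majority vote), NOT the surface code Google uses. A logical bit is
# stored in n physical copies; a logical error occurs when more than half
# of the copies flip.
from math import comb

def logical_error_rate(p: float, n: int) -> float:
    """Probability that a majority of n physical copies flip, given a
    per-copy flip probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for p in (0.01, 0.2, 0.6):  # physical error rates below and above threshold
    rates = ", ".join(f"n={n}: {logical_error_rate(p, n):.2e}" for n in (3, 5, 7, 9))
    print(f"p={p}: {rates}")
```

At p = 0.01 the logical error rate shrinks rapidly as n grows, while at p = 0.6 it only gets worse, which is the qualitative pattern behind the scaling results described below.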
While other groups have demonstrated error correction that produces modest accuracy improvements, Google’s results are definitive. In a series of experiments reported in Nature, they encoded logical qubits into increasingly large arrays—starting with a three-by-three grid—and found that each time they increased the size the error rate halved. Crucially, the team found that the logical qubits they created lasted more than twice as long as the physical qubits that make them up.
“The more qubits we use in Willow, the more we reduce errors, and the more quantum the system becomes,” wrote Neven.
This was made possible by significant improvements in the underlying superconducting qubit technology Google uses to build its processors. In the company’s previous Sycamore processor, the average operating lifetime of each physical qubit was roughly 20 microseconds. But thanks to new fabrication techniques and circuit optimizations, Willow’s qubits have more than tripled this to 68 microseconds.
As well as showing off the chip’s error-correction prowess, the company’s researchers also demonstrated its speed. They carried out a computation in under five minutes that would take the world’s second fastest supercomputer, Frontier, 10 septillion years to complete. However, the test they used is a contrived one with little practical use. The quantum computer simply has to execute random circuits with no useful purpose, and the classical computer then has to try to emulate it.
The big test for companies like Google is to go from such proofs of concept to solving commercially relevant problems. The new error-correction result is a big step in the right direction, but there’s still a long way to go.
Julian Kelly, who leads the company’s quantum hardware division, told Nature that solving practical challenges will likely require error rates of around one per ten million steps. Achieving that will necessitate logical qubits made of roughly 1,000 physical qubits each, though breakthroughs in error-correction schemes could bring this down by several hundred qubits.
More importantly, Google’s demonstration simply involved storing information in its logical qubits rather than using them to carry out calculations. Speaking to MIT Technology Review in September, when a preprint of the research was posted to arXiv, Kenneth Brown from Duke University noted that carrying out practical calculations would likely require a quantum computer to perform roughly a billion logical operations.
So, despite the impressive results, there’s still a long road ahead to large-scale quantum computers that can do anything useful. However, Google appears to have reached an important inflection point that suggests this vision is now within reach.
Image Credit: Google
While there has been significant progress in the development of wearable and implantable technology in recent years, most electronic materials are hard, rigid, and made with toxic metals. A variety of approaches for creating “soft electronics” has emerged, but finding ones that are durable, power-efficient, and easy to manufacture is a significant challenge.
Organic ferroelectric materials are promising because they exhibit spontaneous polarization, which means they have a stable electric field pointing in a particular direction. This polarization can be flipped by applying an external electrical field, allowing them to function like a bit in a conventional computer.
The most successful soft ferroelectric is a material called polyvinylidene fluoride (PVDF), which has been used in commercial products like wearable sensors, medical imaging, underwater navigation devices, and soft robots. But PVDF’s electrical properties can break down when exposed to higher temperatures, and it requires high voltages to flip its polarization.
Now, in a paper published in Nature, researchers at Northwestern University have shown that combining the material with short chains of amino acids known as peptides can dramatically reduce power requirements and boost heat tolerance. And the incorporation of biomolecules into the material opens the prospect of directly interfacing electronics with the body.
To create their new “electric plastic,” the team used a type of molecule known as a peptide amphiphile. These molecules feature a water-repelling component that helps them self-assemble into complex structures. The researchers connected these peptides to short strands of PVDF and exposed them to water, causing the peptides to cluster together.
This made the strands coalesce into long, flexible ribbons. In testing, the team found the material could withstand temperatures of 110 degrees Celsius, which is roughly 40 degrees higher than previous PVDF materials. Switching the material’s polarization also required significantly lower voltages, despite being made up of 49 percent peptides by weight.
The researchers told Science that as well as being able to store energy or information in the material’s polarization, it’s also biocompatible. This means it could be used in everything from wearable devices that monitor vital signs to flexible implants that can replace pacemakers. The peptides could also be connected to proteins inside cells to record biological activity or even stimulate it.
One challenge is that although PVDF is biocompatible, it can break down into so-called “forever chemicals,” which persist in the environment for centuries and which studies have linked to health and environmental problems. Several other chemicals the researchers used to fabricate their material also fall into this category.
“This advance has enabled a number of attractive properties compared to other organic polymers,” Frank Leibfarth of UNC Chapel Hill told Science. But he pointed out that the researchers had only tested very small amounts of the molecule, and it’s unclear how easy it will be to scale up production.
If the researchers can extend the approach to larger scales, however, it could bring a host of exciting new possibilities at the interface between our bodies and technology.
Image Credit: Mark Seniw/Center for Regenerative Nanomedicine/Northwestern University
Humans are increasingly engaging with wearable technology as it becomes more adaptable and interactive. One of the most intimate forms gaining acceptance is augmented reality glasses.
Last week, Meta debuted a prototype of the most recent version of their AR glasses—Orion. They look like reading glasses and use holographic projection to allow users to see graphics projected through transparent lenses into their field of view.
Meta chief Mark Zuckerberg called Orion “the most advanced glasses the world has ever seen.” He said they offer a “glimpse of the future” in which smart glasses will replace smartphones as the main mode of communication.
But is this true or just corporate hype? And will AR glasses actually benefit us in new ways?
The technology used to develop Orion glasses is not new.
In the 1960s, computer scientist Ivan Sutherland introduced the first augmented reality head-mounted display. Two decades later, Canadian engineer and inventor Stephen Mann developed the first glasses-like prototype.
Throughout the 1990s, researchers and technology companies developed the capability of this technology through head-worn displays and wearable computing devices. Like many technological developments, these were often initially focused on military and industry applications.
In 2013, after smartphone technology emerged, Google entered the AR glasses market. But consumers were uninterested, citing concerns about privacy, high cost, limited functionality, and a lack of a clear purpose.
This did not discourage other companies—such as Microsoft, Apple, and Meta—from developing similar technologies.
Meta cites a range of reasons why Orion is the world’s most advanced pair of glasses, such as its miniaturized technology with large fields of view and holographic displays. It said these displays provide “compelling AR experiences, creating new human-computer interaction paradigms […] one of the most difficult challenges our industry has ever faced.”
Orion also has an inbuilt smart assistant (Meta AI) to help with tasks through voice commands, eye and hand tracking, and a wristband for swiping, clicking, and scrolling.
With these features, it is not difficult to agree that AR glasses are becoming more user-friendly for mass consumption. But gaining widespread consumer acceptance will be challenging.
Meta will have to address four types of challenges:
These factors are not unlike what we saw in the 2000s when smartphones gained acceptance. Just like then, there are early adopters who will see more benefits than risks in adopting AR glasses, creating a niche market that will gradually expand.
Similar to what Apple did with the iPhone, Meta will have to build a digital platform and ecosystem around Orion.
This will allow for broader applications in education (for example, virtual classrooms), remote work, and enhanced collaboration tools. Already, Orion’s holographic display allows users to overlay digital content on the real world, and because it is hands-free, communication will be more natural.
Smart glasses are already being used in many industrial settings, such as logistics and healthcare. Meta plans to launch Orion for the general public in 2027.
By that time, AI will have likely advanced to the point where virtual assistants will be able to see what we see and the physical, virtual, and artificial will co-exist. At this point, it is easy to see that the need for bulky smartphones may diminish and that through creative destruction, one industry may replace another.
This is supported by research indicating the virtual and augmented reality headset industry will be worth $370 billion by 2034.
The remaining question is whether this will actually benefit us.
There is already much debate about the effect of smartphone technology on productivity and wellbeing. Some argue that it has benefited us, mainly through increased connectivity, access to information, and productivity applications.
But others say it has just created more work, distractions, and mental fatigue.
If Meta has its way, AR glasses will solve this by enhancing productivity. Consulting firm Deloitte agrees, saying the technology will provide hands-free access to data, faster communication and collaboration through data-sharing.
It also claims smart glasses will reduce human errors, enable data visualization, and monitor the wearer’s health and wellbeing. This will ensure a quality experience, social acceptance, and seamless integration with physical processes.
But whether or not that all comes true will depend on how well companies such as Meta address the many challenges associated with AR glasses.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Meta
Unlike data centers, DNA is incredibly compact. These molecules package an entire organism’s genetic blueprint into tiny but sophisticated structures inside each cell. Kept cold—say, inside a freezer or in the Siberian tundra—DNA and the data encoded within can last millennia.
But DNA is hardly just a storage device. Myriad molecules turn genes on and off—a bit like selectively running bits of code—to orchestrate everyday cellular functions. The body “reads” bits of the genetic code in a particular cell at a specific time and, together, compiles the data into a smoothly operating, healthy life.
Scientists have long eyed DNA as a computing device to complement everyday laptops. With the world’s data increasing at an exponential rate, silicon chips are struggling to meet the demands of data storage and computation. The rise of large language models and other modes of artificial intelligence is further pushing the need for alternative solutions.
But the problem with DNA storage is it often gets destroyed after “reading” the data within.
Last month, a team from North Carolina State University and Johns Hopkins University found a workaround. They embedded DNA molecules, encoding multiple images, into a branched gel-like structure resembling a brain cell.
Dubbed “dendricolloids,” the structures stored DNA files far better than those freeze-dried alone. DNA within dendricolloids can be dried and rehydrated roughly 170 times without damaging the stored data. According to one estimate, each DNA strand could last over two million years at normal freezer temperatures.
Unlike in previous DNA computers, the data can be erased and replaced, like memory on a classical computer, allowing the system to solve multiple problems—including a simple chess game and sudoku.
Until now, DNA was mainly viewed as a long-term storage device or single-use computer. Developing DNA technology that can store, read, “rewrite, reload, or compute specific data files” repeatedly seemed difficult or impossible, said study author Albert Keung in a press release.
However, “we’ve demonstrated that these DNA-based technologies are viable, because we’ve made one,” he said.
This is hardly the first attempt to hijack the code of life to increase storage and computation.
The first steps taken were in data storage. Our computers run on binary bits of information encoded in zeros and ones. DNA, in contrast, uses four different molecules typically represented by the letters A, T, C, and G. This means that different pairs of zeros and ones—00, 01, 10, 11—can be encoded into different DNA letters. Because of the way it’s packaged in cells, DNA can theoretically store far more data in less space than digital devices.
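As a rough illustration of that mapping, here is a minimal sketch that packs two bits into each DNA letter and unpacks them again. The specific pairing of bit pairs to bases is an arbitrary convention chosen for the example, not the encoding scheme used in the study.

```python
# Minimal two-bits-per-base encoder/decoder. The bit-to-base assignment is
# an illustrative convention, not the scheme used by the researchers.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA-letter string, two bits per base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Turn a DNA-letter string back into the original bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hi")
print(strand)                  # CAGACGGC
assert decode(strand) == b"Hi"
```

Each base carries two bits, so a strand needs only a quarter as many symbols as the equivalent bit string, before even accounting for how densely DNA packs in three dimensions.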
“You could put a thousand laptops’ worth of data into DNA-based storage that’s the same size as a pencil eraser,” said Keung.
With any computer, we need to be able to search and retrieve information. Our cells have evolved mechanisms that read specific parts of a DNA strand on demand—a sort of random access memory that extracts a particular piece of data. Previous studies have tapped into these systems to store and retrieve books, images, and GIFs inside DNA files. Scientists have also used microscopic glass beads with DNA “labels” as a kind of filing system for easy extraction.
But storing and extracting data is only half of the story. A computer needs to, well, compute.
Last year, a team developed a programmable DNA computer that can run billions of different circuits with minimal energy. Traditionally, these molecular machines work by allowing different strands to grab onto each other depending on calculation needs. Different pairs could signal “and,” “or,” and “not” logic gates—recapitulating the heart of today’s digital computers.
But reading and computing often destroys the original DNA data, making most DNA-based systems single-use. Scientists have also developed another type of DNA computer, which monitors changes in the molecule’s structures. These can be rewritten. Similar to standard hard drives, they can encode multiple rounds of data, but they’re also harder to scale.
The new study combined the best of both worlds. The team engineered a DNA computer that can store information, perform computations, and reset the system for another round.
The core of the system relies on the central dogma of biology. DNA sits safely packaged within cells. When genes are turned on, their data is transcribed into RNA, which in turn directs the production of proteins. Because the DNA itself stays put, adding protein “switches” that turn genes up or down changes the genetic readout in RNA but keeps the original genetic sequences intact.
Because the original data doesn’t change, it’s possible to run multiple rounds of RNA-based calculations from a single DNA-encoded dataset—with improvements.
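A minimal sketch of that read-only step, stripped to its bookkeeping: each readout transcribes the stored DNA “file” into a fresh RNA copy (with U standing in for T), and the original strand is never modified. Real transcription is a chemical process and far more involved than this.

```python
# Toy model of repeated, non-destructive readout: transcription copies the
# stored DNA strand into RNA (T becomes U) without touching the original.
DNA_TO_RNA = {"A": "A", "T": "U", "C": "C", "G": "G"}

def transcribe(dna: str) -> str:
    """Return an RNA copy of the stored strand; the input is left intact."""
    return "".join(DNA_TO_RNA[base] for base in dna)

stored_file = "ATCGGCTA"                                 # illustrative stored sequence
readouts = [transcribe(stored_file) for _ in range(3)]   # three separate readouts
print(readouts)      # ['AUCGGCUA', 'AUCGGCUA', 'AUCGGCUA']
print(stored_file)   # original data unchanged: ATCGGCTA
```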
Based on these ideas, the team engineered a jelly-like structure with branches similar to a brain cell. Dubbed “dendricolloids,” the soft materials allowed each DNA strand to grab onto surrounding material “without sacrificing the data density that makes DNA attractive for data storage in the first place,” said study author Orlin Velev.
“We can copy DNA information directly from the material’s surface without harming the DNA. We can also erase targeted pieces of DNA and then rewrite to the same surface, like deleting and rewriting information stored on the hard drive,” said study author Kevin Lin.
To test out their system, the team embedded a synthetic DNA sequence of 200 letters into the material. After they added a molecular cocktail that converts DNA sequences into RNA, the material generated RNA repeatedly over 10 rounds. In theory, the resulting RNA could encode 46 terabytes of data stored at normal fridge and freezer temperatures.
The dendricolloids could also absorb over 2,700 different DNA strands, each nearly 250 letters long, and protect their data. In one test, the team encoded three different JPEG files into the structures, translating digital data into biological data. In simulations that mimicked accessing the DNA files, the team could reconstruct the data 10 times without losing it in the process.
The team next took inspiration from a biological “eraser” of sorts. These proteins eat away at RNA without damaging the DNA blueprint. This process controls how a cell performs its usual functions—for example, by destroying RNA strands detrimental to health.
As a proof of concept, the team developed 1,000 different DNA snippets to solve multiple puzzles. For a simple game of chess, each DNA molecule encoded nine potential positions. The molecules were pooled, with each representing a potential configuration. This data allowed the system to learn. For example, one gene, when turned on, could direct a move on the chessboard by replicating itself in RNA. Another could lower RNA levels detrimental to the game.
These DNA-to-RNA processes were controlled by an engineered protein whose job it was to keep the final results in check. As a last step, all RNA strands violating the rules were destroyed, leaving behind only those representing the final, expected solution. In addition to chess, the team implemented this process to solve simple sudoku puzzles too.
The DNA computer is still in its infancy. But unlike previous generations, this one captures storage and compute in one system.
“There’s a lot of excitement about molecular data storage and computation, but there have been significant questions about how practical the field may be,” said Keung. “We wanted to develop something that would inspire the field of molecular computing.”
Image Credit: Luke Jones / Unsplash
It’s long been prophesied that modern cryptography, employed universally across the devices we use every day, will die at the hands of the first practical quantum computer.
Naturally, researchers have been searching for secure alternatives.
In 2016, the US National Institute of Standards and Technology (NIST) announced a competition to create the first post-quantum cryptographic algorithms. These programs would run on today’s computers but defeat attacks by future quantum computers.
Beginning with a pool of 82 submissions from around the world, NIST narrowed the list to four in 2022. The finalists went by the names CRYSTALS-Kyber, CRYSTALS-Dilithium, SPHINCS+, and FALCON. This week, NIST announced three of these have become the first standardized post-quantum algorithms. It will release a draft standard for the last, FALCON, by the end of the year.
The algorithms, according to NIST, represent the best of the best. Kyber, Dilithium, and FALCON employ an approach called lattice-based cryptography, while SPHINCS+ uses an alternative hash-based method. They’ve survived several years of stress testing by security experts and are ready for immediate use.
The release includes code for the algorithms alongside instructions on how to implement them and their intended uses. As with earlier encryption standards developed by the agency in the 1970s, it’s hoped wide adoption will ensure interoperability between digital products and consistency, lowering the risk of error. The first of the group, renamed ML-KEM, is for general encryption, while the other three (now ML-DSA, SLH-DSA, and FN-DSA) are for digital signatures—that is, proving that sources are who they say they are.
Arriving at standards was a big effort, but broad adoption will be bigger.
While the idea that future quantum computers could defeat standard encryption is fairly uncontroversial, when it will happen is murkier. Today’s machines, still small and finicky, are nowhere near up to the task. The first machines able to complete useful tasks faster than classical computers aren’t expected until later this decade at the very earliest. But it’s not clear how powerful these computers will have to be to break encryption.
Still, there are solid reasons to get started now, according to proponents. For one, it’ll take as long as 10 to 15 years to roll out post-quantum cryptography. So, the earlier we kick things off the better. Also, hackers may steal and store encrypted data today with the expectation it can be cracked later—a strategy known as “harvest now, decrypt later.”
“Today, public key cryptography is used everywhere in every device,” Lily Chen, head of cryptography at NIST, told IEEE Spectrum. “Now our task is to replace the protocol in every device, which is not an easy task.”
There are already some early movers, however. The Signal Protocol underpinning Signal, WhatsApp, and Google Messages—products used by more than a billion people—implemented post-quantum cryptography based on NIST’s Kyber algorithm alongside more traditional encryption in late 2023. Apple did the same for iMessages earlier this year.
It’s notable both opted to run the two in parallel, as opposed to going all-in on post-quantum security. NIST’s algorithms have been scrutinized, but they haven’t been out in the wild for nearly as long as traditional approaches. There’s no guarantee they won’t be defeated in the future.
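The hybrid pattern itself is straightforward to sketch. In the example below, two placeholder byte strings stand in for the shared secrets produced by a classical key exchange (such as X25519) and a post-quantum KEM (such as ML-KEM); generating real secrets requires a cryptographic library and is out of scope here. The point is simply that the session key is derived from both secrets, so an attacker would have to break both schemes to recover it.

```python
# Sketch of hybrid key derivation: combine a classical and a post-quantum
# shared secret into one session key using HKDF (RFC 5869, SHA-256).
# The two "secrets" below are placeholders, not output from real key exchanges.
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

classical_secret = b"\x01" * 32   # placeholder for an X25519 shared secret
pq_secret = b"\x02" * 32          # placeholder for an ML-KEM shared secret

prk = hkdf_extract(salt=b"hybrid-handshake", ikm=classical_secret + pq_secret)
session_key = hkdf_expand(prk, info=b"session key", length=32)
print(session_key.hex())
```

This shows only the combining step, not the Signal or iMessage protocols themselves.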
An algorithm in the running two years ago, SIKE, met a quick and shocking end when researchers took it down with some clever math and a desktop computer. And this April, Tsinghua University’s Yilei Chen published a preprint on arXiv claiming to show lattice-based cryptography actually was vulnerable to quantum computers—though his work was later shown to be flawed, and lattice cryptography remains secure.
To be safe, NIST is developing backup algorithms. The agency is currently vetting two groups representing alternative approaches for general encryption and digital signatures. In parallel, scientists are working on other forms of secure communication using quantum systems themselves, though these are likely years from completion and may complement rather than replace post-quantum algorithms like those NIST is standardizing.
“There is no need to wait for future standards,” said Dustin Moody, a NIST mathematician heading the project, in a release. “Go ahead and start using these three. We need to be prepared in case of an attack that defeats the algorithms in these three standards, and we will continue working on backup plans to keep our data safe. But for most applications, these new standards are the main event.”
Image Credit: IBM
Crypto mining can be a profitable but highly volatile endeavor. It involves creating massive datacenters packed with specialized computer chips and using them to solve the mathematical puzzles underpinning the security of various cryptocurrencies. In exchange, the miners win some of that cryptocurrency as a reward.
Most miners make the bulk of their money from bitcoin. But earlier this year, an event called “the halving” seriously hit earnings. Every four years, the bitcoin protocol halves the mining reward—that is, how much bitcoin miners receive in exchange for solving math puzzles—to increase the scarcity of the coin. Normally, this causes the price of bitcoin to jump in response, but this time around that didn’t happen, severely impacting the profitability of miners.
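The schedule behind the halving is simple arithmetic: the block reward started at 50 bitcoin in 2009 and is cut in half every 210,000 blocks, which works out to roughly every four years.

```python
# Bitcoin block-reward schedule: 50 BTC at launch in 2009, halved every
# 210,000 blocks (halvings landed in roughly 2012, 2016, 2020, and 2024).
INITIAL_REWARD = 50.0

for halvings in range(5):
    reward = INITIAL_REWARD / 2**halvings
    print(f"after {halvings} halvings: {reward:g} BTC per block")

# The April 2024 halving, the event the article refers to, cut the reward
# from 6.25 to 3.125 BTC per block.
```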
Fortunately for them, another industry with a voracious appetite for computing has arrived just in time. The rush to train massive generative AI models has left companies scrambling for chips, datacenter space, and reliable access to large amounts of cheap power, things many miners already have in abundance.
“It [normally] takes 3-5 years to build an HPC-grade data center from scratch,” JPMorgan analysts wrote in a recent note, according to the Financial Times. “This scramble for power puts a premium on companies with access to cheap power today.”
While crypto mining and training AI aren’t exactly the same, they share crucial similarities. Both require huge datacenters specialized to carry out one particular job, and they both consume large amounts of power. But because miners have been playing this game for a long time, and most AI companies have only started trying to train truly massive models since the launch of ChatGPT less than two years ago, the miners have a big head start.
They’ve already spent years scouring the country for places with abundant cheap power and plenty of space to build large datacenters. More importantly, they’ve already gone through the time-consuming process of getting approvals, negotiating power licenses, and getting the facilities up and running.
The rapid expansion in demand for AI training is straining grids in some areas, and so, many jurisdictions in North America have implemented long waitlists for new datacenters, according to Time. Already, roughly 83 percent of datacenter capacity currently under construction has been leased in advance, says Bloomberg.
This means the biggest bottleneck for many AI companies is finding the hardware to train their models, and that presents a new opportunity for crypto miners. “You’ve seen a number of crypto miners that were sort of struggling that have actually made a full pivot away,” Kent Draper, chief commercial officer of crypto miner IREN, told Time.
Converting a bitcoin mine into an AI training cluster isn’t a straight swap. AI training is typically done on GPUs, while bitcoin mining uses specialized mining chips from companies like Bitmain. But often, it’s not so much the chips AI companies are after as the infrastructure and power access the mine has already set up.
In June, crypto miner Core Scientific announced it would host 270 megawatts of GPUs for the AI infrastructure startup CoreWeave. “We view the opportunity in AI today to be one where we can convert existing infrastructure we own to host clients who are looking to install very large arrays of GPUs for their clients that are ultimately AI clients,” Core Scientific CEO Adam Sullivan told Bloomberg.
Some miners are also operating GPUs themselves. German miner Northern Data had been focused on mining the Ethereum cryptocurrency, but a major software update to the coin’s blockchain in 2022 did away with mining. Pivoting, the company purchased $800 million worth of Nvidia’s latest GPUs to launch a 20,000-GPU AI cluster, one of the largest in Europe, according to Bloomberg.
Other miners like Hut 8 and IREN are investing heavily in new chips to more proactively chase the AI boom. Often, AI training is happening side-by-side with crypto mining. “We view them as mutually complementary,” IREN’s Draper told Time. “Bitcoin is instant revenue but somewhat more volatile. AI is customer-dependent—but once you have customers, it’s contracted and more stable.”
This new trend could provide some modest environmental benefits too. People are concerned about the enormous power consumption of both AI training and bitcoin mining. If increasing demand for AI simply displaces existing mining infrastructure, rather than requiring new power-hungry datacenters, that could help curtail the growing carbon impact of the industry.
However, for miners, chasing the latest gold rush can be a risky strategy. There are growing concerns the AI industry is in a bubble close to bursting. If that happens, the rich new seam miners have started to tap could dry up very quickly.
Update 8/8/2024: The article previously stated Northern Data bought $800 million worth of new Nvidia chips to mine Ethereum and repurposed them to train AI models. However, Northern Data bought the chips exclusively for their AI business. The article has been updated to reflect this.
Of these, Cerebras is one of the weirdest. The company makes computer chips the size of tortillas bristling with just under a million processors, each linked to its own local memory. The processors are small but lightning quick as they don’t shuttle information to and from shared memory located far away. And the connections between processors—which in most supercomputers require linking separate chips across room-sized machines—are quick too.
This means the chips are stellar for specific tasks. Recent preprint studies of two of these tasks—one simulating molecules and the other training and running large language models—show the wafer-scale advantage can be formidable. The chips outperformed Frontier, the world’s top supercomputer, in the former. They also showed a stripped-down AI model could use a third of the usual energy without sacrificing performance.
The materials we make things with are crucial drivers of technology. They usher in new possibilities by breaking old limits in strength or heat resistance. Take fusion power. If researchers can make it work, the technology promises to be a new, clean source of energy. But liberating that energy requires materials to withstand extreme conditions.
Scientists use supercomputers to model how the metals lining fusion reactors might deal with the heat. These simulations zoom in on individual atoms and use the laws of physics to guide their motions and interactions at grand scales. Today’s supercomputers can model materials containing billions or even trillions of atoms with high precision.
But while the scale and quality of these simulations has progressed a lot over the years, their speed has stalled. Due to the way supercomputers are designed, they can only model so many interactions per second, and making the machines bigger only compounds the problem. This means the total length of molecular simulations has a hard practical limit.
Cerebras partnered with Sandia, Lawrence Livermore, and Los Alamos National Laboratories to see if a wafer-scale chip could speed things up.
The team assigned a single simulated atom to each processor. So that information about each atom’s position, motion, and energy could be exchanged quickly, processors modeling atoms that would be physically close in the real world were neighbors on the chip too. Depending on their properties at any given time, atoms could hop between processors as they moved about.
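The layout can be pictured as a locality-preserving assignment: bin each atom's coordinates so atoms near each other in the simulated box land on processors near each other on the wafer. The grid size, box dimensions, and flat 2D binning below are illustrative simplifications, not details of the actual Cerebras implementation.

```python
# Illustrative locality-preserving mapping of atoms to a 2D processor grid.
# Grid and box sizes are made up; the real decomposition is more
# sophisticated and accounts for all three spatial dimensions.
import numpy as np

GRID = 64     # pretend 64 x 64 block of processors
BOX = 10.0    # simulated box edge length, arbitrary units

def processor_for(position):
    """Map a 3D atom position to a (row, col) processor by binning x and y."""
    x, y, _ = position
    row = min(int(x / BOX * GRID), GRID - 1)
    col = min(int(y / BOX * GRID), GRID - 1)
    return row, col

rng = np.random.default_rng(0)
atoms = rng.uniform(0, BOX, size=(5, 3))   # five random atom positions
for atom in atoms:
    print(np.round(atom, 2), "->", processor_for(atom))
```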
The team modeled 800,000 atoms in three materials—copper, tungsten, and tantalum—that might be useful in fusion reactors. The results were pretty stunning, with simulations of tantalum yielding a 179-fold speedup over the Frontier supercomputer. That means the chip could crunch a year’s worth of work on a supercomputer into a few days and significantly extend the length of simulation from microseconds to milliseconds. It was also vastly more efficient at the task.
“I have been working in atomistic simulation of materials for more than 20 years. During that time, I have participated in massive improvements in both the size and accuracy of the simulations. However, despite all this, we have been unable to increase the actual simulation rate. The wall-clock time required to run simulations has barely budged in the last 15 years,” Aidan Thompson of Sandia National Laboratories said in a statement. “With the Cerebras Wafer-Scale Engine, we can all of a sudden drive at hypersonic speeds.”
Although the chip increases modeling speed, it can’t compete on scale. The number of simulated atoms is limited to the number of processors on the chip. Next steps include assigning multiple atoms to each processor and using new wafer-scale supercomputers that link 64 Cerebras systems together. The team estimates these machines could model as many as 40 million tantalum atoms at speeds similar to those in the study.
While simulating the physical world could be a core competency for wafer-scale chips, they’ve always been focused on artificial intelligence. The latest AI models have grown exponentially, meaning the energy and cost of training and running them has exploded. Wafer-scale chips may be able to make AI more efficient.
In a separate study, researchers from Neural Magic and Cerebras worked to shrink the size of Meta’s 7-billion-parameter Llama language model. To do this, they made what’s called a “sparse” AI model where many of the algorithm’s parameters are set to zero. In theory, this means they can be skipped, making the algorithm smaller, faster, and more efficient. But today’s leading AI chips—called graphics processing units (or GPUs)—read algorithms in chunks, meaning they can’t skip every zeroed out parameter.
Because memory is distributed across a wafer-scale chip, it can read every parameter and skip zeroes wherever they occur. Even so, extremely sparse models don’t usually perform as well as dense models. But here, the team found a way to recover lost performance with a little extra training. Their model maintained performance—even with 70 percent of the parameters zeroed out. Running on a Cerebras chip, it sipped a meager 30 percent of the energy and ran in a third of the time of the full-sized model.
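The pruning idea itself is easy to demonstrate. The toy example below zeroes out the 70 percent of weights with the smallest magnitudes in a random matrix and compares the dense and sparse outputs; it mimics the concept of magnitude pruning, not the actual Neural Magic and Cerebras recipe, and the recovery training the researchers used to preserve accuracy is omitted entirely.

```python
# Toy magnitude pruning: zero the 70 percent of weights smallest in
# magnitude, so hardware that can skip zeros only touches ~30 percent of
# the parameters. This is a concept sketch, not the published method.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(512, 512))

threshold = np.quantile(np.abs(weights), 0.70)
sparse_weights = np.where(np.abs(weights) >= threshold, weights, 0.0)

x = rng.normal(size=512)
dense_out = weights @ x
sparse_out = sparse_weights @ x   # a zero-skipping chip does ~30% of the work

kept = np.count_nonzero(sparse_weights) / sparse_weights.size
drift = np.linalg.norm(dense_out - sparse_out) / np.linalg.norm(dense_out)
print(f"weights kept: {kept:.0%}, relative output drift: {drift:.2f}")
```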
While all this is impressive, Cerebras is still niche. Nvidia’s more conventional chips remain firmly in control of the market. At least for now, that appears unlikely to change. Companies have invested heavily in expertise and infrastructure built around Nvidia.
But wafer-scale chips may continue to prove themselves in niche, but still crucial, applications in research. And the approach may become more common overall. The ability to make wafer-scale chips is only now being perfected. In a hint at what’s to come for the field as a whole, the biggest chipmaker in the world, TSMC, recently said it’s building out its wafer-scale capabilities. This could make the chips more common and capable.
For their part, the team behind the molecular modeling work say wafer-scale’s influence could be more dramatic. Like GPUs before them, adding wafer-scale chips to the supercomputing mix could yield some formidable machines in the future.
“Future work will focus on extending the strong-scaling efficiency demonstrated here to facility-level deployments, potentially leading to an even greater paradigm shift in the Top500 supercomputer list than that introduced by the GPU revolution,” the team wrote in their paper.
Image Credit: Cerebras
While there has been significant progress in building ever larger quantum processors, the technology is still light years from the kind of scale seen in conventional computer chips.
The inherent fragility of most qubit technologies combined with the complex control systems required to manipulate them mean that leading quantum computers based on superconducting qubits have only just crossed the 1,000-qubit mark.
A new platform designed by engineers at MIT and the MITRE Corporation could present a more scalable solution though. In a recent paper in Nature, they incorporated more than 4,000 qubits made from tiny defects in diamonds onto an integrated circuit, which was used to control them. In the future, several of these so-called “quantum systems-on-a-chip” could be connected using optical networking to create large-scale quantum computers, the researchers say.
“We will need a large number of qubits, and great control over them, to really leverage the power of a quantum system and make it useful,” lead author Linsen Li from MIT said in a press release. “We are proposing a brand-new architecture and a fabrication technology that can support the scalability requirements of a hardware system for a quantum computer.”
Defects in diamonds known as color centers are promising qubit candidates because they hold their quantum states for much longer than competing technologies and can be entangled with distant qubits using light signals. What’s more, they’re solid-state systems compatible with conventional electronics manufacturing.
One of the main downsides is that diamond color centers are not uniform. Information is stored in a quantum property known as “spin,” but scientists use optical signals to manipulate or read the qubits. The frequency of light each color center uses can vary significantly. In one sense, this is beneficial because they can be individually addressed, but it also makes controlling large numbers of them challenging.
The researchers got around this by integrating their qubits on top of a chip that can apply voltages to them. They can then use these voltages to tune the qubits’ frequencies. This makes it possible to tune all 4,000 to the same frequency and allows every qubit to be connected to every other one.
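Conceptually, the calibration is a per-qubit feedback problem: measure each color center's natural optical frequency, then apply whatever voltage shifts it onto a shared target. The sketch below assumes a purely linear tuning coefficient, and the target frequency, fabrication spread, and coefficient are made-up numbers, so it illustrates the bookkeeping rather than the device physics.

```python
# Illustrative voltage tuning: each qubit gets the DC voltage that shifts
# its natural frequency to a shared target. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
N_QUBITS = 4096
TARGET_GHZ = 406_700.0                                   # shared target (illustrative)
natural_ghz = TARGET_GHZ + rng.normal(0, 5.0, N_QUBITS)  # fabrication spread (illustrative)
GHZ_PER_VOLT = 1.0                                       # hypothetical linear tuning coefficient

tuning_volts = (TARGET_GHZ - natural_ghz) / GHZ_PER_VOLT
tuned_ghz = natural_ghz + GHZ_PER_VOLT * tuning_volts

print(f"frequency spread before tuning: {natural_ghz.std():.2f} GHz")
print(f"frequency spread after tuning:  {tuned_ghz.std():.2e} GHz")
```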
“The conventional assumption in the field is that the inhomogeneity of the diamond color center is a drawback,” MIT’s Dirk Englund said in the press release. “However, we turn this challenge into an advantage by embracing the diversity of the artificial atoms: Each atom has its own spectral frequency. This allows us to communicate with individual atoms by voltage tuning them into resonance with a laser, much like tuning the dial on a tiny radio.”
Key to their breakthrough was a novel fabrication technique allowing the team to create 64 “quantum microchiplets”—small slivers of diamond featuring multiple color centers—which they then slotted into sockets on the integrated circuits.
They say the approach could be applied to other solid-state quantum technologies and predict they’ll ultimately achieve qubit densities comparable to the transistor densities found in conventional electronics.
However, the team has yet to actually use the device to do any computing. They show they can efficiently prepare and measure spin states, but there’s still some way to go before they can run quantum algorithms on the device.
They’re not the only ones assembling large numbers of qubits that can’t do very much yet. Earlier this year researchers from Caltech reported they had made an array of 6,100 “neutral-atom” qubits.
Nonetheless, this highly scalable modular architecture holds considerable promise for getting us closer to the millions of qubits needed to achieve the technology’s true promise.
Image Credit: Sampson Wilcox and Linsen Li, RLE
Of course, building and scaling systems for quantum communications is no easy task. Scientists have been steadily chipping away at the problem for years. A Harvard team recently took another noteworthy step in the right direction. In a paper published this week in Nature, the team says they’ve sent entangled photons between two quantum memory nodes 22 miles (35 kilometers) apart on existing fiber optic infrastructure under the busy streets of Boston.
“Showing that quantum network nodes can be entangled in the real-world environment of a very busy urban area is an important step toward practical networking between quantum computers,” Mikhail Lukin, who led the project and is a physics professor at Harvard, said in a press release.
One way a quantum network can transmit information is by using entanglement, a quantum property in which two particles, likely photons in this case, are linked so that measuring the state of one reveals the state of the other. If the sender and receiver of information each have one of a pair of entangled photons, they can securely transmit data using them. This means quantum communications will rely on generating enormous numbers of entangled photons and reliably sending them to far-off destinations.
Scientists have sent entangled particles long distances over fiber optic cables before, but to make a quantum internet work, particles will need to travel hundreds or thousands of miles. Because cables tend to absorb photons over such distances, the information will be lost—unless it can be periodically refreshed.
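The scale of that loss is easy to estimate. Standard telecom fiber attenuates light by roughly 0.2 decibels per kilometer at the wavelengths used, so the odds of a single photon surviving fall off exponentially with distance. A quick calculation shows why direct transmission stops being viable long before intercontinental scales:

```python
# Back-of-the-envelope photon survival in optical fiber, assuming the
# commonly quoted ~0.2 dB/km attenuation for standard telecom fiber.
ATTENUATION_DB_PER_KM = 0.2

for km in (35, 100, 500, 1000):
    loss_db = ATTENUATION_DB_PER_KM * km
    survival = 10 ** (-loss_db / 10)
    print(f"{km:5d} km: photon survival probability ~ {survival:.1e}")
```

At 35 kilometers, the distance in the Harvard experiment, a photon still has a reasonable chance of arriving; at 1,000 kilometers the odds are effectively zero, which is why the signal must be refreshed along the way.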
Enter quantum repeaters.
You can think of a repeater as a kind of internet gas station. Information passing through long stretches of fiber optic cables naturally degrades. A repeater refreshes that information at regular intervals, strengthening the signal and maintaining its fidelity. A quantum repeater is the same thing, only it also preserves entanglement.
That scientists have yet to build a quantum repeater is one reason we’re still a ways off from a working quantum internet at scale. Which is where the Harvard study comes in.
The team of researchers from Harvard and Amazon Web Services (AWS) has been working on quantum memory nodes. Each node houses a piece of diamond with an atom-sized hole, or silicon-vacancy center, containing two qubits: one for storage, one for communication. The nodes are basically small quantum computers, operating at near absolute zero, that can receive, record, and transmit quantum information. The Boston experiment, according to the team, covers the longest distance over which anyone has sent information between such devices and is a big step toward a quantum repeater.
“Our experiment really put us in a position where we’re really close to working on a quantum repeater demonstration,” Can Knaut, a Harvard graduate student in Lukin’s lab, told New Scientist.
Next steps include expanding the system to include multiple nodes.
Along those lines, a separate group in China, using a different technique for quantum memory involving clouds of rubidium atoms, recently said they’d linked three nodes 6 miles (10 kilometers) apart. The same group, led by Xiao-Hui Bao at the University of Science and Technology of China, had previously entangled memory nodes 13.6 miles (22 kilometers) apart.
It’ll take a lot more work to make the technology practical. Researchers need to increase the rate at which their machines entangle photons, for example. But as each new piece falls into place, the prospect of unhackable communications gets a bit closer.