Edd Gent, Author at Singularity Hub
https://singularityhub.com/author/egent/

Neuralink Rival’s Biohybrid Implant Connects to the Brain With Living Neurons
https://singularityhub.com/2024/12/19/neuralink-rival-says-its-biohybrid-implant-connects-to-the-brain-with-living-neurons/
Thu, 19 Dec 2024 15:00:22 +0000

Brain implants have improved dramatically in recent years, but they’re still invasive and unreliable. A new kind of brain-machine interface using living neurons to form connections could be the future.

While companies like Neuralink have recently provided some flashy demos of what could be achieved by hooking brains up to computers, the technology still has serious limitations preventing wider use.

Non-invasive approaches like electroencephalograms (EEGs) provide only coarse readings of neural signals, limiting their functionality. Directly implanting electrodes in the brain can provide a much clearer connection, but such risky medical procedures are hard to justify for all but the most serious conditions.

California-based startup Science Corporation thinks an implant that uses living neurons to connect to the brain could better balance safety and precision. In recent non-peer-reviewed research posted on the preprint server bioRxiv, the group showed a prototype device could connect with the brains of mice and even let them detect simple light signals.

“The principal advantages of a biohybrid implant are that it can dramatically change the scaling laws of how many neurons you can interface with versus how much damage you do to the brain,” Alan Mardinly, director of biology at Science Corporation, told New Scientist.

The company’s CEO Max Hodak is a former president of Neuralink, and his company also produces a retinal implant using more conventional electronics that can restore vision in some patients. But the company has been experimenting with so-called “biohybrid” approaches, which Hodak thinks could provide a more viable long-term solution for brain-machine interfaces.

“Placing anything into the brain inevitably destroys some amount of brain tissue,” he wrote in a recent blog post. “Destroying 10,000 cells to record from 1,000 might be perfectly justified if you have a serious injury and those thousand neurons create a lot of value—but it really hurts as a scaling characteristic.”

Instead, the company has developed a honeycomb-like structure made of silicon featuring more than 100,000 “microwells”—cylindrical holes roughly 15 micrometers deep. Individual neurons are inserted into each of these microwells, and the array can then be surgically implanted onto the surface of the brain.

The idea is that while the neurons remain housed in the implant, their axons—long strands that carry nerve signals away from the cell body—and their dendrites—the branched structures that form synapses with other cells—will be free to integrate with the host’s brain cells.

To see if the idea works in practice, the researchers implanted the device in mice, using neurons genetically modified to react to light. Three weeks after implantation, they carried out a series of experiments in which they trained the mice to respond whenever a light was shone on the device. The mice were able to detect when this happened, suggesting the light-sensitive neurons had merged with their native brain cells.

While it’s early days, the approach has significant benefits. You can squeeze far more neurons than electrodes into a millimeter-scale chip, and each of those neurons can form many connections. That means the potential bandwidth of a biohybrid device could be much higher than that of a conventional neural implant. The approach is also much less damaging to the patient’s brain.

However, the lifetime of these kinds of devices could be a concern—after 21 days, only 50 percent of the neurons had survived. And the company needs to find a way to ensure the neurons don’t elicit a negative immune response in the patient.

If the approach works though, it could be an elegant and potentially safer way to merge man and machine.

Image Credit: Science Corporation

Google’s Latest Quantum Computing Breakthrough Shows Practical Machines Are Within Reach
https://singularityhub.com/2024/12/12/googles-latest-quantum-computing-breakthrough-shows-practical-machines-are-within-reach/
Thu, 12 Dec 2024 21:38:28 +0000

One of the biggest barriers to large-scale quantum computing is the error-prone nature of the technology. This week, Google announced a major breakthrough in quantum error correction, which could lead to quantum computers capable of tackling real-world problems.

Quantum computing promises to solve problems that are beyond classical computers by harnessing the strange effects of quantum mechanics. But to do so we’ll need processors made up of hundreds of thousands, if not millions, of qubits (the quantum equivalent of bits).

Having only just crossed the 1,000-qubit mark, today’s devices are a long way off, but more importantly, their qubits are incredibly unreliable. The devices are highly susceptible to errors, which can derail any attempt to carry out calculations long before an algorithm has run its course.

That’s why error correction has been a major focus for quantum computing companies in recent years. Now, Google’s new Willow quantum processor, unveiled Monday, has crossed a critical threshold suggesting that as the company’s devices get larger, their ability to suppress errors will improve exponentially.

“This is the most convincing prototype for a scalable logical qubit built to date,” Hartmut Neven, founder and lead of Google Quantum AI, wrote in a blog post. “It’s a strong sign that useful, very large quantum computers can indeed be built.”

Quantum error-correction schemes typically work by spreading the information needed to carry out calculations across multiple qubits. This introduces redundancy to the systems, so that even if one of the underlying qubits experiences an error, the information can be recovered. Using this approach, many “physical qubits” can be combined to create a single “logical qubit.”

In general, the more physical qubits you use to create each logical qubit, the more resistant it is to errors. But this is only true if the error rate of the individual qubits is below a certain threshold. Otherwise, the increased chance of an error from adding more faulty qubits outweighs the benefits of redundancy.

While other groups have demonstrated error correction that produces modest accuracy improvements, Google’s results are definitive. In a series of experiments reported in Nature, they encoded logical qubits into increasingly large arrays—starting with a three-by-three grid—and found that each time they increased the size the error rate halved. Crucially, the team found that the logical qubits they created lasted more than twice as long as the physical qubits that make them up.
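A rough way to see why crossing the threshold matters: if each step up in code distance cuts the logical error rate by a roughly constant factor, as Google reports, the error rate falls exponentially as the code grows. Here is a minimal sketch of that arithmetic in Python; the starting error rate is an illustrative placeholder, not Willow's measured figure.

```python
# Illustrative sketch of exponential error suppression below the threshold.
# Assumption: the logical error rate drops by a constant factor per increase
# in code distance (Google reports roughly 2x per step for Willow). The base
# rate below is a placeholder, not a measured value.

def logical_error_rate(base_rate: float, suppression_factor: float, steps: int) -> float:
    """Logical error rate after `steps` increases in code distance."""
    return base_rate / (suppression_factor ** steps)

base_rate = 3e-3          # hypothetical error rate for the smallest (3x3) code
suppression_factor = 2.0  # error rate roughly halves per distance step

for steps in range(5):
    distance = 3 + 2 * steps
    data_qubits = distance ** 2  # data qubits in a distance-d surface code grid
    rate = logical_error_rate(base_rate, suppression_factor, steps)
    print(f"distance {distance:2d} (~{data_qubits:3d} data qubits): logical error rate ~{rate:.1e}")
```

The point of the exercise is the trend rather than the exact numbers: below the threshold, adding qubits buys exponentially better reliability, which is what makes scaling up worthwhile.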

“The more qubits we use in Willow, the more we reduce errors, and the more quantum the system becomes,” wrote Neven.

This was made possible by significant improvements in the underlying superconducting qubit technology Google uses to build its processors.  In the company’s previous Sycamore processor, the average operating lifetime of each physical qubit was roughly 20 microseconds. But thanks to new fabrication techniques and circuit optimizations, Willow’s qubits have more than tripled this to 68 microseconds.

As well as showing off the chip’s error-correction prowess, the company’s researchers also demonstrated its speed. They carried out a computation in under five minutes that would take the world’s second-fastest supercomputer, Frontier, 10 septillion years to complete. However, the benchmark they used is a contrived one with little practical use: The quantum computer simply has to execute random circuits that serve no useful purpose, and the classical computer then has to try to simulate them.

The big test for companies like Google is to go from such proofs of concept to solving commercially relevant problems. The new error-correction result is a big step in the right direction, but there’s still a long way to go.

Julian Kelly, who leads the company’s quantum hardware division, told Nature that solving practical challenges will likely require error rates of around one per ten million steps. Achieving that will necessitate logical qubits made of roughly 1,000 physical qubits each, though breakthroughs in error-correction schemes could bring this down by several hundred qubits.

More importantly, Google’s demonstration simply involved storing information in its logical qubits rather than using them to carry out calculations. Speaking to MIT Technology Review in September, when a preprint of the research was posted to arXiv, Kenneth Brown from Duke University noted that carrying out practical calculations would likely require a quantum computer to perform roughly a billion logical operations.

So, despite the impressive results, there’s still a long road ahead to large-scale quantum computers that can do anything useful. However, Google appears to have reached an important inflection point that suggests this vision is now within reach.

Image Credit: Google

Automated Cyborg Cockroach Factory Could Churn Out a Bug a Minute for Search and Rescue
https://singularityhub.com/2024/12/05/automated-cyborg-cockroach-factory-could-churn-out-a-bug-a-minute-for-search-and-rescue/
Thu, 05 Dec 2024 18:19:08 +0000

Envisioning armies of electronically controllable insects is probably nightmare fuel for most people. But scientists think they could help rescue workers scour challenging and hazardous terrain. An automated cyborg cockroach factory could help bring the idea to life.

The merger of living creatures with machines is a staple of science fiction, but it’s also a serious line of research for academics. Several groups have implanted electronics into moths, beetles, and cockroaches that allow simple control of the insects.

However, building these cyborgs is tricky as it takes considerable dexterity and patience to surgically implant electrodes in their delicate bodies. This means that creating enough for most practical applications is simply too time-consuming.

To overcome this obstacle, researchers at Nanyang Technological University in Singapore have automated the process, using a robotic arm with computer vision to install electrodes and tiny backpacks full of electronics on Madagascar hissing cockroaches. The approach cuts the time required to attach the equipment from roughly half an hour to just over a minute.

“In the future, factories for insect-computer hybrid robot[s] could be built to satisfy the needs for fast preparation and application of the hybrid robots,” the researchers write in a non-peer-reviewed paper on arXiv.

“Different sensors could be added to the backpack to develop applications on the inspection and search missions based on the requirements.”

Cyborg insects could be a promising alternative to conventional robots thanks to their small size, ability to operate for hours on little food, and their adaptability to new environments. As well as helping with search and rescue operations, the researchers suggest that swarms of these robot bugs could be used to inspect factories.

The researchers had already shown that signals from electrodes implanted into cockroach abdomens could be used to control the direction of travel and get them to slow down and even stop. But installing these electrodes and a small backpack with control electronics required painstaking work from a trained researcher.

That kind of approach makes it difficult to scale up to the hundreds or even thousands of insects required for practically useful swarms. So, the team developed an automated system that could install the electronics on a cockroach with minimal human involvement.

First, the researchers anesthetized the cockroaches by exposing them to carbon dioxide for 10 minutes. They then placed the bugs on a platform where a pair of rods powered by a motor pressed down on two segments of their hard exoskeletons to expose a soft membrane just behind the head.

A computer vision system then identified where to implant the electrodes and used this information to guide a robotic arm carrying the electronic backpack. Electrodes in place, the arm pressed the backpack down until its mounting mechanism hooked into another section of the insect’s body. The arm then released the backpack, and the rods retracted to free the cyborg bug.

The entire assembly process takes just 68 seconds, and the resulting cockroaches are just as controllable as ones made manually, the researchers found. A four-bug team was able to cover 80 percent of a 20-square-foot outdoor test environment filled with obstacles in about 10 minutes.

Fabian Steinbeck at Bielefeld University in Germany told New Scientist that using these cyborg bugs for search and rescue might be tricky, as they currently have to be controlled remotely. Getting a signal in collapsed buildings and similar challenging terrain would be difficult, and we don’t yet have the technology to get them to navigate autonomously.

Rapid improvements in both AI and communication technologies could soon change that though. So, it may not be too far-fetched to imagine swarms of robot bugs coming to your rescue in the near future.

Image Credit: Erik Karits from Pixabay

OpenAI’s GPT-4o Makes AI Clones of Real People With Surprising Ease
https://singularityhub.com/2024/11/29/openais-gpt-4o-makes-ai-clones-of-real-people-with-surprising-ease/
Fri, 29 Nov 2024 15:00:22 +0000

AI has become uncannily good at aping human conversational capabilities. New research suggests its powers of mimicry go a lot further, making it possible to replicate specific people’s personalities.

Humans are complicated. Our beliefs, character traits, and the way we approach decisions are products of both nature and nurture, built up over decades and shaped by our distinctive life experiences.

But it appears we might not be as unique as we think. A study led by researchers at Stanford University has discovered that all it takes is a two-hour interview for an AI model to predict people’s responses to a battery of questionnaires, personality tests, and thought experiments with an accuracy of 85 percent.

While the idea of cloning people’s personalities might seem creepy, the researchers say the approach could become a powerful tool for social scientists and politicians looking to simulate responses to different policy choices.

“What we have the opportunity to do now is create models of individuals that are actually truly high-fidelity,” Stanford’s Joon Sung Park, who led the research, told New Scientist. “We can build an agent of a person that captures a lot of their complexities and idiosyncratic nature.”

AI wasn’t only used to create virtual replicas of the study participants; it also helped gather the necessary training data. The researchers got a voice-enabled version of OpenAI’s GPT-4o to interview people using a script from the American Voices Project—a social science initiative aimed at gathering responses from American families on a wide range of issues.

As well as asking preset questions, the researchers also prompted the model to ask follow-up questions based on how people responded. The model interviewed 1,052 people across the US for two hours and produced transcripts for each individual.

Using this data, the researchers created GPT-4o-powered AI agents to answer questions in the same way the human participant would. Every time an agent fielded a question, the entire interview transcript was included alongside the query, and the model was told to imitate the participant.
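A minimal sketch of what that transcript-conditioned setup might look like; the prompt wording, survey question, and transcript snippet are illustrative stand-ins rather than the study's actual materials, and the OpenAI client call simply mirrors the GPT-4o usage described above.

```python
# Sketch: answer a survey question "in character" by conditioning the model on
# a participant's full interview transcript. All prompt text here is hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

interview_transcript = (
    "Interviewer: Tell me about where you grew up...\n"
    "Participant: I grew up in a small town and..."  # placeholder transcript
)
survey_question = "Should the voting age be lowered to 16? Answer yes or no and explain briefly."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "Answer as the person interviewed below, based only on their transcript.\n\n"
            + interview_transcript,
        },
        {"role": "user", "content": survey_question},
    ],
)
print(response.choices[0].message.content)
```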

To evaluate the approach, the researchers had the agents and human participants go head-to-head on a range of tests. These included the General Social Survey, which measures social attitudes to various issues; a test designed to judge how people score on the Big Five personality traits; several games that test economic decision making; and a handful of social science experiments.

Humans often respond quite differently to these kinds of tests at different times, which would throw off comparisons to the AI models. To control for this, the researchers asked the humans to complete the test twice, two weeks apart, so they could judge how consistent participants were.

When the team compared responses from the AI models against the first round of human responses, the agents were roughly 69 percent accurate. But taking into account how the humans’ responses varied between sessions, the researchers found the models hit an accuracy of 85 percent.
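To make the comparison concrete with an illustrative figure (the article doesn't quote the humans' exact consistency rate): if participants matched their own earlier answers roughly 81 percent of the time, a raw agent accuracy of 69 percent works out to about 0.69 / 0.81 ≈ 0.85, or 85 percent of the ceiling set by human self-consistency.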

Hassaan Raza, the CEO of Tavus, a company that creates “digital twins” of customers, told MIT Technology Review that the biggest surprise from the study was how little data it took to create faithful copies of real people. Tavus normally needs a trove of emails and other information to create their AI clones.

“What was really cool here is that they show you might not need that much information,” he said. “How about you just talk to an AI interviewer for 30 minutes today, 30 minutes tomorrow? And then we use that to construct this digital twin of you.”

Creating realistic AI replicas of humans could prove a powerful tool for policymaking, Richard Whittle at the University of Salford, UK, told New Scientist, as AI focus groups could be much cheaper and quicker than ones made up of humans.

But it’s not hard to see how the same technology could be put to nefarious uses. Deepfake video has already been used to pose as a senior executive in an elaborate multi-million-dollar scam. The ability to mimic a target’s entire personality would likely turbocharge such efforts.

Either way, the research suggests that machines that can realistically imitate humans in a wide range of settings are imminent.

Image Credit: Richmond Fajardo on Unsplash

‘Droidspeak’: AI Agents Now Have Their Own Language Thanks to Microsoft
https://singularityhub.com/2024/11/21/droidspeak-ai-agents-now-have-their-own-language-thanks-to-microsoft/
Thu, 21 Nov 2024 21:04:59 +0000

Getting AIs to work together could be a powerful force multiplier for the technology. Now, Microsoft researchers have invented a new language to help their models talk to each other faster and more efficiently.

AI agents are the latest buzzword in Silicon Valley. These are AI models that can carry out complex, multi-step tasks autonomously. But looking further ahead, some see a future where multiple AI agents collaborate to solve even more challenging problems.

Given that these agents are powered by large language models (LLMs), getting them to work together usually relies on agents speaking to each other in natural language, often English. But despite their expressive power, human languages might not be the best medium of communication for machines that fundamentally operate in ones and zeros.

This prompted researchers from Microsoft to develop a new method of communication that lets agents talk to each other in the high-dimensional mathematical language underpinning LLMs. They’ve named the new approach Droidspeak—a reference to the beep-and-whistle language used by robots in Star Wars—and in a preprint paper posted to arXiv, the Microsoft team reports it enabled models to communicate 2.78 times faster with little loss of accuracy.

Typically, when AI agents communicate using natural language, they not only share the output of the current step they’re working on, but also the entire conversation history leading up to that point. Receiving agents must process this big chunk of text to understand what the sender is talking about.

This creates considerable computational overhead, which grows rapidly if agents engage in a repeated back-and-forth. Such exchanges can quickly become the biggest contributor to communication delays, say the researchers, limiting the scalability and responsiveness of multi-agent systems.

To break the bottleneck, the researchers devised a way for models to directly share the data created in the computational steps preceding language generation. In principle, the receiving model would use this directly rather than processing language and then creating its own high-level mathematical representations.

However, transferring the data between models isn’t simple. Different models represent language in very different ways, so the researchers focused on communication between versions of the same underlying LLM.

Even then, they had to be smart about what kind of data to share. Some data can be reused directly by the receiving model, while other data needs to be recomputed. The team devised a way of working this out automatically to squeeze the biggest computational savings from the approach.
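The paper's exact mechanism isn't spelled out here, but one natural reading is that agents hand off something like the transformer's key-value cache, so the receiver doesn't re-read the shared context from scratch. Below is a rough sketch of that idea between two copies of the same model, which is the setting the researchers describe; the model choice and the framing of the cache as the shared data are assumptions for illustration.

```python
# Sketch: reuse one agent's key-value cache in another instance of the same model,
# so the receiver skips reprocessing the shared context. Model name is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # stand-in for the shared base LLM; both agents must use the same model
tok = AutoTokenizer.from_pretrained(name)
sender = AutoModelForCausalLM.from_pretrained(name)
receiver = AutoModelForCausalLM.from_pretrained(name)

# The sender processes the long shared context once and keeps the cache.
context = "Conversation history shared between the agents goes here..."
ctx_ids = tok(context, return_tensors="pt").input_ids
with torch.no_grad():
    out = sender(input_ids=ctx_ids, use_cache=True)
kv_cache = out.past_key_values  # the intermediate data to transfer

# The receiver starts from the transferred cache and only processes the new tokens.
query_ids = tok(" What should we do next?", return_tensors="pt").input_ids
with torch.no_grad():
    out2 = receiver(input_ids=query_ids, past_key_values=kv_cache, use_cache=True)
next_token_id = out2.logits[:, -1].argmax(-1)
print(tok.decode(next_token_id))
```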

Philip Feldman at the University of Maryland, Baltimore County told New Scientist that the resulting communication speed-ups could help multi-agent systems tackle bigger, more complex problems than possible using natural language.

But the researchers say there’s still plenty of room for improvement. For a start, it would be helpful if models of different sizes and configurations could communicate. And they could squeeze out even bigger computational savings by compressing the intermediate representations before transferring them between models.

However, it seems likely this is just the first step towards a future in which the diversity of machine languages rivals that of human ones.

Image Credit: Shawn Suttle from Pixabay

MIT’s New Robot Dog Learned to Walk and Climb in a Simulation Whipped Up by Generative AI
https://singularityhub.com/2024/11/15/mits-new-robot-dog-learned-to-walk-and-climb-in-a-simulation-whipped-up-by-generative-ai/
Fri, 15 Nov 2024 23:09:02 +0000

A big challenge when training AI models to control robots is gathering enough realistic data. Now, researchers at MIT have shown they can train a robot dog using 100 percent synthetic data.

Traditionally, robots have been hand-coded to perform particular tasks, but this approach results in brittle systems that struggle to cope with the uncertainty of the real world. Machine learning approaches that train robots on real-world examples promise to create more flexible machines, but gathering enough training data is a significant challenge.

One potential workaround is to train robots using computer simulations of the real world, which makes it far simpler to set up novel tasks or environments for them. But this approach is bedeviled by the “sim-to-real gap”—these virtual environments are still poor replicas of the real world and skills learned inside them often don’t translate.

Now, MIT CSAIL researchers have found a way to combine simulations and generative AI to enable a robot, trained on zero real-world data, to tackle a host of challenging locomotion tasks in the physical world.

“One of the main challenges in sim-to-real transfer for robotics is achieving visual realism in simulated environments,” Shuran Song from Stanford University, who wasn’t involved in the research, said in a press release from MIT.

“The LucidSim framework provides an elegant solution by using generative models to create diverse, highly realistic visual data for any simulation. This work could significantly accelerate the deployment of robots trained in virtual environments to real-world tasks.”

Leading simulators used to train robots today can realistically reproduce the kind of physics robots are likely to encounter. But they are not so good at recreating the diverse environments, textures, and lighting conditions found in the real world. This means robots relying on visual perception often struggle in less controlled environments.

To get around this, the MIT researchers used text-to-image generators to create realistic scenes and combined these with a popular simulator called MuJoCo to map geometric and physics data onto the images. To increase the diversity of images, the team also used ChatGPT to create thousands of prompts for the image generator covering a huge range of environments.

After generating these realistic environmental images, the researchers converted them into short videos from a robot’s perspective using another system they developed called Dreams in Motion. This computes how each pixel in the image would shift as the robot moves through an environment, creating multiple frames from a single image.

The researchers dubbed this data-generation pipeline LucidSim and used it to train an AI model to control a quadruped robot using just visual input. The robot learned a series of locomotion tasks, including going up and down stairs, climbing boxes, and chasing a soccer ball.

The training process was split into two stages. First, the team trained their model on data generated by an expert AI system that had access to detailed terrain information as it attempted the same tasks. This gave the model enough understanding of the tasks to attempt them in the LucidSim environments, generating more training data in the process. They then retrained the model on the combined data to create the final robotic control policy.
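In outline, this resembles a standard two-stage imitation pipeline: clone the privileged expert first, then retrain on the expert data plus rollouts from the generated environments. A bare-bones sketch follows; the network, dummy tensors, and loss are placeholders, since the real system maps camera images to locomotion commands.

```python
# Bare-bones sketch of the two-stage training described above (placeholder data).
import torch
from torch import nn

policy = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 12))  # obs -> action
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train(dataset, epochs=10):
    for _ in range(epochs):
        for obs, expert_action in dataset:
            optimizer.zero_grad()
            loss = loss_fn(policy(obs), expert_action)
            loss.backward()
            optimizer.step()

# Stage 1: clone the expert policy that had access to detailed terrain information.
expert_data = [(torch.randn(8, 64), torch.randn(8, 12)) for _ in range(50)]
train(expert_data)

# Stage 2: retrain on the expert data combined with rollouts gathered in the
# LucidSim-generated environments using the stage-1 policy.
lucidsim_data = [(torch.randn(8, 64), torch.randn(8, 12)) for _ in range(50)]
train(expert_data + lucidsim_data)
```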

The approach matched or outperformed the expert AI system on four out of the five tasks in real-world tests, despite relying on just visual input. And on all the tasks, it significantly outperformed a model trained using “domain randomization”—a leading simulation approach that increases data diversity by applying random colors and patterns to objects in the environment.

The researchers told MIT Technology Review their next goal is to train a humanoid robot on purely synthetic data generated by LucidSim. They also hope to use the approach to improve the training of robotic arms on tasks requiring dexterity.

Given the insatiable appetite for robot training data, methods like this that can provide high-quality synthetic alternatives are likely to become increasingly important in the coming years.

Image Credit: MIT CSAIL

Solar-Powered ‘Planimal’ Cells? Chloroplasts in Hamster Cells Make Food From Light
https://singularityhub.com/2024/11/08/solar-powered-planimal-cells-chloroplasts-in-hamster-cells-make-food-from-light/
Fri, 08 Nov 2024 17:30:57 +0000

The ability of plants to convert sunlight into food is an enviable superpower. Now, researchers have shown they can get animal cells to do the same thing.

Photosynthesis in plants and algae is performed by tiny organelles known as chloroplasts, which convert sunlight into oxygen and chemical energy. While the origins of these structures are hazy, scientists believe they may have been photosynthetic bacteria absorbed by primordial cells.

Our ancestors weren’t so lucky, but now researchers from the University of Tokyo have managed to rewrite evolutionary history. In a recent paper, the team reported they had successfully implanted chloroplasts into hamster cells where they generated energy for at least two days via the photosynthetic electron transport process.

“As far as we know, this is the first reported detection of photosynthetic electron transport in chloroplasts implanted in animal cells,” professor Sachihiro Matsunaga said in a press release.

“We thought that the chloroplasts would be digested by the animal cells within hours after being introduced. However, what we found was that they continued to function for up to two days, and that the electron transport of photosynthetic activity occurred.”

Some animals have already managed to gain the benefits of photosynthesis—notably giant clams, which host algae in a symbiotic relationship. And it’s not the first time people have tried adding photosynthetic abilities into different kinds of cells. Previous studies had managed to make a kind of chimera between photosynthetic cyanobacteria and yeast cells.

But transplanting chloroplasts into animal cells is a bigger challenge. One of the major hurdles the researchers faced is that most algal chloroplasts stop working at 37 degrees Celsius (98.6 degrees Fahrenheit), the temperature at which animal cells need to be cultured.

This prompted them to pick chloroplasts from a type of algae called Cyanidioschyzon merolae, which lives in highly acidic and volcanic hot springs. While it prefers temperatures about 42 degrees Celsius (107.6 degrees Fahrenheit), it remains active at much lower temperatures.

After isolating the algae’s chloroplasts and injecting them into hamster cells, the researchers cultured them for several days. During that time, they checked for photosynthetic activity using light pulses and imaged the cells to determine the location and structure of the chloroplasts.

They discovered the organelles were still producing energy after two days. They even found the so-called “planimal” cells were growing faster than regular hamster cells, suggesting the chloroplasts were providing a carbon source that acted as fuel for the host cells.

They also found many of the chloroplasts had migrated to surround the cells’ nuclei, and that mitochondria, the organelles that convert carbohydrates into energy the cell can use, had gathered around the chloroplasts. The team suggests there could be some kind of chemical exchange between these subcellular structures, though future studies will be needed to confirm this.

After two days, however, the chloroplasts started degrading, and by the fourth day, photosynthesis seemed to have stopped. This is probably due to the animal cells digesting the unfamiliar organelles, but the researchers say genetic tweaks to the animal cells could potentially side-step digestion.

While the research might conjure sci-fi visions of humans with green skin surviving on sunlight alone, the team says the most likely applications are in tissue engineering. Lab-grown tissue typically consists of several layers of cells, and it can be hard to get oxygen deep into the tissue.

“By mixing in chloroplast-implanted cells, oxygen could be supplied to the cells through photosynthesis, by light irradiation, thereby improving the conditions inside the tissue to enable growth,” said Matsunaga.

Nonetheless, the research is a breakthrough that rewrites many of our assumptions about life’s possible forms. And while it might be a distant prospect, it opens the tantalizing possibility of one day giving animals the solar-powered capabilities of plants.

Image Credit: R. Aoki, Y. Inui, Y. Okabe et al. 2024/ Proceedings of the Japan Academy, Series B

The US Says Electric Air Taxis Can Finally Take Flight Under New FAA Rules
https://singularityhub.com/2024/10/31/the-us-says-electric-air-taxis-can-finally-take-flight-under-new-faa-rules/
Thu, 31 Oct 2024 19:20:13 +0000

Electric air taxis have seen rapid technological advances in recent years, but the industry has had a regulatory question mark hanging over its head. Now, the US Federal Aviation Administration has published rules governing the operation of this new class of aircraft.

Startups developing electric vertical take-off and landing (eVTOL) aircraft have attracted billions of dollars of investment over the past decade. But an outstanding challenge for these vehicles is they’re hard to classify, often representing a strange hybrid between a drone, light aircraft, and helicopter.

For this reason they’ve fallen into a regulatory gray area in most countries. The murkiness has led to considerable uncertainty about where and how they’ll be permitted to operate in the future, which could have serious implications for the business model of many of these firms.

But now, the FAA has provided some much-needed clarity by publishing the rules governing what the agency calls “powered-lift” aircraft. This is the first time regulators have recognized a new category of aircraft since the 1940s when helicopters first entered the market.

“This final rule provides the necessary framework to allow powered-lift aircraft to safely operate in our airspace,” FAA administrator Mike Whitaker said in a statement.  “Powered-lift aircraft are the first new category of aircraft in nearly 80 years and this historic rule will pave the way for accommodating wide-scale advanced air mobility operations in the future.”

The principal challenge when it comes to regulating air taxis is the novel way they operate. Most leading designs use propellers that rotate up and down, which allows them to take off vertically like a helicopter before operating more like a conventional airplane during cruise.

The agency dealt with this by varying the operational requirements, such as minimum safe altitude, required visibility, and range, depending on the phase of flight. This means that during take-off the vehicles need to adhere to the less stringent requirements placed on helicopters, but when cruising they must conform to the same rules as airplanes. The rules are also performance-based, so exact requirements will depend on the capabilities of the specific vehicle in question.

The new regulations also provide a framework for certifying the initial batch of instructors and training future pilots. Because eVTOLs are a new class of aircraft, there are currently no pilots certified to fly them and therefore no one to train other pilots.

To get around this chicken-and-egg situation, the FAA says it will allow certain pilots employed by eVTOL companies to develop the required experience and training during the test flights required for vehicle certification. These pilots would become the first group of instructors who could then train other instructors at pilot schools and training centers.

The regulations also relax an existing requirement for training aircraft to feature two sets of flight controls. Instead, the agency is allowing pilots to learn in aircraft where the trainer can easily access the controls to intervene, if necessary, or letting pilots train in a simulator to gain enough experience to fly the aircraft solo.

When the agency introduced draft rules last year, the industry criticized them as too strict, according to The Verge. But the agency says it has taken the criticism onboard and thinks the new rules strike a good balance between safety and easing the burden on companies.

Industry leader Joby Aviation welcomed the new rules and, in particular, the provision for training pilots in simulators. “The regulation published today will ensure the US continues to play a global leadership role in the development and adoption of clean flight,” JoeBen Bevirt, founder and CEO of Joby, said in a statement. “Delivering ahead of schedule is a testament to the dedication, coordination and hard work of the rulemaking team.”

In its announcement, the FAA highlighted the technology’s potential for everything from air taxi services to short-haul cargo transport and even air ambulances. With these new rules in place, operators can now start proving out some of those business cases.

Image Credit: Joby

‘Electric Plastic’ Could Merge Technology With the Body in Future Wearables and Implants
https://singularityhub.com/2024/10/25/electric-plastic-could-more-closely-merge-technology-with-the-body-in-future-wearables/
Fri, 25 Oct 2024 20:40:59 +0000

Finding ways to connect the human body to technology could have broad applications in health and entertainment. A new “electric plastic” could make self-powered wearables, real-time neural interfaces, and medical implants that merge with our bodies a reality.

While there has been significant progress in the development of wearable and implantable technology in recent years, most electronic materials are hard, rigid, and feature toxic metals. A variety of approaches for creating “soft electronics” has emerged, but finding ones that are durable, power-efficient, and easy to manufacture is a significant challenge.

Organic ferroelectric materials are promising because they exhibit spontaneous polarization, which means they have a stable electric field pointing in a particular direction. This polarization can be flipped by applying an external electrical field, allowing them to function like a bit in a conventional computer.

The most successful soft ferroelectric is a material called polyvinylidene fluoride (PVDF), which has been used in commercial products like wearable sensors, medical imaging, underwater navigation devices, and soft robots. But PVDF’s electrical properties can break down when exposed to higher temperatures, and it requires high voltages to flip its polarization.

Now, in a paper published in Nature, researchers at Northwestern University have shown that combining the material with short chains of amino acids known as peptides can dramatically reduce power requirements and boost heat tolerance. And the incorporation of biomolecules into the material opens the prospect of directly interfacing electronics with the body.

To create their new “electric plastic” the team used a type of molecule known as a peptide amphiphile. These molecules feature a water-repelling component that helps them self-assemble into complex structures. The researchers connected these peptides to short strands of PVDF and exposed them to water, causing the peptides to cluster together.

This made the strands coalesce into long, flexible ribbons. In testing, the team found the material could withstand temperatures of 110 degrees Celsius, which is roughly 40 degrees higher than previous PVDF materials. Switching the material’s polarization also required significantly lower voltages, despite being made up of 49 percent peptides by weight.

The researchers told Science that as well as being able to store energy or information in the material’s polarization, it’s also biocompatible. This means it could be used in everything from wearable devices that monitor vital signs to flexible implants that can replace pacemakers. The peptides could also be connected to proteins inside cells to record biological activity or even stimulate it.

One challenge is that although PVDF is biocompatible, it can break down into so-called “forever chemicals,” which remain in the environment for centuries and which studies have linked to health and environmental problems. Several other chemicals the researchers used to fabricate their material also fall into this category.

“This advance has enabled a number of attractive properties compared to other organic polymers,” Frank Leibfarth of UNC Chapel Hill told Science. But he pointed out that the researchers had only tested very small amounts of the molecule, and it’s unclear how easy it will be to produce it at larger scales.

If the researchers can extend the approach to larger scales, however, it could bring a host of exciting new possibilities at the interface between our bodies and technology.

Image Credit: Mark Seniw/Center for Regenerative Nanomedicine/Northwestern University

This DeepMind AI Helps Polarized Groups of People Find Common Ground
https://singularityhub.com/2024/10/18/this-deepmind-ai-helps-polarized-groups-of-people-find-common-ground/
Fri, 18 Oct 2024 14:00:00 +0000

In our polarized times, finding ways to get people to agree with each other is more important than ever. New research suggests AI can help people with different views find common ground.

The ability to effectively make collective decisions is crucial for an open and free society. But it is a skill that’s atrophied in recent decades, driven in part by the polarizing effects of technology like social media.

New research from Google DeepMind suggests technology could also present a solution. In a recent paper in Science, the company showed that an AI system using large language models could act as mediator in group discussions and help find points of agreement on contentious issues.

“This research demonstrates the potential of AI to enhance collective deliberation,” wrote the authors. “The AI-mediated approach is time-efficient, fair, scalable, and outperforms human mediators on key dimensions.”

The researchers were inspired by philosopher Jürgen Habermas’ theory of communicative action, which proposes that, under the right conditions, deliberation between rational people will lead to agreement.

They built an AI tool that could summarize and synthesize the views of a small group of humans into a shared statement. The language model was asked to maximize the overall approval rating from the group as a whole. Group members then critiqued the statement, and the model used this to produce a fresh draft—a feedback loop that was repeated multiple times.
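A minimal sketch of that loop is below; the `ask_llm` helper is a hypothetical stand-in for whatever model and prompts DeepMind actually used, and the prompt text is illustrative.

```python
# Sketch of the mediate-critique-revise loop described above.
# `ask_llm` is a hypothetical placeholder; swap in a real chat-model call.
def ask_llm(prompt: str) -> str:
    return "[model response to: " + prompt[:40] + "...]"  # dummy output for the sketch

def mediate(opinions: list[str], rounds: int = 3) -> str:
    statement = ask_llm(
        "Write a group statement that maximizes overall approval across these views:\n"
        + "\n".join(opinions)
    )
    for _ in range(rounds):
        critiques = [
            ask_llm(f"As someone who holds this view: '{op}', critique the statement:\n{statement}")
            for op in opinions
        ]
        statement = ask_llm(
            "Revise the statement to address these critiques while keeping broad approval:\n"
            + statement + "\n\nCritiques:\n" + "\n".join(critiques)
        )
    return statement

print(mediate(["Lower the voting age to 16.", "Keep the voting age at 18."]))
```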

To test the approach, the researchers recruited around 5,000 people in the UK through a crowdsourcing platform and split them into groups of six. They asked these groups to discuss contentious issues like whether the voting age should be lowered to 16. They also trained one group member to write group statements and compared these against the machine-derived ones.

The team found participants preferred the AI summaries 56 percent of the time, suggesting the technology was doing a good job capturing group opinion. The volunteers also gave higher ratings to the machine-written statements and endorsed them more strongly.

More importantly, the researchers determined that after going through the AI mediation process a measure of group agreement increased by about eight percent on average. Participants also reported their view had moved closer to the group opinion after 30 percent of the deliberation rounds.

This suggests the approach was genuinely helping groups find common ground. One of the key attributes of the AI-generated group statements, the authors noted, was that they did a good job incorporating the views of dissenting voices while respecting the majority position.

To really put the approach to the test, the researchers recruited a demographically representative sample of 200 participants in the UK to take part in a virtual “citizens’ assembly,” held over three weekly one-hour sessions. The group deliberated over nine contentious questions, and afterwards, the researchers again found a significant increase in group agreement.

The technology still falls somewhat short of a human mediator, DeepMind’s Michael Henry Tessler told MIT Technology Review. “It doesn’t have the mediation-relevant capacities of fact-checking, staying on topic, or moderating the discourse.”

Nonetheless, Christopher Summerfield, research director at the UK AI Safety Institute, who led the project, told Science the technology was “ready to go” for real-world deployment and could help add some nuance to opinion polling.

But others think that without crucial steps like starting a deliberation with the presentation of expert information and allowing group members to directly discuss the issues, the technology could allow ill-informed and harmful views to make it into the group statements. “I believe in the magic of dialogue under the right design,” James Fishkin, a political scientist at Stanford University, told Science. “But there’s not really much dialogue here.”

While that is certainly a risk, any technology that can help lubricate discussions in today’s polarized world should be welcomed. It might take a few more iterations, but dispassionate AI mediators could be a vital tool for re-establishing some common purpose in the world.

Image Credit: Mohamed Hassan / Pixabay
