Tech Archives - Singularity Hub https://singularityhub.com/tag/technology/ News and Insights on Technology, Science, and the Future from Singularity Group Thu, 31 Oct 2024 19:41:46 +0000

The US Says Electric Air Taxis Can Finally Take Flight Under New FAA Rules https://singularityhub.com/2024/10/31/the-us-says-electric-air-taxis-can-finally-take-flight-under-new-faa-rules/ Thu, 31 Oct 2024 19:20:13 +0000 https://singularityhub.com/?p=159329 Electric air taxis have seen rapid technological advances in recent years, but the industry has had a regulatory question mark hanging over its head. Now, the US Federal Aviation Administration has published rules governing the operation of this new class of aircraft.

Startups developing electric vertical take-off and landing (eVTOL) aircraft have attracted billions of dollars of investment over the past decade. But an outstanding challenge for these vehicles is they’re hard to classify, often representing a strange hybrid between a drone, light aircraft, and helicopter.

For this reason they’ve fallen into a regulatory gray area in most countries. The murkiness has led to considerable uncertainty about where and how they’ll be permitted to operate in the future, which could have serious implications for the business model of many of these firms.

But now, the FAA has provided some much-needed clarity by publishing the rules governing what the agency calls “powered-lift” aircraft. This is the first time regulators have recognized a new category of aircraft since the 1940s when helicopters first entered the market.

“This final rule provides the necessary framework to allow powered-lift aircraft to safely operate in our airspace,” FAA administrator Mike Whitaker said in a statement.  “Powered-lift aircraft are the first new category of aircraft in nearly 80 years and this historic rule will pave the way for accommodating wide-scale advanced air mobility operations in the future.”

The principal challenge when it comes to regulating air taxis is the novel way they operate. Most leading designs use propellers that rotate up and down, which allows them to take off vertically like a helicopter before operating more like a conventional airplane during cruise.

The agency dealt with this by varying the operational requirements, such as minimum safe altitude, required visibility, and range, depending on the phase of flight. This means that during take-off the vehicles need to adhere to the less stringent requirements placed on helicopters, but when cruising they must conform to the same rules as airplanes. The rules are also performance-based, so exact requirements will depend on the capabilities of the specific vehicle in question.
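The phase-dependent structure of the rules can be pictured as a simple lookup: which existing regime applies depends on what the aircraft is doing at that moment. The sketch below is purely illustrative; the regime labels follow the article, but the altitude figures are hypothetical placeholders, not actual FAA thresholds.

```python
# Toy illustration of phase-dependent, performance-based rules.
# Regime assignments follow the article; numeric values are invented.
RULES = {
    "takeoff": {"regime": "helicopter", "min_safe_altitude_ft": 0},
    "cruise":  {"regime": "airplane",  "min_safe_altitude_ft": 1000},
    "landing": {"regime": "helicopter", "min_safe_altitude_ft": 0},
}

def applicable_rules(phase: str) -> dict:
    """Return the rule set that applies during a given phase of flight."""
    return RULES[phase]

print(applicable_rules("cruise")["regime"])  # airplane
```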

The new regulations also provide a framework for certifying the initial batch of instructors and training future pilots. Because eVTOLs are a new class of aircraft, there are currently no pilots certified to fly them and therefore no one to train other pilots.

To get around this chicken-and-egg situation, the FAA says it will allow certain pilots employed by eVTOL companies to develop the required experience and training during the test flights required for vehicle certification. These pilots would become the first group of instructors, who could then train other instructors at pilot schools and training centers.

The regulations also relax an existing requirement for training aircraft to feature two sets of flight controls. Instead, the agency is allowing pilots to learn in aircraft where the trainer can easily access the controls to intervene, if necessary, or letting pilots train in a simulator to gain enough experience to fly the aircraft solo.

When the agency introduced draft rules last year, the industry criticized them as too strict, according to The Verge. But the agency says it has taken the criticism on board and believes the new rules strike a good balance between safety and easing the burden on companies.

Industry leader Joby Aviation welcomed the new rules and, in particular, the provision for training pilots in simulators. “The regulation published today will ensure the US continues to play a global leadership role in the development and adoption of clean flight,” JoeBen Bevirt, founder and CEO of Joby, said in a statement. “Delivering ahead of schedule is a testament to the dedication, coordination and hard work of the rulemaking team.”

In its announcement, the FAA highlighted the technology’s potential for everything from air taxi services to short-haul cargo transport and even air ambulances. With these new rules in place, operators can now start proving out some of those business cases.

Image Credit: Joby

‘Electric Plastic’ Could Merge Technology With the Body in Future Wearables and Implants https://singularityhub.com/2024/10/25/electric-plastic-could-more-closely-merge-technology-with-the-body-in-future-wearables/ Fri, 25 Oct 2024 20:40:59 +0000 https://singularityhub.com/?p=159284 Finding ways to connect the human body to technology could have broad applications in health and entertainment. A new “electric plastic” could make self-powered wearables, real-time neural interfaces, and medical implants that merge with our bodies a reality.

While there has been significant progress in the development of wearable and implantable technology in recent years, most electronic materials are hard, rigid, and feature toxic metals. A variety of approaches for creating “soft electronics” have emerged, but finding ones that are durable, power-efficient, and easy to manufacture is a significant challenge.

Organic ferroelectric materials are promising because they exhibit spontaneous polarization, which means they have a stable electric field pointing in a particular direction. This polarization can be flipped by applying an external electrical field, allowing them to function like a bit in a conventional computer.
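The bit-like behavior described above can be sketched in a few lines: a domain stores a value in the sign of its polarization, which stays stable until an opposing field above some switching threshold flips it. This is a toy model with made-up numbers, not a simulation of any real material.

```python
# Toy model: a ferroelectric domain stores a bit in the sign of its
# polarization, which flips only when an opposing applied field exceeds
# the switching threshold. All values are illustrative, not material data.
class FerroelectricBit:
    def __init__(self, coercive_field: float = 1.0):
        self.polarization = +1          # stable until actively switched
        self.coercive_field = coercive_field

    def apply_field(self, field: float):
        # A strong enough field of the opposite sign reverses the polarization.
        opposes = (field > 0) != (self.polarization > 0)
        if abs(field) >= self.coercive_field and opposes:
            self.polarization = 1 if field > 0 else -1

    def read(self) -> int:
        return 1 if self.polarization > 0 else 0

bit = FerroelectricBit()
bit.apply_field(-2.0)  # field above the threshold flips the stored bit
print(bit.read())      # 0
```

The PVDF-peptide result in the article amounts to lowering the `coercive_field` in this picture: less voltage is needed to flip the stored state.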

The most successful soft ferroelectric is a material called polyvinylidene fluoride (PVDF), which has been used in commercial products like wearable sensors, medical imaging, underwater navigation devices, and soft robots. But PVDF’s electrical properties can break down when exposed to higher temperatures, and it requires high voltages to flip its polarization.

Now, in a paper published in Nature, researchers at Northwestern University have shown that combining the material with short chains of amino acids known as peptides can dramatically reduce power requirements and boost heat tolerance. And the incorporation of biomolecules into the material opens the prospect of directly interfacing electronics with the body.

To create their new “electric plastic,” the team used a type of molecule known as a peptide amphiphile. These molecules feature a water-repelling component that helps them self-assemble into complex structures. The researchers connected these peptides to short strands of PVDF and exposed them to water, causing the peptides to cluster together.

This made the strands coalesce into long, flexible ribbons. In testing, the team found the material could withstand temperatures of 110 degrees Celsius, which is roughly 40 degrees higher than previous PVDF materials. Switching the material’s polarization also required significantly lower voltages, despite being made up of 49 percent peptides by weight.

The researchers told Science that as well as being able to store energy or information in the material’s polarization, it’s also biocompatible. This means it could be used in everything from wearable devices that monitor vital signs to flexible implants that can replace pacemakers. The peptides could also be connected to proteins inside cells to record biological activity or even stimulate it.

One challenge is that although PVDF is biocompatible, it can break down into so-called “forever chemicals,” which remain in the environment for centuries and studies have linked to health and environmental problems. Several other chemicals the researchers used to fabricate their material also fall into this category.

“This advance has enabled a number of attractive properties compared to other organic polymers,”  Frank Leibfarth, of UNC Chapel Hill, told Science. But he pointed out that the researchers had only tested very small amounts of the molecule, and it’s unclear how easy it will be to scale them up.

If the researchers can extend the approach to larger scales, however, it could bring a host of exciting new possibilities at the interface between our bodies and technology.

Image Credit: Mark Seniw/Center for Regenerative Nanomedicine/Northwestern University

Meta Just Launched the Largest ‘Open’ AI Model in History. Here’s Why It Matters. https://singularityhub.com/2024/08/02/meta-just-launched-the-largest-open-ai-model-in-history-heres-why-it-matters/ Fri, 02 Aug 2024 20:32:43 +0000 https://singularityhub.com/?p=158155

In the world of artificial intelligence, a battle is underway. On one side are companies that believe in keeping the datasets and algorithms behind their advanced software private and confidential. On the other are companies that believe in allowing the public to see what’s under the hood of their sophisticated AI models.

Think of this as the battle between open- and closed-source AI.

In recent weeks, Meta, the parent company of Facebook, took up the fight for open-source AI in a big way by releasing a new collection of large AI models. These include a model named Llama 3.1 405B, which Meta’s founder and chief executive, Mark Zuckerberg, says is “the first frontier-level open-source AI model.”

For anyone who cares about a future in which everybody can access the benefits of AI, this is good news.

The Danger of Closed-Source AI—and the Promise of Open-Source AI

Closed-source AI refers to models, datasets, and algorithms that are proprietary and kept confidential. Examples include ChatGPT, Google’s Gemini, and Anthropic’s Claude.

Though anyone can use these products, there is no way to find out what dataset and source codes have been used to build the AI model or tool.

While this is a great way for companies to protect their intellectual property and profits, it risks undermining public trust and accountability. Making AI technology closed-source also slows down innovation and makes a company or other users dependent on a single platform for their AI needs. This is because the platform that owns the model controls changes, licensing, and updates.

There are a range of ethical frameworks that seek to improve the fairness, accountability, transparency, privacy, and human oversight of AI. However, these principles are often not fully achieved with closed-source AI due to the inherent lack of transparency and external accountability associated with proprietary systems.

In the case of ChatGPT, its parent company, OpenAI, releases neither the dataset nor code of its latest AI tools to the public. This makes it impossible for regulators to audit it. And while access to the service is free, concerns remain about how users’ data are stored and used for retraining models.

By contrast, the code and dataset behind open-source AI models are available for everyone to see.

This fosters rapid development through community collaboration and enables the involvement of smaller organizations and even individuals in AI development. It also makes a huge difference for small- and medium-size enterprises as the cost of training large AI models is colossal.

Perhaps most importantly, open-source AI allows for scrutiny and identification of potential biases and vulnerabilities.

However, open-source AI does create new risks and ethical concerns.

For example, quality control in open-source products is usually low. Because hackers can also access the code and data, the models are more prone to cyberattacks and can be tailored for malicious purposes, such as retraining the model with data from the dark web.

An Open-Source AI Pioneer

Among all leading AI companies, Meta has emerged as a pioneer of open-source AI. With its new suite of AI models, it is doing what OpenAI promised to do when it launched in December 2015—namely, advancing digital intelligence “in the way that is most likely to benefit humanity as a whole,” as OpenAI said back then.

Llama 3.1 405B is the largest open-source AI model in history. It is what’s known as a large language model, capable of generating human language text in multiple languages. It can be downloaded online but because of its huge size, users will need powerful hardware to run it.

While it does not outperform other models across all metrics, Llama 3.1 405B is considered highly competitive and does perform better than existing closed-source and commercial large language models in certain tasks, such as reasoning and coding tasks.

But the new model is not fully open because Meta hasn’t released the huge dataset used to train it. This is a significant “open” element that is currently missing.

Nonetheless, Meta’s Llama levels the playing field for researchers, small organizations, and startups because it can be leveraged without the immense resources required to train large language models from scratch.

Shaping the Future of AI

To ensure AI is democratized, we need three key pillars:

  • Governance: regulatory and ethical frameworks to ensure AI technology is being developed and used responsibly and ethically
  • Accessibility: affordable computing resources and user-friendly tools to ensure a fair landscape for developers and users
  • Openness: datasets and algorithms to train and build AI tools should be open source to ensure transparency

Achieving these three pillars is a shared responsibility for government, industry, academia and the public. The public can play a vital role by advocating for ethical policies in AI, staying informed about AI developments, using AI responsibly, and supporting open-source AI initiatives.

But several questions remain about open-source AI. How can we balance protecting intellectual property and fostering innovation through open-source AI? How can we minimize ethical concerns around open-source AI? How can we safeguard open-source AI against potential misuse?

Properly addressing these questions will help us create a future where AI is an inclusive tool for all. Will we rise to the challenge and ensure AI serves the greater good? Or will we let it become another nasty tool for exclusion and control? The future is in our hands.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Google DeepMind / Unsplash

This Is What Could Happen if AI Content Is Allowed to Take Over the Internet https://singularityhub.com/2024/07/25/this-is-what-could-happen-if-ai-content-is-allowed-to-take-over-the-internet/ Thu, 25 Jul 2024 21:44:27 +0000 https://singularityhub.com/?p=158074 Generative AI is a data hog.

The algorithms behind chatbots like ChatGPT learn to create human-like content by scraping terabytes of online articles, Reddit posts, TikTok captions, or YouTube comments. They find intricate patterns in the text, then spit out search summaries, articles, images, and other content.

For the models to become more sophisticated, they need to capture new content. But as more people use them to generate text and then post the results online, it’s inevitable that the algorithms will start to learn from their own output, now littered across the internet. That’s a problem.

A study in Nature this week found a text-based generative AI algorithm, when heavily trained on AI-generated content, produces utter nonsense after just a few cycles of training.

“The proliferation of AI-generated content online could be devastating to the models themselves,” wrote Dr. Emily Wenger at Duke University, who was not involved in the study.

Although the study focused on text, the results could also impact multimodal AI models. These models also rely on training data scraped online to produce text, images, or videos.

As the usage of generative AI spreads, the problem will only get worse.

The eventual end could be model collapse, where AI increasingly fed data generated by AI is overwhelmed by noise and produces only incoherent baloney.

Hallucinations or Breakdown?

It’s no secret generative AI often “hallucinates.” Given a prompt, it can spout inaccurate facts or “dream up” categorically untrue answers. Hallucinations could have serious consequences, such as a healthcare AI incorrectly, but authoritatively, identifying a scab as cancer.

Model collapse is a separate phenomenon, where AI trained on its own self-generated data degrades over generations. It’s a bit like genetic inbreeding, where offspring have a greater chance of inheriting diseases. While computer scientists have long been aware of the problem, how and why it happens for large AI models has been a mystery.

In the new study, researchers built a custom large language model and trained it on Wikipedia entries. They then fine-tuned the model nine times using datasets generated from its own output and measured the quality of the AI’s output with a so-called “perplexity score.” True to its name, the higher the score, the more bewildering the generated text.
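Perplexity has a compact definition: it is the exponential of the average negative log-probability the model assigns to each observed token, so a model that predicts its input well scores low and a "bewildered" one scores high. A minimal sketch, with invented probability values standing in for real model outputs:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability the model
    assigned to each observed token. Higher = more 'bewildering' text."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

confident = perplexity([0.9, 0.8, 0.95])  # model predicts its input well
confused = perplexity([0.05, 0.1, 0.02])  # model is surprised by its input
print(confident < confused)  # True
```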

Within just a few cycles, the AI notably deteriorated.

In one example, the team gave it a long prompt about the history of building churches—one that would make most humans’ eyes glaze over. After the first two iterations, the AI spewed out a relatively coherent response discussing revival architecture, with an occasional “@” slipped in. By the fifth generation, however, the text completely shifted away from the original topic to a discussion of language translations.

The output of the ninth and final generation was laughably bizarre:

“architecture. In addition to being home to some of the world’s largest populations of black @-@ tailed jackrabbits, white @-@ tailed jackrabbits, blue @-@ tailed jackrabbits, red @-@ tailed jackrabbits, yellow @-.”

Interestingly, AI trained on self-generated data often ends up producing repetitive phrases, explained the team. Trying to push the AI away from repetition made the AI’s performance even worse. The results held up in multiple tests using different prompts, suggesting it’s a problem inherent to the training procedure, rather than the language of the prompt.

Circular Training

The AI eventually broke down, in part because it gradually “forgot” bits of its training data from generation to generation.

This happens to us too. Our brains eventually wipe away memories. But we experience the world and gather new inputs. “Forgetting” is highly problematic for AI, which can only learn from the internet.

Say an AI “sees” golden retrievers, French bulldogs, and petit basset griffon Vendéens—a far more exotic dog breed—in its original training data. When asked to make a portrait of a dog, the AI would likely skew towards one that looks like a golden retriever because of an abundance of photos online. And if subsequent models are trained on this AI-generated dataset with an overrepresentation of golden retrievers, they eventually “forget” the less popular dog breeds.
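This "forgetting" can be demonstrated with a toy simulation: model each generation of training as resampling, with replacement, from the previous generation's output. Rare categories tend to disappear while the dominant one takes over. The breed counts below are invented for illustration.

```python
import random

random.seed(0)
# Hypothetical training set: one common breed dominates, rare breeds are scarce.
data = ["golden retriever"] * 90 + ["french bulldog"] * 8 + ["petit basset"] * 2

for generation in range(10):
    # Each generation "trains" on the previous model's output, modeled here
    # as sampling with replacement from its own data.
    data = [random.choice(data) for _ in range(len(data))]

# Rare breeds often vanish entirely after a few generations.
print(sorted(set(data)))
```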

“Although a world overpopulated with golden retrievers doesn’t sound too bad, consider how this problem generalizes to the text-generation models,” wrote Wenger.

AI-generated text already skews towards well-known concepts, phrases, and tones over less common ideas and styles of writing. Newer algorithms trained on this data would exacerbate the bias, potentially leading to model collapse.

The problem is also a challenge for AI fairness across the globe. Because AI trained on self-generated data overlooks the “uncommon,” it also fails to gauge the complexity and nuances of our world. The thoughts and beliefs of minority populations could be less represented, especially for those speaking underrepresented languages.

“Ensuring that LLMs [large language models] can model them is essential to obtaining fair predictions—which will become more important as generative AI models become more prevalent in everyday life,” wrote Wenger.

How to fix this? One way is to use watermarks—digital signatures embedded in AI-generated data—to help people detect and potentially remove the data from training datasets. Google, Meta, and OpenAI have all proposed the idea, though it remains to be seen if they can agree on a single protocol. But watermarking is not a panacea: Other companies or people may choose not to watermark AI-generated outputs or, more likely, can’t be bothered.
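The workflow watermarking enables looks roughly like this: tag content at generation time, then filter tagged items out before training. The real proposals embed statistical signals in a model's token choices rather than appending a literal marker; the zero-width tag below is a deliberately crude stand-in used only to show the filtering step.

```python
# Toy sketch of watermark-based dataset filtering. Real schemes embed
# statistical signals in token sampling; this marker is purely illustrative.
MARKER = "\u200b[ai]\u200b"  # hypothetical invisible tag

def watermark(text: str) -> str:
    """Tag model output at generation time."""
    return text + MARKER

def is_ai_generated(text: str) -> bool:
    """Detect the tag when assembling a training corpus."""
    return text.endswith(MARKER)

corpus = [watermark("A summary written by a model."), "A comment typed by a person."]
training_data = [t for t in corpus if not is_ai_generated(t)]
print(training_data)  # only the human-written text survives
```

The scheme's weakness is visible even in the toy: anyone who skips the `watermark` step produces untagged output that passes the filter.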

Another potential solution is to tweak how we train AI models. The team found that adding more human-generated data over generations of training produced a more coherent AI.

All this is not to say model collapse is imminent. The study only looked at a text-generating AI trained on its own output. Whether it would also collapse when trained on data generated by other AI models remains to be seen. And with AI increasingly tapping into images, sounds, and videos, it’s still unclear if the same phenomenon appears in those models too.

But the results suggest there’s a “first-mover” advantage in AI. Companies that scraped the internet earlier—before it was polluted by AI-generated content—have the upper hand.

There’s no denying generative AI is changing the world. But the study suggests models can’t be sustained or grow over time without original output from human minds—even if it’s memes or grammatically-challenged comments. Model collapse is about more than a single company or country.

What’s needed now is community-wide coordination to mark AI-created data, and openly share the information, wrote the team. “Otherwise, it may become increasingly difficult to train newer versions of LLMs [large language models] without access to data that were crawled from the internet before the mass adoption of the technology or direct access to data generated by humans at scale.”

Image Credit: Kadumago / Wikimedia Commons

Joby’s New Hydrogen-Powered Aircraft Can Fly You From San Francisco to San Diego https://singularityhub.com/2024/07/15/jobys-new-hydrogen-powered-aircraft-can-fly-you-from-san-francisco-to-san-diego/ Mon, 15 Jul 2024 14:00:47 +0000 https://singularityhub.com/?p=157964 A new generation of “flying cars” promises to revolutionize urban mobility, but limited battery power holds them back from plying longer routes. A new hydrogen-powered variant from Joby Aviation could soon change that.

Rapid advances in battery technology and electric motors have opened the door to a new class of aircraft known as eVTOLs, which stands for electric vertical takeoff and landing. The companies making the aircraft tout them as a quieter, greener alternative to helicopters.

However, current battery technology means they’re limited to ranges of approximately 150 miles. That’s why they have primarily been envisaged as a new form of urban mobility, allowing quick hops across cities congested with traffic.

Joby is already developing a battery-powered eVTOL that it expects to start commercial operations next year. But this week, the company announced it has created a hydrogen-powered version of the aircraft, which recently completed a 523-mile test flight. The company says this could allow eVTOLs to break into regional travel as well.

“With our battery-electric air taxi set to fundamentally change the way we move around cities, we’re excited to now be building a technology stack that could redefine regional travel using hydrogen-electric aircraft,” JoeBen Bevirt, founder and CEO of Joby, said in a press release.

“Imagine being able to fly from San Francisco to San Diego, Boston to Baltimore, or Nashville to New Orleans without the need to go to an airport and with no emissions except water.”

Joby’s demonstrator is a converted battery-electric aircraft that had already completed 25,000 miles of test flights. It features the same airframe with six electric-motor-powered tilting propellers that allow it to take off vertically like a helicopter but cruise like a light aircraft. Joby says this should significantly speed up the certification process if the company decides to commercialize the technology.

What’s new is the addition of a hydrogen fuel cell system designed by H2FLY, a German startup Joby acquired in 2021, and a liquid hydrogen fuel tank that can store about 40 kilograms of fuel. The fuel cell combines the liquid hydrogen with oxygen from the air to generate the electricity that powers the aircraft’s motors. The H2FLY team used the same underlying technology in a series of demonstration flights with a more conventional aircraft design last year.

The new Joby aircraft will still carry some batteries to provide additional power during takeoff and landing. But hydrogen has a much higher energy density—or specific energy—than batteries, which makes it possible to power the aircraft for significantly longer.

“Hydrogen has one hundred times the specific energy of today’s batteries and three times that of jet fuel,” Bevirt wrote in a blog post. “The result is an electric aircraft that can travel much farther—and carry a greater payload—than is possible not only with any battery cells currently under development, but even with the same mass of jet fuel.”
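A quick back-of-the-envelope check shows how Bevirt's comparison pencils out. The figures below are representative textbook values for specific energy, not Joby's own numbers, and they ignore fuel-cell conversion losses.

```python
# Rough sanity check of the specific-energy comparison, in MJ/kg.
# Ballpark reference values, not Joby's figures.
H2 = 120.0        # hydrogen, lower heating value
JET_FUEL = 43.0   # kerosene-type jet fuel
BATTERY = 0.9     # ~250 Wh/kg lithium-ion cells

print(round(H2 / JET_FUEL, 1))  # ~2.8x jet fuel
print(round(H2 / BATTERY))      # ~133x today's battery cells

# Chemical energy in the demonstrator's ~40 kg tank, before fuel-cell losses:
tank_kg = 40
print(f"{tank_kg * H2:.0f} MJ")  # 4800 MJ
```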

However, switching to hydrogen fuel poses some challenges. For a start, hydrogen requires complicated cooling equipment, which means airports or other landing facilities would need to invest significant amounts in new fueling infrastructure.

“The industry is already scratching its head figuring out how to support battery electric aircraft with charging infrastructure at airports,” Cyrus Sigari, co-founder and managing partner of VC Up.Partners, told TechCrunch. “Adding hydrogen filling stations into that equation will present even more challenges.”

Hydrogen’s green credentials are also somewhat weaker than those of batteries. While it’s possible to generate hydrogen from water using only renewable electricity, at present the vast majority is produced from fossil fuels.

However, efforts are underway to increase the supply of green hydrogen, and the Bipartisan Infrastructure Law passed in 2021 set aside $9.5 billion to help boost these efforts. And if hydrogen-powered flight can piggyback on innovations in eVTOL technology, it could prove a powerful way to curb emissions in one of the world’s most polluting sectors.

Image Credit: Joby

How Teams of AI Agents Working Together Could Unlock the Tech’s True Power https://singularityhub.com/2024/06/28/how-teams-of-ai-agents-working-together-could-unlock-the-techs-true-power/ Fri, 28 Jun 2024 14:00:51 +0000 https://singularityhub.com/?p=157666 If you had to sum up what has made humans such a successful species, it’s teamwork. There’s growing evidence that getting AIs to work together could dramatically improve their capabilities too.

Despite the impressive performance of large language models, companies are still scrambling for ways to put them to good use. Big tech companies are building AI smarts into a wide range of products, but none has yet found the killer application that will spur widespread adoption.

One promising use case garnering attention is the creation of AI agents to carry out tasks autonomously. The main problem is that LLMs remain error-prone, which makes it hard to trust them with complex, multi-step tasks.

But as with humans, it seems two heads are better than one. A growing body of research into “multi-agent systems” shows that getting chatbots to team up can help solve many of the technology’s weaknesses and allow them to tackle tasks out of reach for individual AIs.

The field got a significant boost last October when Microsoft researchers launched a new software library called AutoGen designed to simplify the process of building LLM teams. The package provides all the necessary tools to spin up multiple instances of LLM-powered agents and allow them to communicate with each other by way of natural language.

Since then, researchers have carried out a host of promising demonstrations. 

In a recent article, Wired highlighted several papers presented at a workshop at the International Conference on Learning Representations (ICLR) last month. The research showed that getting agents to collaborate could boost performance on math tasks—something LLMs tend to struggle with—or boost their reasoning and factual accuracy.

In another instance, noted by The Economist, three LLM-powered agents were set the task of defusing bombs in a series of virtual rooms. The AI team performed better than individual agents, and one of the agents even assumed a leadership role, ordering the other two around in a way that improved team efficiency.

Chi Wang, the Microsoft researcher leading the AutoGen project, told The Economist that the approach takes advantage of the fact most jobs can be split up into smaller tasks. Teams of LLMs can tackle these in parallel rather than churning through them sequentially, as an individual AI would have to do.
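The divide-and-conquer pattern Wang describes can be sketched without any LLM at all: a coordinator splits a job into subtasks, worker "agents" handle them concurrently, and the results are merged in order. Frameworks like AutoGen wire real LLMs into these roles; plain functions stand in for them here.

```python
# Toy sketch of the multi-agent pattern: a coordinator fans a job out to
# worker "agents" that run in parallel, then collects their results.
from concurrent.futures import ThreadPoolExecutor

def worker_agent(subtask: str) -> str:
    # Stand-in for an LLM-powered agent completing one piece of the job.
    return f"done: {subtask}"

def coordinator(job: list[str]) -> list[str]:
    # Subtasks run in parallel rather than sequentially,
    # and map() preserves the original ordering of results.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(worker_agent, job))

results = coordinator(["research flights", "compare hotels", "draft itinerary"])
print(results)
```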

So far, setting up multi-agent teams has been a complicated process only really accessible to AI researchers. But earlier this month, the Microsoft team released a new “low-code” interface for building AI teams called AutoGen Studio, which is accessible to non-experts.

The platform allows users to choose from a selection of preset AI agents with different characteristics. Alternatively, they can create their own by selecting which LLM powers the agent, giving it “skills” such as the ability to fetch information from other applications, and even writing short prompts that tell the agent how to behave. 

So far, users of the platform have put AI teams to work on tasks like travel planning, market research, data extraction, and video generation, say the researchers.

The approach does have its limitations though. LLMs are expensive to run, so leaving several of them to natter away to each other for long stretches can quickly become unsustainable. And it’s unclear whether groups of AIs will be more robust to mistakes, or whether they could lead to cascading errors through the entire team.

Lots of work needs to be done on more prosaic challenges too, such as the best way to structure AI teams and how to distribute responsibilities between their members. There’s also the question of how to integrate these AI teams with existing human teams. Still,  pooling AI resources is a promising idea that’s quickly picking up steam.

Image Credit: Mohamed Nohassi / Unsplash

Study Finds Self-Driving Cars Are Actually Safer Than Humans in Many (But Not All) Situations https://singularityhub.com/2024/06/24/study-finds-self-driving-cars-are-actually-safer-than-humans-in-most-situations/ Mon, 24 Jun 2024 14:00:02 +0000 https://singularityhub.com/?p=157684 Autonomous vehicles are understandably held to incredibly high safety standards, but it’s sometimes forgotten that the true baseline is the often dangerous driving of humans. Now, new research shows that self-driving cars were involved in fewer accidents than humans in most scenarios.

One of the main arguments for shifting to autonomous vehicles is the prospect of taking human error out of driving. Given that more than 40,000 people die in car accidents every year in the US, even a modest improvement in safety could make a huge difference.

But self-driving cars have been involved in a number of accidents in recent years that have led to questions over their safety and caused some larger companies like Cruise to scale back their ambitions.

Now though, researchers have analyzed thousands of accident reports from incidents involving both autonomous vehicles and human drivers. Their results, published in Nature Communications, suggest that in most situations autonomous vehicles are actually safer than humans.

The team from the University of Central Florida focused their study on California, where the bulk of autonomous vehicle testing is going on. They gathered 2,100 reports of accidents involving self-driving cars from databases maintained by the National Highway Traffic Safety Administration, the California Department of Motor Vehicles, and news reports.

They then compared them against 35,000 reports of incidents involving human drivers compiled by the California Highway Patrol. The team used an approach called “matched case-control analysis” in which they attempted to find pairs of crashes involving humans and self-driving cars that otherwise had very similar characteristics.

This makes it possible to control for all the other variables that could contribute to a crash and investigate the impact of the “driver” on the likelihood of a crash occurring. The team found 548 such matches, and when they compared the two groups, they found self-driving cars were safer than human drivers in most of the accident scenarios they looked at.
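The pairing idea behind matched case-control analysis can be sketched in a few lines. The toy Python below (with hypothetical field names and made-up records, not the study’s actual code or data) greedily pairs each self-driving-car crash with an unused human-driver crash recorded under identical conditions, so that any difference in outcomes across the pairs reflects the driver rather than the circumstances:

```python
# Sketch of matched case-control pairing: group human-driver crashes by
# their conditions, then pair each AV crash with an unused human crash
# that occurred under identical conditions. Unmatched crashes are dropped.
from collections import defaultdict

def match_pairs(av_crashes, human_crashes,
                keys=("weather", "road_type", "time_of_day")):
    """Greedily pair each AV crash with a human crash sharing the same conditions."""
    buckets = defaultdict(list)
    for crash in human_crashes:
        buckets[tuple(crash[k] for k in keys)].append(crash)
    pairs = []
    for crash in av_crashes:
        bucket = buckets[tuple(crash[k] for k in keys)]
        if bucket:  # only keep AV crashes with an available human counterpart
            pairs.append((crash, bucket.pop()))
    return pairs

av = [{"weather": "rain", "road_type": "urban", "time_of_day": "day"}]
hu = [{"weather": "rain", "road_type": "urban", "time_of_day": "day"},
      {"weather": "clear", "road_type": "rural", "time_of_day": "night"}]

pairs = match_pairs(av, hu)
print(len(pairs))  # → 1: the clear-weather human crash has no AV counterpart
```

With pairs in hand, comparing outcome rates within the matched set controls for the shared conditions; the study’s 548 matched pairs were built from far richer crash characteristics than this sketch uses.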

There are some significant caveats though. The researchers also discovered that autonomous vehicles were over five times more likely to be involved in an accident at dawn or dusk and nearly twice as likely when making a turn.

The former is likely due to limitations in imaging sensors, while J. Christian Gerdes, from Stanford University, told IEEE Spectrum that their trouble with turns is probably due to limited ability to predict the behavior of other drivers.

There were some bright spots for autonomous vehicles, too. They were roughly half as likely to be involved in a rear-end accident and just one-fifth as likely to be involved in a broadside collision.

The researchers also found that the chance of a self-driving vehicle crashing in rain or fog was roughly a third of that for a human driver, which they put down to the vehicles’ reliance on radar sensors that are largely immune to bad weather.

How much can be read into these results is a matter of debate. The authors admit there is limited data on autonomous vehicle crashes, which limits the scope of their findings. George Mason University’s Missy Cummings also told New Scientist that accident reports from self-driving companies are often biased, seeking to pin the blame on human drivers even when the facts don’t support it.

Nonetheless, the study is an important first step in quantifying the potential safety benefits of autonomous vehicle technology and has highlighted some important areas where progress is still needed. Only by taking a clear-eyed look at the numbers can policymakers make sensible decisions about where and when this technology should be deployed.

Image Credit: gibblesmash asdf / Unsplash

Why What We Decide to Name New Technologies Is So Crucial https://singularityhub.com/2024/01/18/why-what-we-decide-to-name-new-technologies-is-so-crucial/ Thu, 18 Jan 2024 15:00:42 +0000 https://singularityhub.com/?p=155506 Back in 2017, my editor published an article titled “The Next Great Computer Interface Is Emerging—But It Doesn’t Have a Name Yet.” Seven years later, which may as well be a hundred in technology years, that headline hasn’t aged a day.

Last week, UploadVR broke the news that Apple won’t allow developers for their upcoming Vision Pro headset to refer to applications as VR, AR, MR, or XR. For the past decade, the industry has variously used terms like virtual reality (VR), augmented reality (AR), mixed reality (MR), and extended reality (XR) to describe technologies that include things like VR headsets. Apple, however, is making it clear that developers should refer to their apps as “spatial” or use the term “spatial computing.” They’re also asking developers not to refer to the device as a headset (whoops). Apple calls it a “spatial computer,” and VR mode is simply “fully immersive.”

It remains to be seen whether Apple will strictly enforce these rules, but the news sparked a colorful range of reactions from industry insiders. Some amusingly questioned what an app like VRChat, one of the most popular platforms in the industry with millions of monthly active users, should do. Others dug into the intersection of the philosophy of language and branding to parse Apple’s broader marketing strategy.

Those who have worked in this area are certainly aware of the longstanding absurdity of relying on an inconsistent patchwork of terms.

While no one company has successfully forced linguistic consensus yet, this is certainly not the first time a company has set out to define this category in the minds of consumers.

In 2017, as Google first started selling VR devices, they attempted to steer the industry toward the term “immersive computing.” Around the same time Microsoft took aim at branding supremacy by fixating on the label “mixed reality.” And everyone will remember that Facebook changed the company’s name in an effort to define the broader industry as “the metaverse.”

The term spatial computing is certainly not an Apple invention. It’s thought to have been first introduced in the modern sense by MIT’s Simon Greenwold in his 2003 thesis paper, and has been in use for much of the past decade. Like many others, I’ve long found the term to be the most useful at capturing the main contribution of these technologies—that they make use of three-dimensional space to develop interfaces that are more intuitive for our nervous systems.

A winding etymological journey for a technology is also not unique to computer interfaces. All new technologies cycle through ever-evolving labels that often start by relating them to familiar concepts. The word “movie” began life as “moving picture” to describe how a collection of still images seemed to “move,” like flipping through a picture book. In the early 1900s, the shorter slang term movie appeared in comic strips and quickly caught on with the public. Before the term “computer” referred to machines, it described a person whose job was to perform mathematical calculations. And the first automobiles were introduced to the public as “horseless carriages,” which should remind us of today’s use of the term “driverless car.”

Scholars of neuroscience, linguistics, and psychology will be especially familiar with the ways in which language—and the use of words—can impact how we relate to the world. When a person hears a word, a rich network of interconnected ideas, images, and associations is activated in the mind. In that sense, words can be thought of as bundles of concepts and a shortcut to making sense of the world.

The challenge with labeling emerging technologies is that they can be so new to our experience that our brains haven’t yet constructed a fixed set of bundled concepts to relate them to.

The word “car,” for example, brings to mind attributes like “four wheels,” “steering wheel,” and “machine used to move people around.” Over time, bundles of associations like these become rooted in the mind as permanent networks of relationships that help us quickly process our environment. But this can also create limitations and lead us to overlook disruptions when the environment has changed. Referring to autonomous driving technology as “driverless cars” might result in someone overlooking a “driverless car” small enough to carry packages on a sidewalk. It’s the same technology, but not one most people would call a car.

This might sound like useless contemplation on the role of semantics, but the words we use have real implications for the business of emerging technologies. In 1980, AT&T hired the consultancy McKinsey to predict how many people would be using mobile phones by the year 2000. Their analysis estimated no more than 900,000 devices by the turn of the century, and because of the advice, AT&T exited the hardware business. Twenty years later, they recognized how unhelpful that advice had been as 900,000 phones were being sold every three days in North America alone.

While in no way defending their work, I hold the opinion that in some ways McKinsey wasn’t wrong. Both AT&T and McKinsey may have been misled by the bundle of concepts the word “mobile phone” would have elicited in the year 1980. At that time, devices were large, as heavy as ten pounds or more, cost thousands of dollars, and had a painfully short battery life. There certainly wasn’t a large market for those phones. A better project for AT&T and McKinsey might have been to explore what the term “mobile phone” would even refer to in 20 years. By then, those devices were practical, compact, and affordable.

A more recent example might be the term “metaverse.” A business operations person focused on digital twins has a very different bundle of associations in their mind when hearing the word metaverse than a marketing person focused on brand activations in virtual worlds like Roblox. I’ve worked with plenty of confused senior leaders who have been pitched very different kinds of projects carrying the label “metaverse,” leading to uncertainty about what the term really means.

As for our as-of-yet-unnamed 3D computing interfaces, it’s still unclear what label will conquer the minds of mainstream consumers. During an interview with Matt Miesnieks, a serial entrepreneur and VC, about his company 6D.ai—which was later sold to Niantic—I asked what we might end up calling this stuff. Six years after that discussion, I’m reminded of his response.

“Probably whatever Apple decides to call it.”

Image Credit: James Yarema / Unsplash

Solar-Powered ‘Smart’ Clothing Could Rapidly Heat or Cool Your Body https://singularityhub.com/2023/12/18/solar-powered-smart-clothing-could-rapidly-heat-or-cool-your-body/ Mon, 18 Dec 2023 18:57:33 +0000 https://singularityhub.com/?p=155105 I have terrible body temperature control. I stick my head in the freezer when it’s sizzling hot outside. Blood vessels in my hands spasm when it’s damp and chilly. “I’m hot; now I’m cold” is a running joke in my family.

But I’m lucky: Compared to cold-blooded animals, we humans have myriad ways of keeping our bodies humming along at a range of temperatures. We shiver when cold and sweat buckets to ward off heat. And when we struggle in extreme heat or cold, we have the brains to switch our wardrobe.

It sounds obvious, yet clothing has allowed us to explore the far reaches of our world while relatively comfortable and safe. But even the most technical garments—self-heating vests, Gore-Tex layers, or suits with built-in ice packs—have their limits. Most can either warm or cool the body—but not both—or require an external power source or a bulky battery pack.

These garments falter in desert or high-elevation areas, where temperatures can drastically swing from scorching heat to below freezing. And as we go beyond Earth, a lightweight suit that controls body temperature could make longer spacewalks more plausible and pleasant.

This week, scientists from Nankai University took a step towards smart clothing that rapidly adjusts body temperature using only solar power. The team fashioned a flexible material that captures sunlight to store and transfer heat. During the day, the patch removes heat from the skin and stores excess energy. At night, when it’s cooler, it releases this energy cache to heat the skin to comfortable levels.

Because the unit is powered by solar energy, it can maintain skin temperature without a battery pack. The patch can also rapidly adjust to ambient temperature swings by automatically switching between cooling and heating modes.

In a test, the team placed the patch on a volunteer’s hand and cooled their skin by over nine percent in just seconds. The patch could keep the skin comfortable when exposed to a wide range of temperatures—from freezing to Death-Valley-level heat.

Rather than designing clothing made entirely from the material, the team envisions weaving it strategically into everyday fashion or spacesuits—with larger patches in the torso and back areas, and smaller ones tailored to the shoulders, upper arms, and the fronts of thighs. Even without covering the entire body, the patches’ high efficiency at storing and transferring heat can keep wearers comfortable all day—as long as there are a few hours of sunshine.

The device “opens many possibilities” for “expanding human adaptation to harsh environments,” wrote Drs. Xingyi Huang and Peng Li at Shanghai Jiao Tong University, who weren’t involved in the study.

Sci-Fi Fashion

It’s hard to beat nestling under a heated blanket while watching snow fall outside. In contrast, running out into the frigid winter air to chase down a gift package is a total nightmare—especially while wearing pajamas and slippers.

Our bodies aren’t built for extreme temperatures. The biological processes that let us think, feel, breathe, and digest all depend on proteins that function best within a narrow range of ambient temperatures. Called the “thermal comfort zone,” this range sits roughly between 71 and 82 degrees Fahrenheit (or 22 and 28 degrees Celsius). Temperatures outside this range make the body sweat or shiver to keep our internal temperature in check. But these biological mechanisms fail in extreme heat or cold, resulting in heat stroke, frostbite, or even death.

Clothing can extend the thermal comfort zone by regulating skin temperature either passively or actively. Passive options don’t need a power source. Some, like hand warmer packs, use chemical reactions to generate and release heat. Others dissipate heat into the surrounding environment to cool wearers down or reflect body heat back onto the skin to keep it warm. Puffy jackets, cooling neck gaiters, or sweat-wicking fabrics fall into this category.

Active materials use an external power source to rapidly change a material’s temperature on demand. But high energy consumption makes it hard to maintain temperature all day, especially when the wearer is on the move.

Solar-Thermal Wear

The new study leveled up active materials by using a natural energy source: the sun.

The bendy material is like an open-face sandwich. On top is a flexible solar panel that rapidly converts sunlight into electrical energy and a storage module to capture excess power. The bottom, skin-facing side is an electrocaloric device—a thin film that rapidly changes its properties when exposed to electricity. Given an electrical zap, this layer absorbs heat from the skin, cooling it. Turning off the electrical field reverses the process and the device transfers heat to the skin.

The device can rapidly switch between heating and cooling cycles, and unlike previous rigid versions, the material is bendable and can contour to human skin like a Band-Aid.

In one test, the team placed the material on a volunteer’s hand and varied the ambient temperature between chilly and very toasty. They monitored the volunteer’s skin temperature with an infrared camera.

The small patch attained comfortable temperatures within seconds of ambient temperature changes. It was also self-sufficient, easily operating for a full day on twelve hours of sunlight.

Into the Unknown?

The new system broadens our skin’s natural thermal comfort zone. The “impressive” expansion makes it possible for the body to “adapt to more complex and changing environments,” wrote Huang and Li.

One potential application is weaving the material “into a conventional spacesuit to help reduce the overall power requirements,” wrote the team. Though finding space (no pun intended) could pose a challenge. A fitted spacesuit has only so much area exposed to sunlight, which limits the size of the panels. Roughly 60 percent of a spacesuit would have to be covered by the material to power it for a full day without an external battery source.

Also, even on Earth, dwindling sunlight in winter makes it more difficult to charge the material—especially if exploring polar regions where daytime all but disappears.

The team is already working to make the material more practical. One idea is to use temperature-sensitive electrical components that could further boost the patch’s temperature control range. Another is to add chemicals that increase the patch’s ability to conduct electricity, making it more efficient at storing and transferring heat. Linking multiple electrocaloric layers head-to-toe could also increase the device’s ability to handle temperature changes.

To Huang and Li, the material has far more uses beyond clothing. It could coat vehicles or buildings to keep temperatures in check without air conditioning. With extreme temperatures taking more lives than hurricanes, floods, or tornadoes combined, these materials aren’t just for intrepid explorers—they could change our everyday lives.

Image Credit: Kajetan Sumila / Unsplash

Ultrasonic 3D Printer Could One Day Repair Organs in the Body Without Surgery https://singularityhub.com/2023/12/11/ultrasonic-3d-printer-could-one-day-help-repair-organs-in-the-body-without-surgery/ Mon, 11 Dec 2023 22:35:59 +0000 https://singularityhub.com/?p=154921 A plump piece of farm-fresh chicken leg rested on a pristine surface at Harvard Medical School. Skin on and bone in, it was precisely sliced to barely crack the bone.

A robot arm swerved over, scanned the breakage, and carefully injected a liquid cocktail of ingredients into the crack, including some isolated from seaweed. With several pulses of ultrasound, the liquid hardened into a bone-like material and sealed the fracture.

This wasn’t an avant-garde dinner show. Rather, it was an innovative experiment to see if ultrasound can one day be used to 3D print implants directly inside our body.

Led by Dr. Yu Shrike Zhang at Brigham and Women’s Hospital and Harvard Medical School, a recent study combined the unique properties of ultrasound and 3D printing to repair damaged tissue. At the heart of the technology is a mixture of chemicals that gel in response to sonic waves—a concoction dubbed “sono-ink.”

In one test, the team 3D printed a cartoonish bone shape inside a hefty piece of isolated pork belly, the ultrasound easily penetrating layers of fatty skin and tissue. The technology also made beehive-like structures inside isolated pork livers and a heart shape in kidneys.

It may sound macabre, but the goal isn’t to 3D print emojis inside living tissue. Rather, doctors may one day use ultrasound and sono-ink to directly repair damaged organs inside the body as an alternative to invasive surgery.

As a proof of concept, the team used sono-ink to repair a broken region of an isolated goat heart. After a few blasts of ultrasound, the resulting patch gelled and meshed seamlessly with surrounding heart tissue, essentially becoming a biocompatible, stretchable bandage.

Another test loaded the sono-ink with a chemotherapy drug and injected the concoction into a damaged liver. Within minutes, the ink released the drug into injured areas, while sparing most of the healthy surrounding cells.

The technology offers a way of converting open surgeries into less-invasive treatments, wrote Drs. Yuxing Yao and Mikhail Shapiro at the California Institute of Technology, who were not involved in the study. It could also be used to print body-machine interfaces that respond to ultrasound, make flexible electronics for heart injuries, or efficiently deliver anti-cancer drugs straight to the source after surgery to limit side effects.

“We’re still far from bringing this tool into the clinic, but these tests reaffirmed the potential of this technology,” said Zhang. “We’re very excited to see where it can go from here.”

From Light to Sound

Thanks to its versatility, 3D printing has captured bioengineers’ imagination when it comes to building artificial biological parts—for example, stents for life-threatening heart disease.

The process is usually iterative. An inkjet 3D printer—similar to an office printer—sprays out a thin layer and “cures” it with light. This solidifies the liquid ink and then, layer by layer, the printer builds an entire structure. Yet light can only illuminate the surface of many materials, making it impossible to generate a fully printed 3D structure with one blast.

The new study turned to volumetric printing, where a printer projects light into a volume of liquid resin, solidifying the resin into the object’s structure—and voilà, the object is built whole.

The process is much faster and produces objects with smoother surfaces than traditional 3D printing. But it’s limited by how far light can shine through the ink and surrounding material—for example, skin, muscle, and other tissues.

Here’s where ultrasound comes in. Best known for its role in maternal care, ultrasound at low levels easily penetrates opaque layers—such as skin or muscle—without harm. Researchers are exploring one version of the technology, called focused ultrasound, to monitor and stimulate the brain and other tissues.

It has drawbacks. Sound waves blur when traveling through liquids, which are abundant in our bodies. Used to 3D print structures, blurred sound waves could generate an abomination of the original design. To build an acoustic 3D printer, the first step was to redesign the ink.

A Sound Recipe

The team first experimented with ink designs that cure with ultrasound. The recipe they came up with is a soup of molecules. Some solidify when heated; others absorb sound waves.

The sono-ink transforms into a gel in just minutes after ultrasound pulses.

The process is self-propelling, explained Yao and Shapiro. Ultrasound triggers a chemical reaction that generates heat, which the gel absorbs, accelerating the cycle. Because the ultrasound source is controlled by a robotic arm, it’s possible to focus the sound waves to a resolution of one millimeter—a bit thicker than your average credit card.

The team tested multiple sono-ink recipes and 3D printed simple structures, like a multi-colored three-piece gear and glow-in-the-dark structures resembling blood vessels. This helped the team probe the limits of the system and explore potential uses: A fluorescent 3D-printed implant, for example, could be easier to track inside the body.

Sound Success

The team next turned to isolated organs.

In one test, they injected sono-ink into a damaged goat heart. A similar condition in humans can lead to deadly blood clots and heart attacks. The common treatment is open-heart surgery.

Here, the team infused sono-ink directly into the goat heart through blood vessels. With precisely focused ultrasound pulses, the ink gelled to protect the damaged region—without harming neighboring parts—and connected with the heart’s own tissues.

In another test, they injected the ink into a chicken leg bone fracture and reconstructed the bone “with seamless bonding to the native parts,” the authors wrote.

In a third test, they mixed doxorubicin, a chemotherapy drug often used in breast cancer, into the sono-ink and injected it into damaged parts of a pork liver. With blasts of ultrasound, the ink settled into the damaged regions and gradually released the drug into the liver over the next week. The team thinks this method could help improve cancer treatment after the surgical removal of tumors, they explained.

The system is just a start. Sono-ink hasn’t yet been tested inside a living body, and it could trigger toxic effects. And while ultrasound is generally safe, the stimulation can increase sound-wave pressure and heat tissues up to a very toasty 158 degrees Fahrenheit. To Yao and Shapiro, these challenges can guide the technology.

The ability to quickly print soft 3D materials opens the door to new body-machine interfaces. Organ patches with embedded electronics could support long-term care for people with chronic heart disease. Ultrasound could also spur tissue regeneration in deeper parts of the body without invasive surgery.

Biomedical applications aside, sono-ink could even make a splash in our everyday world. 3D-printed shoes, for example, have already entered the market. It’s possible “the running shoes of the future could be printed with the same acoustic method that repairs bones,” wrote Yao and Shapiro.

Image Credit: Alex Sanchez, Duke University; Junjie Yao, Duke University; Y. Shrike Zhang, Harvard Medical School
