Automation Archives - Singularity Hub
https://singularityhub.com/tag/automation/
News and Insights on Technology, Science, and the Future from Singularity Group
You’ll Soon Be Able to Book a Room at the World’s First 3D-Printed Hotel
https://singularityhub.com/2024/10/10/youll-soon-be-able-to-book-a-room-at-the-worlds-first-3d-printed-hotel/
Thu, 10 Oct 2024 17:10:20 +0000
The first 3D-printed house in the US was unveiled just over six years ago. Since then, homes have been printed all over the country and the world, from Virginia to California and Mexico to Kenya. If you’re intrigued by the concept but not sure whether you’re ready to jump on the bandwagon, you’ll soon be able to take a 3D-printed dwelling for a test run—by staying in the world’s first 3D-printed hotel.
The hotel is under construction in the city of Marfa, in the far west of Texas. It’s an expansion of an existing hotel called El Cosmico, which until now has really been more of a campground, offering accommodations in trailers, yurts, and tents. According to the property’s website, “the vision has been to create a living laboratory for artistic, cultural, and community experimentation.” The project is a collaboration between Austin, Texas-based 3D printing construction company Icon, architecture firm Bjarke Ingels Group, and El Cosmico’s owner, Liz Lambert.
El Cosmico will gain 43 new rooms and 18 houses, which will be printed using Icon’s gantry-style Vulcan printer. Vulcan is 46.5 feet (14.2 meters) wide by 15.5 feet (4.7 meters) tall, and it weighs 4.75 tons. It builds homes by pouring a proprietary concrete mixture called Lavacrete into a pattern dictated by software, squeezing out one layer at a time as it moves along an axis mounted on a track. Its software, BuildOS, can be operated from a tablet or smartphone.
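To make the layer-by-layer process concrete, here’s a minimal Python sketch of how printing software might turn a wall outline into a stack of toolpaths. It’s purely illustrative: BuildOS is proprietary, and every name and number below (the layer height, the waypoint format) is an assumption for the example.

```python
# Illustrative only: a toy toolpath generator for a gantry-style 3D printer.
# This is not Icon's BuildOS; all names and numbers here are assumptions.

LAYER_HEIGHT_MM = 25.0   # assumed height of one extruded bead of concrete
WALL_HEIGHT_MM = 3000.0  # assumed finished wall height

def wall_toolpath(outline, wall_height=WALL_HEIGHT_MM, layer_height=LAYER_HEIGHT_MM):
    """Yield (x, y, z) waypoints that trace the same 2D outline at each layer."""
    num_layers = int(wall_height // layer_height)
    for layer in range(num_layers):
        z = layer * layer_height
        for x, y in outline:
            yield (x, y, z)

# A simple rectangular footprint in millimeters; a real design could be curved.
footprint = [(0, 0), (6000, 0), (6000, 4000), (0, 4000), (0, 0)]
waypoints = list(wall_toolpath(footprint))
print(len(waypoints), "waypoints across", int(WALL_HEIGHT_MM // LAYER_HEIGHT_MM), "layers")
```

In a real system, waypoints along these lines would drive the gantry motors while the nozzle extrudes concrete at a matched rate.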
One of the benefits of 3D-printed construction is that it’s much easier to diverge from conventional architecture and create curves and other shapes. The hotel project’s designers are taking full advantage of this; far from traditional boxy hotel rooms, they’re aiming to create unique architecture that’s aligned with its natural setting.
“By testing the geometric boundaries of Icon’s 3D-printed construction, we have imagined fluid, curvilinear structures that enjoy the freedom of form in the empty desert. By using the sand, soils, and colors of the terroir as our print medium, the circular forms seem to emerge from the very land on which they stand,” Bjarke Ingels, the founder and creative director of Bjarke Ingels Group, said in a press release.
Renderings of the completed project and photos of the initial construction show circular, neutral-toned structures that look like they might have sprouted up out of the ground. Don’t let that fool you, though—the interiors, while maybe not outright fancy, will be tastefully decorated and look quite comfortable.
At first glance, Marfa seems like an odd choice for something as buzzy as a 3D-printed hotel. The town sits in the middle of the hot, dry Texas desert; it has a population of 1,700 people; and the closest airport is in El Paso, a three-hour drive away. But despite its relative isolation, Marfa is a hotspot for artists and art lovers and has a unique vibe all its own that draws flocks of tourists (according to Vogue, an estimated 49,000 people visited Marfa in 2019).
El Cosmico is not only expanding, it’s relocating to a 60-acre site on the outskirts of Marfa. Along with the 3D-printed accommodations, the site will have a restaurant, pool, spa, and communal facilities. Most of the trailers and tents from the existing property will be preserved and moved to the new site.
The project broke ground last month, and El Cosmico 2.0 is slated to open in 2026.
How much will it cost you to give 3D-printed construction a test run? Just as market prices of commercial 3D-printed homes haven’t been dramatically lower than those of conventional houses, 3D-printed hotel rooms look set to cost about the same as regular hotel rooms, or maybe more: reservations for the new rooms can’t yet be booked, but rates are predicted to run between $200 and $450 per night.
Waymo Robotaxis Are Giving 100,000 Rides a Week. It’ll Soon Be More.
https://singularityhub.com/2024/09/05/waymo-robotaxis-are-giving-100000-rides-a-week-itll-soon-be-more/
Thu, 05 Sep 2024 18:53:36 +0000
After years of overly aggressive forecasts about self-driving cars, here’s a statistic that snuck up on us: People are now hailing 100,000 automated rides a week—a number that’s double the 50,000 weekly rides provided a few months ago.
These rides are courtesy of Waymo, the self-driving car project incubated by Google and spun out as its own company under Alphabet. Waymo has been developing and testing its technology on public roads for over a decade. For much of that time, rides were free for willing guinea pigs and included a safety driver. But Waymo has been offering automated rides without safety drivers since 2020. And last year, the company began commercializing and expanding its ride-hail service, Waymo One.
Paid robotaxi rides are now available to anyone throughout Phoenix and San Francisco. Select riders admitted from a waitlist can ride Waymo in parts of Los Angeles, and the company is also rolling out its services in Austin, Texas.
After a year of commercial operations in San Francisco without major incident, Waymo is eyeing expansion.
A hundred thousand paid rides a week is likely still a fraction of the total ride-hail or taxi rides—with a driver at the wheel—given across San Francisco alone. Still, self-driving cars are clearly no longer just a research project. Tens of thousands of paying customers are trusting them for rides on congested city streets.
Although the company is so far leading the race, it’s unclear how soon it might turn a profit. Waymo doesn’t employ drivers, but it does employ technicians to remotely track rides and help out if the cars get stuck. Operating the fleet also includes servicing, storing, and recharging the vehicles. The self-driving equipment itself, including lidars, costs as much as $100,000 per car, according to The New York Times.
While the company is likely still losing a good bit more than it makes, according to The Times, it also has the financial backing of Alphabet, which aims to invest another $5 billion.
The cost of equipment and personnel may well come down in the future, but operating ride-hail fleets may not be Waymo’s ultimate, or even a sustainable, goal. Once it’s proven its software has an excellent long-term safety record in real-world situations, the company may move to license the technology for use in trucking or to carmakers for personal vehicles.
“There’s a lot of benefits that come from using ride-hailing as a first use case for the Waymo Driver,” David Margines, Waymo’s director of product management, recently told The Chronicle. “But our goals don’t stop there. If you think about our overall mission of bringing the Waymo Driver to as many vehicle miles traveled as possible, then you’ve got to branch out from that pretty quickly into some of these other environments.”
The current pace of expansion, however, is by no means assured.
Last year, Waymo competitor Cruise halted operations after one of its cars struck and dragged a pedestrian who was hit by another car. Waymo has avoided similarly serious incidents, though its cars have been involved in minor accidents, inconveniences (like blocking traffic), and annoyances (a software glitch recently caused a parking lot of Waymo cars to honk all night). As the company increases the number of rides given, expands into new areas, and adds highways, the risk of a major accident increases.
To keep up its momentum, Waymo will have to continue threading the needle between growth and safety. Nonetheless, commercial self-driving cars are no longer a fantasy.
Robots Are Coming to the Kitchen—What That Could Mean for Society and Culture
https://singularityhub.com/2024/09/03/robots-are-coming-to-the-kitchen-what-that-could-mean-for-society-and-culture/
Tue, 03 Sep 2024 18:30:32 +0000
Automating food is unlike automating anything else. Food is fundamental to life—nourishing body and soul—so how it’s accessed, prepared, and consumed can change societies fundamentally.
Since technology tends to be expensive at first, the early adopters of AI kitchen technologies are restaurants and other businesses. Over time, prices are likely to fall enough for the home market, possibly changing both home and societal dynamics.
Can food technology really change society? Yes, just consider the seismic impact of the microwave oven. With that technology, it was suddenly possible to make a quick meal for just one person, which can be a benefit but also a social disruptor.
Familiar concerns about the technology include worse nutrition and health from prepackaged meals and microwave-heated plastic containers. Less obviously, that convenience can also transform eating from a communal, cultural, and creative event into a utilitarian act of survival—altering relationships, traditions, how people work, the art of cooking, and other facets of life for millions of people.
For instance, think about how different life might be without the microwave. Instead of working at your desk over a reheated lunch, you might have to venture out and talk to people, as well as enjoy a break from work. There’s something to be said for living more slowly in a society that’s increasingly frenetic and socially isolated.
Convenience can come at a great cost, so it’s vital to look ahead at the possible ethical and social disruptions that emerging technologies might bring, especially for a deeply human and cultural domain—food—that’s interwoven throughout daily life.
The benefits of AI kitchens include enabling chefs to be more creative, as well as eliminating repetitive, tedious tasks such as peeling potatoes or standing at a workstation for hours. The technology can free up time. Not having to cook means being able to spend more time with family or focus on more urgent tasks. For personalized eating, AI can cater to countless special diets, allergies, and tastes on demand.
However, there are also risks to human well-being. Cooking can be therapeutic and provides opportunities for many things: gratitude, learning, creativity, communication, adventure, self-expression, growth, independence, confidence, and more, all of which may be lost if no one needs to cook. Family relationships could be affected if parents and children are no longer working alongside each other in the kitchen—a safe space to chat, in contrast to what can feel like an interrogation at the dining table.
The kitchen is also the science lab of the home, so science education could suffer. The alchemy of cooking involves teaching children and other learners about microbiology, physics, chemistry, materials science, math, cooking techniques and tools, food ingredients and their sourcing, human health, and problem-solving. Not having to cook can erode these skills and knowledge.
Community and Cultures
AI can help with experimentation and creativity, such as creating elaborate food presentations and novel recipes within the spirit of a culture. Just as AI and robotics help generate new scientific knowledge, they can increase understanding of, say, the properties of food ingredients, their interactions, and cooking techniques, including new methods.
But there are risks to culture. For example, AI could bastardize traditional recipes and methods, since it is prone to stereotyping and to flattening or oversimplifying cultural details and distinctions. This selection bias could lead to reduced diversity in the kinds of cuisine produced by AI and robot cooks. Technology developers could become gatekeepers for food innovation if the limits of their machines lead to homogeneity in cuisines and creativity, similar to the weirdly similar feel of AI art images across different apps.
Also, think about your favorite restaurants and favorite dinners. How might the character of those neighborhoods change with automated kitchens? Would it degrade your own gustatory experience if you knew those cooking for you weren’t your friends and family but instead were robots?
The hope with technology is that more jobs will be created than jobs lost. Even if there’s a net gain in jobs, the numbers hide the impact on real human lives. Many in the food service industry—one of the most popular occupations in any economy—could find themselves unable to learn new skills for a different job. Not everyone can be an AI developer or robot technician, and it’s far from clear that supervising a robot is a better job than cooking.
Philosophically, it’s still an open question whether AI is capable of genuine creativity, particularly if that implies inspiration and intuition. Assuming so may be the same mistake as thinking that a chatbot understands what it’s saying, instead of merely generating words that statistically follow the previous words. This has implications for aesthetics and authenticity in AI food, similar to ongoing debates about AI art and music.
Safety and Responsibility
Because humans are a key disease vector, robot cooks can improve food safety. Precision trimming and other automation can reduce food waste, along with AI recipes that can make the fullest use of ingredients. Customized meals can be a benefit for nutrition and health, for example, in helping people avoid allergens and excess salt and sugar.
The technology is still emerging, so it’s unclear whether those benefits will be realized. Foodborne illnesses are an unknown. Will AI and robots be able to smell, taste, or otherwise sense the freshness of an ingredient or the lack thereof and perform other safety checks?
Physical safety is another issue. It’s important to ensure that a robot chef doesn’t accidentally cut, burn, or crush someone because of a computer vision failure or other error. AI chatbots have been advising people to eat rocks, glue, gasoline, and poisonous mushrooms, so it’s not a stretch to think that AI recipes could be flawed, too. Where legal regimes are still struggling to sort out liability for autonomous vehicles, it may similarly be tricky to figure out liability for robot cooks, including if hacked.
Given the primacy of food, food technologies help shape society. The kitchen has a special place in homes, neighborhoods, and cultures, so disrupting that venerable institution requires careful thinking to optimize benefits and reduce risks.
Icon’s Enormous 3D Printer Extrudes a New 100-Home Neighborhood
https://singularityhub.com/2024/08/11/icons-enormous-3d-printer-extrudes-a-new-100-home-neighborhood/
Sun, 11 Aug 2024 14:00:04 +0000
In November 2022, Icon and Lennar started 3D printing homes for a new neighborhood in Texas. Now, according to a report by Reuters, the 100-home project is nearly complete.
While foundations, roofing, and finishes were built and installed traditionally, the walls of each house were constructed by Icon’s Vulcan 3D printer. Vulcan uses a long, crane-like robotic arm tipped with a nozzle to extrude beads of concrete like frosting on a cake. Directed by a digital design, the printer lays down a footprint, then builds up the walls layer by layer.
One of the earliest large-scale projects for 3D-printed homes, it showcases some of the benefits: A house can be printed in around three weeks with Vulcan and a single crew of workers. Icon partnered with design firm Bjarke Ingels Group on eight floor plans for the ranch-style homes, each with three to four bedrooms and ranging from 1,574 to 2,112 square feet.
Around 25 percent of the homes have been sold with prices ranging from $450,000 to $600,000, about average for the area. Already, buyers are moving in. A couple interviewed by Reuters said their home feels solidly constructed, and its thick concrete walls insulate well, keeping the interior cool in the baking Texas summer. The homes come stock with solar panels to convert all that sunshine into power. The one downside? The concrete blocks WiFi signals, necessitating a mesh network for internet.
The idea of 3D printing homes isn’t new. The earliest projects date back to around the turn of this century. Over the years, startups like Icon have honed the process, perfecting concrete materials and robotic delivery systems and identifying which steps are best suited for 3D printing.
Recently, the technology has made its way into commercial development. In 2021, a home printed by SQ4D was sold in New York. Mighty Buildings, a 3D printing startup that began by printing and selling pre-fab ADUs, raised $52 million last year. Now, the company has its sights set on larger structures and whole communities. Unlike Icon, Mighty prints its structures in parts in a factory and then ships them out for assembly on site.
Overall, 3D printing has been hailed as a cheaper, faster, less resource-intensive way to build. Proponents hope it can bring more affordable housing to those in need. And to that end, Icon has partnered with New Story to 3D print homes in Mexico for families living in extreme poverty and with Mobile Loaves & Fishes to print homes in Austin for those experiencing chronic homelessness.
To date, however, market prices of commercial 3D-printed homes haven’t been dramatically lower than traditionally built homes. While some steps offer savings, others may bring higher costs—like fitting windows or other fixtures tailored to today’s building technologies into less conventional 3D-printed designs. And beyond building costs, prices on the open market are based on demand and how much buyers are willing to pay.
It’s still early days for 3D printing as a commercial homebuilding technology. The Texas project is one of the first at scale, and costs may yet decline as Icon and others figure out how to optimize the process and slot their work into the existing ecosystem.
In the meantime, a handful of Texans will settle into their futuristic homes—nestled between walls of corduroy concrete to keep the heat at bay.
Study Finds Self-Driving Cars Are Actually Safer Than Humans in Many (But Not All) Situations
https://singularityhub.com/2024/06/24/study-finds-self-driving-cars-are-actually-safer-than-humans-in-most-situations/
Mon, 24 Jun 2024 14:00:02 +0000
Autonomous vehicles are understandably held to incredibly high safety standards, but it’s sometimes forgotten that the true baseline is the often dangerous driving of humans. Now, new research shows that self-driving cars were involved in fewer accidents than humans in most scenarios.
One of the main arguments for shifting to autonomous vehicles is the prospect of taking human error out of driving. Given that more than 40,000 people die in car accidents every year in the US, even a modest improvement in safety could make a huge difference.
But self-driving cars have been involved in a number of accidents in recent years that have led to questions over their safety and caused some larger companies like Cruise to scale back their ambitions.
Now though, researchers have analyzed thousands of accident reports from incidents involving both autonomous vehicles and human drivers. Their results, published in Nature Communications, suggest that in most situations autonomous vehicles are actually safer than humans.
The team from the University of Central Florida focused their study on California, where the bulk of autonomous vehicle testing is taking place. They gathered 2,100 reports of accidents involving self-driving cars from databases maintained by the National Highway Traffic Safety Administration, the California Department of Motor Vehicles, and news reports.
They then compared them against 35,000 reports of incidents involving human drivers compiled by the California Highway Patrol. The team used an approach called “matched case-control analysis” in which they attempted to find pairs of crashes involving humans and self-driving cars that otherwise had very similar characteristics.
This makes it possible to control for all the other variables that could contribute to a crash and investigate the impact of the “driver” on the likelihood of a crash occurring. The team found 548 such matches, and when they compared the two groups, they found self-driving cars were safer than human drivers in most of the accident scenarios they looked at.
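Conceptually, the matching step works like a lookup: each self-driving-car crash is paired with a human-driver crash that shares the same circumstances. Here’s a hedged Python sketch of that idea; the matching fields (road type, weather, lighting) are invented stand-ins, not the study’s actual variables.

```python
# Illustrative sketch of matched case-control pairing; not the study's code.
# The matching fields used here are assumptions for the example.

def match_key(crash):
    """Characteristics two crashes must share to count as a matched pair."""
    return (crash["road_type"], crash["weather"], crash["lighting"])

def find_matched_pairs(av_crashes, human_crashes):
    """Pair each AV crash with one human-driver crash sharing the same key."""
    pool = {}
    for crash in human_crashes:
        pool.setdefault(match_key(crash), []).append(crash)
    pairs = []
    for crash in av_crashes:
        candidates = pool.get(match_key(crash), [])
        if candidates:
            pairs.append((crash, candidates.pop()))
    return pairs

av = [{"road_type": "highway", "weather": "rain", "lighting": "dusk"}]
human = [{"road_type": "highway", "weather": "rain", "lighting": "dusk"}]
print(len(find_matched_pairs(av, human)), "matched pair(s)")
```

With circumstances held roughly constant across each pair, differences in outcomes can more plausibly be attributed to the driver rather than to the conditions.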
There are some significant caveats though. The researchers also discovered that autonomous vehicles were over five times more likely to be involved in an accident at dawn or dusk and nearly twice as likely when making a turn.
The former is likely due to limitations in imaging sensors, while J. Christian Gerdes, from Stanford University, told IEEE Spectrum that their trouble with turns is probably due to limited ability to predict the behavior of other drivers.
There were some bright spots for autonomous vehicles too though. They were roughly half as likely to be involved in a rear-end accident and just one-fifth as likely to be involved in a broadside collision.
The researchers also found that the chance of a self-driving vehicle crashing in rain or fog was roughly a third of that for a human driver, which they put down to the vehicles’ reliance on radar sensors that are largely immune to bad weather.
How much can be read into these results is a matter of debate. The authors admit there is limited data on autonomous vehicle crashes, which limits the scope of their findings. George Mason University’s Missy Cummings also told New Scientist that accident reports from self-driving companies are often biased, seeking to pin the blame on human drivers even when the facts don’t support it.
Nonetheless, the study is an important first step in quantifying the potential safety benefits of autonomous vehicle technology and has highlighted some important areas where progress is still needed. Only by taking a clear-eyed look at the numbers can policymakers make sensible decisions about where and when this technology should be deployed.
AI Is Gathering a Growing Amount of Training Data Inside Virtual Worlds
https://singularityhub.com/2024/05/01/ai-is-gathering-a-growing-amount-of-training-data-inside-virtual-worlds/
Wed, 01 May 2024 16:52:00 +0000
To anyone living in a city where autonomous vehicles operate, it would seem they need a lot of practice. Robotaxis travel millions of miles a year on public roads in an effort to gather data from sensors—including cameras, radar, and lidar—to train the neural networks that operate them.
According to Gautham Sholingar, a senior manager at Nvidia focused on autonomous vehicle simulation, one key benefit is accounting for obscure scenarios for which it would be nearly impossible to gather training data in the real world.
“Without simulation, there are some scenarios that are just hard to account for. There will always be edge cases which are difficult to collect data for, either because they are dangerous and involve pedestrians or things that are challenging to measure accurately like the velocity of faraway objects. That’s where simulation really shines,” he told me in an interview for Singularity Hub.
While it isn’t ethical to have someone run unexpectedly into a street to train AI to handle such a situation, it’s significantly less problematic for an animated character inside a virtual world.
As Sholingar pointed out, industrial use of simulation has been around for decades, but a convergence of improvements in computing power, the ability to model complex physics, and the development of the GPUs powering today’s graphics indicates we may be witnessing a turning point in the use of simulated worlds for AI training.
When a neural network processes image data, it’s converting each pixel’s color into a corresponding number. For black and white images, the number ranges from 0, which indicates a fully black pixel, up to 255, which is fully white, with numbers in between representing some variation of grey. For color images, the widely used RGB (red, green, blue) model can correspond to over 16 million possible colors. So as graphics rendering technology becomes ever more photorealistic, the distinction between pixels captured by real-world cameras and ones rendered in a game engine is falling away.
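Here’s a minimal Python sketch of that encoding, using NumPy; the array values are arbitrary and purely for illustration.

```python
import numpy as np

# A grayscale image: each pixel is one number, 0 (black) through 255 (white).
gray = np.array([[0, 128, 255]], dtype=np.uint8)

# A color image: each pixel is an (R, G, B) triple, each channel 0-255,
# giving 256**3 = 16,777,216 possible colors.
color = np.array([[[255, 0, 0], [0, 255, 0], [0, 0, 255]]], dtype=np.uint8)

# Neural networks typically see these values normalized to [0, 1]. At that
# point, a pixel rendered by a game engine has exactly the same form as one
# captured by a real camera.
inputs = color.astype(np.float32) / 255.0
print(inputs.shape)  # (1, 3, 3): height, width, channels
```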
Simulation is also a powerful tool because it’s increasingly able to generate synthetic data for sensors beyond just cameras. While high-quality graphics are both appealing and familiar to human eyes, which is useful in training camera sensors, rendering engines are also able to generate radar and lidar data as well. Combining these synthetic datasets inside a simulation allows the algorithm to train using all the various types of sensors commonly used by AVs.
Due to its expertise in producing the GPUs needed to generate high-quality graphics, Nvidia has positioned itself as a leader in the space. In 2021, the company launched Omniverse, a simulation platform capable of rendering high-quality synthetic sensor data and modeling real-world physics relevant to a variety of industries. Now, developers are using Omniverse to generate sensor data to train autonomous vehicles and other robotic systems.
In our discussion, Sholingar described some specific ways these types of simulations may be useful in accelerating development. The first involves the fact that with a bit of retraining, perception algorithms developed for one type of vehicle can be re-used for other types as well. However, because the new vehicle has a different sensor configuration, the algorithm will be seeing the world from a new point of view, which can reduce its performance.
“Let’s say you developed your AV on a sedan, and you need to go to an SUV. Well, to train it then someone must change all the sensors and remount them on an SUV. That process takes time, and it can be expensive. Synthetic data can help accelerate that kind of development,” Sholingar said.
Another area involves training algorithms to accurately detect faraway objects, especially in highway scenarios at high speeds. Since objects over 200 meters away often appear as just a few pixels and can be difficult for humans to label, there isn’t typically enough training data for them.
“For the far ranges, where it’s hard to annotate the data accurately, our goal was to augment those parts of the dataset,” Sholingar said. “In our experiment, using our simulation tools, we added more synthetic data and bounding boxes for cars at 300 meters and ran experiments to evaluate whether this improves our algorithm’s performance.”
According to Sholingar, these efforts allowed their algorithm to detect objects more accurately beyond 200 meters, something only made possible by their use of synthetic data.
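In practice, that kind of augmentation amounts to appending simulator-generated labels to the real training set before retraining. Here’s a rough Python sketch of the idea; the record format is invented for illustration and is not Nvidia’s actual pipeline (real datasets use formats like KITTI or COCO).

```python
# Illustrative sketch of dataset augmentation with synthetic far-range labels.
# The record format here is an assumption, not a real annotation schema.

real_labels = [
    {"source": "camera", "class": "car", "distance_m": 45.0, "bbox": (312, 200, 96, 60)},
    # ... real annotations thin out badly beyond ~200 meters
]

synthetic_labels = [
    {"source": "sim", "class": "car", "distance_m": 300.0, "bbox": (508, 221, 6, 4)},
    # ... a simulator can emit exact boxes at any range, even a few pixels wide
]

# Train on the union; evaluate far-range detection on held-out real data only.
training_set = real_labels + synthetic_labels
far_range = [r for r in training_set if r["distance_m"] > 200]
print(len(far_range), "far-range example(s) after augmentation")
```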
While many of these developments are due to better visual fidelity and photorealism, Sholingar also stressed this is only one aspect of what makes capable real-world simulations.
“There is a tendency to get caught up in how beautiful the simulation looks since we see these visuals, and it’s very pleasing. What really matters is how the AI algorithms perceive these pixels. But beyond the appearance, there are at least two other major aspects which are crucial to mimicking reality in a simulation.”
First, engineers need to ensure there is enough representative content in the simulation. This is important because an AI must be able to detect a diversity of objects in the real world, including pedestrians with different colored clothes or cars with unusual shapes, like roof racks with bicycles or surfboards.
Second, simulations have to depict a wide range of pedestrian and vehicle behavior. Machine learning algorithms need to know how to handle scenarios where a pedestrian stops to look at their phone or pauses unexpectedly when crossing a street. Other vehicles can behave in unexpected ways too, like cutting in close or pausing to wave an oncoming vehicle forward.
“When we say realism in the context of simulation, it often ends up being associated only with the visual appearance part of it, but I usually try to look at all three of these aspects. If you can accurately represent the content, behavior, and appearance, then you can start moving in the direction of being realistic,” he said.
It also became clear in our conversation that while simulation will be an increasingly valuable tool for generating synthetic data, it isn’t going to replace real-world data collection and testing.
“We should think of simulation as an accelerator to what we do in the real world. It can save time and money and help us with a diversity of edge-case scenarios, but ultimately it is a tool to augment datasets collected from real-world data collection,” he said.
Beyond Omniverse, the wider industry of helping “things that move” develop autonomy is undergoing a shift toward simulation. Tesla announced it’s using similar technology to develop automation in Unreal Engine, while Canadian startup Waabi is taking a simulation-first approach to training its self-driving software. Microsoft, meanwhile, has experimented with a similar tool to train autonomous drones, although the project was recently discontinued.
While training and testing in the real world will remain a crucial part of developing autonomous systems, the continued improvement of physics and graphics engine technology means that virtual worlds may offer a low-stakes sandbox for machine learning algorithms to mature into functional tools that can power our autonomous future.
Amazon Delivery Drones: How the Sky Could Be the Limit for Market Dominance
https://singularityhub.com/2023/11/01/amazon-delivery-drones-how-the-sky-could-be-the-limit-for-market-dominance/
Wed, 01 Nov 2023 20:14:34 +0000
Amazon’s latest plan to use drones to deliver packages in the UK by the end of 2024 is essentially a relaunch. It was 10 years ago that the company’s founder Jeff Bezos first announced it would fly individual packages through the sky.
Three years later, an impressive promotional video revealed that the project was starting out in the British city of Cambridge. But by 2021, the operation appeared to have come to an abrupt halt.
Now it seems the company was undeterred by that pause. The dream of sending drones to UK homes bearing (not very heavy) items that we cannot wait more than 30 minutes to have is back in play. So, will it work this time?
In the US, progress has been sluggish. Amazon managed a grand total of 100 deliveries in May 2023, in two locations. (At one of these locations, in Texas, the company has to pause operations when the temperature gets too high.)
Despite this, Amazon plans to launch delivery drones in two new areas—one in the UK and one in Italy (precise locations are yet to be disclosed). It has a new model of drone and a vast logistical network at its disposal.
Aside from these key factors, Amazon may well have been inspired by other companies in the sector. The most obvious example is drone delivery of vital medical supplies.
But what these success stories have in common is that they are cost-efficient—pharmaceutical products weigh little and are typically expensive enough to justify the use of a drone—and they are focused on areas which are not densely populated.
In contrast, Amazon’s own estimates put the cost of delivering a single package at $484 today, which it expects to reduce to $63 by 2025. Offering customers free or cheap drone delivery will be extremely expensive.
Amazon’s solution to this is likely to be the same one it has used so successfully over the last two decades: increasing the scale of its operation. After all, at the start of the century, many wondered how e-commerce could ever be profitable. Now, millions of people buy from Amazon, and that vast number of customers is key to its success.
But Amazon’s business plan seems to rely on dominating the market. And for air deliveries, this means not only dropping packages in rural areas, but being available in cities where more than half the world’s population live.
While it may be easy to convince the residents of a small, low-density area to trial boxes of toothpaste and mouthwash landing in their gardens, it might be much more difficult to persuade residents of apartment buildings to accept drones flying past their windows carrying their neighbor’s delivery of dog biscuits.
Added to this are the laws regulating the use of drones. In the UK, for example, you are not allowed to fly one over congested areas or within 50 meters “of a person, vehicle or building not under your control.”
The Higher They Fly, the Harder They Fall
Cities will not simply let commercial drones take to the skies—at least not without charging for the nuisance they generate. They will either ban drones in densely populated areas, or seek further regulation.
If regulation is the route taken, a new hurdle arises, similar to the allocation of radio spectrum or mobile phone network licenses: there will only be enough space for a few operators (sometimes just one).
This allocation usually happens through a bidding process. And studies of auctions of telecom licenses show the importance of involving multiple credible operators. But having different firms winning the right to deliver in different cities could easily reduce the level of reach that Amazon would need to succeed.
An alternative scenario would see a single operator in charge of all drone deliveries. But this raises a familiar economic problem, where natural monopolies emerge in sectors like water provision or other kinds of infrastructure.
While society can often benefit from the innovation potential of the private sector, having only one firm in the market opens up the possibility of abuse. For instance, the privatization of water in the UK has come with a regulator that sets the prices companies can charge, along with never-ending debates over the regulation of sewage and leaks.
Regardless of which company is awarded the business, external regulation usually involves a requirement to treat all consumers fairly and equally—which would mean charging Amazon the same price as its competitors to use the drones.
But fairness and equality are not the goals big companies are interested in when they invest heavily in innovative technology. Their goal is to obtain or keep a dominant position in the market.
Amazon’s current dominance largely relies on its superior logistical operation: it can deliver quickly, cheaply, and reliably everywhere. With drone delivery available to other platforms at the same price, Amazon would lose this competitive advantage. So, if it does manage a successful launch this time around, it could well come at the expense of its current dominance as a logistical operation.
Agility’s New Factory Can Crank Out 10,000 Humanoid Robots a Year
https://singularityhub.com/2023/09/20/agilitys-new-factory-can-crank-out-10000-humanoid-robots-a-year/
Wed, 20 Sep 2023 14:00:25 +0000
Simple robots have long been a manufacturing staple, but more advanced robots—think Boston Dynamics’ Atlas—have mostly been bespoke creations in the lab. That’s begun to change in recent years as four-legged robots like Boston Dynamics’ Spot have gone commercial. Now, it seems, humanoid robots are aiming for mass markets too.
Agility Robotics announced this week it has completed a factory to produce its humanoid Digit robot. The 70,000-square-foot facility, based in Salem, Oregon, has a top capacity of 10,000 robots annually. Agility broke ground on the factory last year and plans to begin operations later this year. The first robots will be delivered in 2024 to early customers taking part in the company’s partner program. After that, Agility will open orders to everyone in 2025.
“The opening of our factory marks a pivotal moment in the history of robotics: the beginning of the mass production of commercial humanoid robots,” Damion Shelton, Agility Robotics’ cofounder and CEO, said in a press release Tuesday.
The latest version of Digit stands 5 feet 9 inches tall and weighs 141 pounds, according to the company’s website. It has a flat head with a pair of emoji-like eyes and two arms designed to pick up and move totes and boxes, and it walks on legs that hinge backward like a bird’s. The robot can work 16 hours a day and docks itself to a charging station to refuel.
Founded in 2015, Agility Robotics was spun out of Oregon State University’s Robotics Laboratory, where robotics professor and cofounder Jonathan Hurst leads research in legged robots. Little more than a pair of legs, Digit’s direct ancestor Cassie launched in 2017. Agility had added arms and a suite of sensors, including a vaguely head-like lidar unit, by the time it announced the first commercial version of Digit in early 2020.
Back then, Digit was marketed as a delivery robot that would unfold from the back of a van and drop packages on porches. Though its first broad commercial application will instead be moving boxes and totes, the company still views Digit as a multi-purpose platform with other uses ahead.
“I believe dynamic-legged robots will one day help take care of elderly and infirm people in their homes, assist with lifesaving efforts in fires and earthquakes, and deliver packages to our front doors,” Hurst wrote for IEEE Spectrum in 2020.
To go bigger, however, the company will have to prove Digit is widely useful beyond the experimental and then figure out how to make it en masse.
That’s where the new factory, dubbed RoboFab, comes in. To date, robots like Digit are made in the single digits or dozens at most. Atlas is still a research robot—though its slick moves make for viral videos—and a new push into humanoid robots by other players, including the likes of Tesla, Figure, and Sanctuary, is only just getting started.
It would be an impressive achievement if Agility hews to its aggressive timeline.
Apart from building and opening a factory, challenges to scaling include setting up a steady supply chain, nailing consistent product quality beyond a few units, and servicing and supporting customer robots in the field. All that will take time—maybe years. And of course, in order to produce 10,000 robots annually, they have to sell that many too. The company expects the first year’s production to be in the hundreds.
But if Digit proves capable, affordable, and easy for businesses to integrate in the years ahead, it seems likely there would be ample demand for its box-and-tote picking skills. Amazon invested in Agility’s $150-million Series B in 2022 and has been packing its warehouses with robots for years. Digit could fit an unfilled niche in its machine empire.
Further broadening the number of tasks the robot can complete—and thereby widening the market—would likewise boost demand for the bot. Amazon, and others no doubt, would likely be more than willing to entertain the idea of Digit one day delivering packages.
But first, Agility will have to fire up the assembly line and prove they can keep it humming along at a healthy pace.
Generative AI Could Add $4.4 Trillion a Year to the Global Economy, McKinsey Finds
https://singularityhub.com/2023/06/22/generative-ai-could-add-4-4-trillion-a-year-to-the-global-economy-mckinsey-finds/
Thu, 22 Jun 2023 14:00:37 +0000
There’s been concern about artificial intelligence taking away jobs for years, and with the recent boom in generative AI, those fears have grown. The ability to generate realistic and accurate text, images, or audio based on a prompt could make plenty of jobs obsolete (including, ahem, journalism and writing). But a new study says the doomsday predictions are misguided, because generative AI is far more likely to create jobs than to cancel them out.
Last week, McKinsey published a report called The Economic Potential of Generative AI: The Next Productivity Frontier. It’s the result of a study involving 850 different job roles and 2,100 tasks across occupations in 47 countries. Researchers considered what portion of each existing job or task can be taken over by generative AI, as well as new occupations and responsibilities likely to be created by the technology. Their conclusion? Generative AI has the potential to create up to $4.4 trillion worth of annual value in the global economy.
$4.4 trillion is the high end of a range, with the lower bound sitting at $2.6 trillion. Even if the value created were to fall on the low end, it would still approximate the GDP of the United Kingdom, which was $3.1 trillion in 2021.
How will that happen? Mostly by automating and accelerating work that’s currently done by humans, allowing humans to do more work in the same amount of time. That makes both us and AIs sound like nothing more than workhorses, but here’s an example.
A study released in April detailed how generative AI impacted the work of customer service agents at a software firm. The AI monitored agent interactions with customers in real time and gave them suggestions for what to say. The agents who used the AI resolved 13.8 percent more issues per hour than they’d been able to without it; they got through calls more quickly, resolved more complaints successfully, and could even handle multiple calls at once. The AI also cut down the time managers had to spend training new employees, enabling them to take on bigger teams—and ultimately allowing the company to hire more employees and do more business.
McKinsey’s study found that generative AI and other technologies could automate work activities that currently take up 60 to 70 percent of employees’ time. That’s a complicated projection, though; the report acknowledges that some significant reskilling will be needed, and companies and governments will have to invest in supporting worker transitions and managing the other risks that such a momentous shift will bring. “If worker transitions and other risks can be managed, generative AI could contribute substantively to economic growth and support a more sustainable, inclusive world,” the authors wrote. It’s a pretty big “if,” one deserving several equivalent reports of its own to contemplate exactly how it’s all going to work.
According to the report, generative AI’s value add will be mostly concentrated in four categories of jobs: customer operations, marketing and sales, software engineering, and research and development. The customer service example above illustrates the first category; AI can assist with or completely take over customer interactions, and to some extent it’s already done so—when was the last time you got an actual human on the phone after calling a corporate customer service number?
For marketing and sales, AI can generate creative content (like the millions of images generated each day by OpenAI’s DALL-E), including content that’s more personalized than what we see today. Software engineers are already using AI to write computer code based on natural-language prompts. In the research and development arena, AI is not only modeling proteins incredibly rapidly, it’s building protein complexes tailored to specific biological responses and helping design artificial protein drugs. It’s no surprise, then, that life sciences is one of the industries predicted to see the most revenue growth from generative AI (along with banking and high-tech).
It seems clear that generative AI is poised to revolutionize the way we work, and eventually even the way we live. But outside of generating economic value, will it generate well-being and a higher quality of life for the average person? Perhaps that’s the question we should really be focusing on.
AI will continue to proliferate and find new applications across various sectors of the economy. For now though, humans are still an essential piece of the equation to complete most tasks—including McKinsey’s generative AI report. While the data for it was retrieved and analyzed by AI, the report itself was written completely by humans.
Autonomous Trucks Will Be Cruising Down Highways Next Year, Startup Says
https://singularityhub.com/2023/04/05/autonomous-trucks-will-be-cruising-down-highways-next-year-startup-says/
Wed, 05 Apr 2023 14:00:27 +0000
Self-driving cars get all the hype (or, they did before people realized they weren’t going to be ready by 2020…or 2022…or this year), but self-driving trucks are likely to hit the road first. Not only is the bulk of their driving done on highways, which is far simpler than navigating urban roads with all their obstacles; there’s been a shortage of truck drivers for years, and it doesn’t seem to be getting better, so there’s a market need for trucks that can drive themselves.
If one company’s plan plays out, trucks without drivers will be cruising down highways by next year. On Monday driverless hardware and software specialist Aurora Innovation announced that its Aurora Driver—a system of sensors, software, and a computer designed to give any vehicle self-driving capabilities—is “feature complete.” This means all the product’s technical capabilities are in place and it’s entering its final phase of development. The company is planning to launch the Driver commercially next year.
The current version of the product is Beta 6.0, and it’s specifically built for service on Aurora’s Dallas to Houston route, which is one of the most highly trafficked shipping corridors in the country. At 240 miles long, the route is mostly a straight shot on Interstate 45, and made up the initial run of an autonomous freight pilot Aurora did with FedEx (they subsequently added a 600-mile route between El Paso and Fort Worth).
As a company press release explains, the main difference between Beta 6.0 and its predecessor is the system’s improved ability to handle uncommon road scenarios that impact safety, like high winds, collisions, or sudden heavy rain, snow, or fog. Beta 6.0 can detect the severity of these conditions and either slow down or look for a safe place to pull over. If a vehicle does get in an accident, the system is trained to pull over and alert one of the company’s command center specialists. In addition, Aurora worked with Waymo to design a flashing beacon that alerts oncoming traffic when one of its trucks is pulled over on the side of the road.
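The press release describes behavior, not code, but the fallback logic might be sketched roughly as follows. This is a toy illustration, not Aurora’s implementation; the condition labels, thresholds, and action names are all invented.

```python
# Toy decision logic loosely paraphrasing the fallback behaviors described
# above. Not Aurora's code; every name here is an assumption for illustration.

def respond(condition_severity, collision_detected):
    if collision_detected:
        # Pull over, flash the warning beacon, and alert a command center specialist.
        return ["pull_over", "activate_beacon", "alert_command_center"]
    if condition_severity == "severe":    # e.g., heavy rain, snow, fog, high wind
        return ["find_safe_stop"]
    if condition_severity == "moderate":
        return ["reduce_speed"]
    return ["continue"]

print(respond("moderate", False))  # ['reduce_speed']
print(respond("severe", False))    # ['find_safe_stop']
print(respond("none", True))       # pull over, beacon on, alert the command center
```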
Aurora was founded by industry veterans Chris Urmson, who previously led Waymo; Sterling Anderson, who oversaw Tesla’s autopilot program; and Drew Bagnell, who worked on Uber’s self-driving program. The company’s 2019 funding round raised more than $530 million, with Amazon being one of the main investors. The company has partnered with major automakers like Volvo, Volkswagen, Toyota, and Hyundai, among others. In 2020, Aurora acquired Uber’s self-driving unit, and in 2021 went public via a $13 billion special-purpose acquisition company (SPAC) deal with Reinvent Technology Partners. Subsequently, though, its stock fell by more than 85 percent.
Urmson’s optimism seems unshakable, though. “We look at trucking, and we see a landscape where we feel like [we’re] the only viable player,” he recently told Fast Company. “It’s an $800 billion business in the US, and we’re a company that is well capitalized, that’s got incredible talent, amazing partnerships, and awesome technology. We’re like, ‘Let’s just go execute.’”
Now that Aurora Driver’s architecture is complete, the company will shift its focus to closing its Driver Safety Case, outlining its approach to safety and demonstrating that vehicles equipped with its self-driving system are safe to be on public roads. The US Department of Transportation requires this documentation before the company can commercially launch its product.
If all goes to plan, trucks outfitted with Aurora Driver will be cruising the Houston-Dallas corridor a year from now. Ultimately the goal is to lighten the burden on truckers and give the freight industry a needed boost, improving efficiency and economic feasibility across the board.