Automated Cyborg Cockroach Factory Could Churn Out a Bug a Minute for Search and Rescue

Envisioning armies of electronically controllable insects is probably nightmare fuel for most people. But scientists think they could help rescue workers scour challenging and hazardous terrain. An automated cyborg cockroach factory could help bring the idea to life.

The merger of living creatures with machines is a staple of science fiction, but it’s also a serious line of research for academics. Several groups have implanted electronics into moths, beetles, and cockroaches that allow simple control of the insects.

However, building these cyborgs is tricky as it takes considerable dexterity and patience to surgically implant electrodes in their delicate bodies. This means that creating enough for most practical applications is simply too time-consuming.

To overcome this obstacle, researchers at Nanyang Technological University in Singapore have automated the process, using a robotic arm with computer vision to install electrodes and tiny backpacks full of electronics on Madagascar hissing cockroaches. The approach cuts the time required to attach the equipment from roughly half an hour to just over a minute.

“In the future, factories for insect-computer hybrid robot[s] could be built to satisfy the needs for fast preparation and application of the hybrid robots,” the researchers write in a non-peer-reviewed paper on arXiv.

“Different sensors could be added to the backpack to develop applications on the inspection and search missions based on the requirements.”

Cyborg insects could be a promising alternative to conventional robots thanks to their small size, ability to operate for hours on little food, and their adaptability to new environments. As well as helping with search and rescue operations, the researchers suggest that swarms of these robot bugs could be used to inspect factories.

The researchers had already shown that signals from electrodes implanted into cockroach abdomens could be used to control the direction of travel and get them to slow down and even stop. But installing these electrodes and a small backpack with control electronics required painstaking work from a trained researcher.

That kind of approach makes it difficult to scale up to the hundreds or even thousands of insects required for practically useful swarms. So, the team developed an automated system that could install the electronics on a cockroach with minimal human involvement.

First, the researchers anesthetized the cockroaches by exposing them to carbon dioxide for 10 minutes. They then placed the bugs on a platform where a pair of rods powered by a motor pressed down on two segments of their hard exoskeletons to expose a soft membrane just behind the head.

A computer vision system then identified where to implant the electrodes and used this information to guide a robotic arm carrying the electronic backpack. Electrodes in place, the arm pressed the backpack down until its mounting mechanism hooked into another section of the insect’s body. The arm then released the backpack, and the rods retracted to free the cyborg bug.
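The published description maps naturally onto a simple control sequence. Here is a minimal sketch of that sequence in Python (our own pseudocode; every object and method name is a hypothetical stand-in, since the team’s control software isn’t public):

```python
def assemble_cyborg(roach, rods, vision, arm):
    """Hypothetical outline of the automated mounting sequence.

    All objects and method names are illustrative stand-ins,
    not the Nanyang team's actual API.
    """
    roach.anesthetize(gas="CO2", minutes=10)   # sedate before handling
    rods.press(segments=2)                     # expose the soft membrane behind the head
    site = vision.locate_implant_site(roach)   # computer vision picks the electrode site
    arm.move_to(site)
    arm.implant_electrodes()
    arm.press_backpack_until_hooked()          # mounting mechanism hooks the insect's body
    arm.release_backpack()
    rods.retract()                             # free the finished cyborg (~68 s end to end)
```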

The entire assembly process takes just 68 seconds, and the resulting cockroaches are just as controllable as ones made manually, the researchers found. A four-bug team was able to cover 80 percent of a 20-square-foot outdoor test environment filled with obstacles in about 10 minutes.

Fabian Steinbeck at Bielefeld University in Germany told New Scientist that using these cyborg bugs for search and rescue might be tricky as they currently have to be controlled remotely. Getting signal in collapsed buildings and similar challenging terrain would be difficult, and we don’t yet have the technology to get them to navigate autonomously.

Rapid improvements in both AI and communication technologies could soon change that though. So, it may not be too far-fetched to imagine swarms of robot bugs coming to your rescue in the near future.

Image Credit: Erik Karits from Pixabay

This Tiny House Is Made From the Recycled Heart of a Wind Turbine

If you’ve tried to rent or buy a home in the last few years, you may have noticed there’s a severe housing shortage in the US and around the world. Millions of people need homes, and there aren’t nearly enough of them to go around. Plenty of creative, low-cost solutions have been proposed, from inflatable houses to 3D-printed houses, “foldable” houses, and houses that ship in kits to be assembled like furniture.

Now there’s another idea joining the fray, and it carries the added benefit of playing a role in the renewable energy transition: It’s a tiny house made from the nacelle of a decommissioned wind turbine.

The house, unveiled last month as part of Dutch Design Week, is a collaboration between Swedish power company Vattenfall and Dutch architecture firm Superuse Studios. Wind turbines typically have a 20-year lifespan, and Vattenfall is looking for novel ways to repurpose parts of its turbines. With the first generation of large-scale turbines now reaching the end of their useful life, there will be thousands of nacelles (not to mention blades, towers, and generators) in search of a new purpose.

Most people are familiar with a wind turbine’s blades and tower, but not so much the nacelle. This giant rectangular box sits at the top of the turbine’s tower and houses its gearbox, shafts, generator, and brake. It’s the beating heart of the turbine, where the blades’ rotation is converted into electricity.

Though it’s big enough to be a tiny house, this particular nacelle is on the small side (as far as nacelles go). It’s 10 feet tall by 13 feet wide by 33 feet long. The interior space of the home is about 387 square feet, or the size of a small studio apartment or hotel room. The nacelle came from one of Vattenfall’s V80 turbines, which was installed at an Austrian wind farm in 2005 and has a production capacity of two megawatts. Turbine technology has come a long way since then; the largest ones in the world are approaching a production capacity of 15 megawatts.

Though there will be larger nacelles available, Superuse Studios intentionally chose a small one for its prototype. Their thinking was, if you can make a livable home in this small of a space, you can definitely make a livable home—and add more features—in a larger space; better to start small and grow than start big then downsize.

Though the house is small, its designers ensured it was fully compliant with Dutch building code and therefore suitable for habitation. It has a kitchen with a sink and a stove, a bathroom with a shower, a dining area, and a combined living/sleeping area. As you’d expect from a house made of recycled wind turbine parts, it’s also climate-friendly: Its electricity comes partly from rooftop solar panels, and it has a bidirectional charger for electric vehicles (meaning power from the house can charge the car or power from the car’s battery can be used in the house). There’s an electric heat pump for temperature control, and a solar heater for hot water.

Solar panels and wind turbines don’t last forever, and they use various raw and engineered materials. When the panels or turbines can’t produce power anymore, what’s to be done with all that concrete, copper, steel, silicon, glass, or aluminum? Finding purposeful ways to reuse or recycle these materials will be a crucial component of a successful transition away from fossil fuels.

“We are looking for innovative ways in which you can reuse materials from used turbines as completely as possible,” said Thomas Hjort, Vattenfall’s director of innovation, in a press release. “So making something new from them with as few modifications as possible. That saves raw materials, energy consumption and in this way we ensure that these materials are useful for many years after their first working life.”

As of right now, the nacelle tiny house is just a proof of concept; there are no plans to start producing more in the immediate future, but it’s not outside the realm of possibility eventually. Picture communities of these houses arranged in rows or circles, with communal spaces or parks in between. Using a larger nacelle, homes with one or two bedrooms could be designed, expanding the possibilities for inhabitants and giving purpose to more decommissioned turbines.

“At least ten thousand of this generation of nacelles are available, spread around the world,” said Jos de Krieger, a partner at Superuse Studios. “Most of them have yet to be decommissioned. This offers perspective and a challenge for owners and decommissioners. If such a complex structure as a house is possible, then numerous simpler solutions are also feasible and scalable.”

If 10,000-plus nacelles are available, that means 30,000-plus blades are available. What innovative use might designers and engineers find for them?

Image Credit: Vattenfall

Niantic Is Training a Giant ‘Geospatial’ AI on Pokémon Go Data

If you want to see what’s next in AI, just follow the data. ChatGPT and DALL-E trained on troves of internet data. Generative AI is making inroads in biotechnology and robotics thanks to existing or newly assembled datasets. One way to glance ahead, then, is to ask: What colossal datasets are still ripe for the picking?

Recently, a new clue emerged.

In a blog post, gaming company Niantic said it’s training a new AI on millions of real-world images collected by Pokémon Go players and in its Scaniverse app. Inspired by the large language models powering chatbots, they call their algorithm a “large geospatial model” and hope it’ll be as fluent in the physical world as ChatGPT is in the world of language.

Follow the Data

This moment in AI is defined by algorithms that generate language, images, and increasingly, video. With OpenAI’s DALL-E and ChatGPT, anyone can use everyday language to get a computer to whip up photorealistic images or explain quantum physics. Now, the company’s Sora algorithm is applying a similar approach to video generation. Others are competing with OpenAI, including Google, Meta, and Anthropic.

The crucial insight that gave rise to these models: The rapid digitization of recent decades is useful for more than entertaining and informing us humans—it’s food for AI too. Few would have viewed the internet in this way at its advent, but in hindsight, humanity has been busy assembling an enormous educational dataset of language, images, code, and video. For better or worse—there are several copyright infringement lawsuits in the works—AI companies scraped all that data to train powerful AI models.

Now that they know the basic recipe works well, companies and researchers are looking for more ingredients.

In biotech, labs are training AI on collections of molecular structures built over decades and using it to model and generate proteins, DNA, RNA, and other biomolecules to speed up research and drug discovery. Others are testing large AI models in self-driving cars and warehouse and humanoid robots, both as a better way to tell robots what to do and as a way to teach them how to navigate and move through the world.

Of course, for robots, fluency in the physical world is crucial. Just as language is endlessly complex, so too are the situations a robot might encounter. Robot brains coded by hand can never account for all the variation. That’s why researchers are now building large datasets with robots in mind. But they’re nowhere near the scale of the internet, where billions of humans have been working in parallel for a very long time.

Might there be an internet for the physical world? Niantic thinks so. It’s called Pokémon Go. But the hit game is only one example. Tech companies have been creating digital maps of the world for years. Now, it seems likely those maps will find their way into AI.

Pokémon Trainers

Released in 2016, Pokémon Go was an augmented reality sensation.

In the game, players track down digital characters—or Pokémon—that have been placed all over the world. Using their phones as a kind of portal, players see characters superimposed on a physical location—say, sitting on a park bench or loitering by a movie theater. A newer offering, Pokémon Playground, allows users to embed characters at locations for other players. All this is made possible by the company’s detailed digital maps.

Niantic’s Visual Positioning System (VPS) can determine a phone’s position down to the centimeter from a single image of a location. In part, VPS assembles 3D maps of locations classically, but the system also relies on a network of machine learning algorithms—one or more per location—trained on years of player images and scans taken at various angles, times of day, and seasons and stamped with a position in the world.

“As part of Niantic’s Visual Positioning System (VPS), we have trained more than 50 million neural networks, with more than 150 trillion parameters, enabling operation in over a million locations,” the company wrote in its recent blog post.
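In software terms, that amounts to a registry of millions of small location-specific models rather than one big network. A minimal sketch of the contrast (ours alone; the names are invented, and Niantic has not published VPS code):

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float  # a real system would also estimate orientation

class LocationModel:
    """Stand-in for one of the ~50 million per-location networks."""
    def estimate_pose(self, image) -> Pose:
        # Regress the camera pose from a single image of this one location.
        ...

# VPS today: look up the small model trained for wherever the player is.
per_location_models: dict[str, LocationModel] = {}

def localize(location_id: str, image) -> Pose:
    return per_location_models[location_id].estimate_pose(image)

class LargeGeospatialModel:
    """One foundation model trained across all locations, so it can
    extrapolate to views (say, the back of a church) no one has scanned."""
    def estimate_pose(self, image) -> Pose:
        ...
```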

Now, Niantic wants to go further.

Instead of millions of individual neural networks, they want to use Pokémon Go and Scaniverse data to train a single foundation model. Whereas individual models are constrained by the images they’ve been fed, the new model would generalize across all of them. Confronted with the front of a church, for example, it would draw on all the churches and angles it’s seen—front, side, rear—to visualize parts of the church it hasn’t been shown.

This is a bit like what we humans do as we navigate the world. We might not be able to see around a corner, but we can guess what’s there—it might be a hallway, the side of a building, or a room—and plan for it, based on our point of view and experience.

Niantic writes that a large geospatial model would allow it to improve augmented reality experiences. But it also believes such a model might power other applications, including in robotics and autonomous systems.

Getting Physical

Niantic believes it’s in a unique position because it has an engaged community contributing a million new scans a week. In addition, those scans are from the view of pedestrians, as opposed to the street, like in Google Maps or for self-driving cars. They’re not wrong.

If we take the internet as an example, then the most powerful new datasets may be collected by millions, or even billions, of humans working in concert.

At the same time, Pokémon Go isn’t comprehensive. Though locations span continents, they’re sparse in any given place, and whole regions are completely dark. Further, other companies, perhaps most notably Google, have long been mapping the globe. But unlike the internet, these datasets are proprietary and splintered.

Whether that matters—that is, whether an internet-sized dataset is needed to make a generalized AI that’s as fluent in the physical world as LLMs are in the verbal—isn’t clear.

But it’s possible a more complete dataset of the physical world arises from something like Pokémon Go, only supersized. This has already begun with smartphones, which have sensors to take images, videos, and 3D scans. In addition to AR apps, users are increasingly being incentivized to use these sensors with AI, like taking a picture of a fridge and asking a chatbot what to cook for dinner. New devices, like AR glasses, could expand this kind of usage, yielding a data bonanza for the physical world.

Of course, collecting data online is already controversial, and privacy is a big issue. Extending those problems to the real world is less than ideal.

After 404 Media published an article on the topic, Niantic added a note, “This scanning feature is completely optional—people have to visit a specific publicly-accessible location and click to scan. This allows Niantic to deliver new types of AR experiences for people to enjoy. Merely walking around playing our games does not train an AI model.” Other companies, however, may not be as transparent about data collection and use.

It’s also not certain that building new algorithms inspired by large language models will be straightforward. MIT, for example, recently built a new architecture aimed specifically at robotics. “In the language domain, the data are all just sentences,” Lirui Wang, the lead author of a paper describing the work, told TechCrunch. “In robotics, given all the heterogeneity in the data, if you want to pretrain in a similar manner, we need a different architecture.”

Regardless, researchers and companies will likely continue exploring areas where LLM-like AI may be applicable. And perhaps as each new addition matures, it will be a bit like adding a brain region—stitch them together and you get machines that think, speak, write, and move through the world as effortlessly as we do.

Image: Kamil Switalski on Unsplash

Could We Ever Decipher an Alien Language? Uncovering How AI Communicates May Be Key

In the 2016 science fiction movie Arrival, a linguist is faced with the daunting task of deciphering an alien language consisting of palindromic phrases, which read the same backwards as they do forwards, written with circular symbols. As she discovers various clues, different nations around the world interpret the messages differently—with some assuming they convey a threat.

If humanity ended up in such a situation today, our best bet may be to turn to research uncovering how artificial intelligence develops languages.

But what exactly defines a language? Most of us use at least one to communicate with people around us, but how did it come about? Linguists have been pondering this very question for decades, yet there is no easy way to find out how language evolved.

Language is ephemeral: it leaves no examinable trace in the fossil record. Unlike bones, we can’t dig up ancient languages to study how they developed over time.

While we may be unable to study the true evolution of human language, perhaps a simulation could provide some insights. That’s where AI comes in—a fascinating field of research called emergent communication, which I have spent the last three years studying.

To simulate how language may evolve, we give AI agents simple tasks that require communication, like a game where one robot must guide another to a specific location on a grid without showing it a map. We provide (almost) no restrictions on what they can say or how—we simply give them the task and let them solve it however they want.

Because solving these tasks requires the agents to communicate with each other, we can study how their communication evolves over time to get an idea of how language might evolve.

Similar experiments have been done with humans. Imagine you, an English speaker, are paired with a non-English speaker. Your task is to instruct your partner to pick up a green cube from an assortment of objects on a table.

You might try to gesture a cube shape with your hands and point at grass outside the window to indicate the color green. Over time, you’d develop a sort of proto-language together. Maybe you’d create specific gestures or symbols for “cube” and “green.” Through repeated interactions, these improvised signals would become more refined and consistent, forming a basic communication system.

This works similarly for AI. Through trial and error, algorithms learn to communicate about objects they see, and their conversation partners learn to understand them.

But how do we know what they’re talking about? If they only develop this language with their artificial conversation partner and not with us, how do we know what each word means? After all, a specific word could mean “green,” “cube,” or worse—both. This challenge of interpretation is a key part of my research.

Cracking the Code

The task of understanding AI language may seem almost impossible at first. If I tried speaking Polish (my mother tongue) to a collaborator who only speaks English, we couldn’t understand each other or even know where each word begins and ends.

The challenge with AI languages is even greater, as they might organize information in ways completely foreign to human linguistic patterns.

Fortunately, linguists have developed sophisticated tools using information theory to interpret unknown languages.

Just as archaeologists piece together ancient languages from fragments, we use patterns in AI conversations to understand their linguistic structure. Sometimes we find surprising similarities to human languages, and other times we discover entirely novel ways of communication.

These tools help us peek into the “black box” of AI communication, revealing how AI agents develop their own unique ways of sharing information.

My recent work focuses on using what the agents see and say to interpret their language. Imagine having a transcript of a conversation in a language unknown to you, along with what each speaker was looking at. We can match patterns in the transcript to objects in the participant’s field of vision, building statistical connections between words and objects.

For example, perhaps the phrase “yayo” coincides with a bird flying past—we could guess that “yayo” is the speaker’s word for “bird.” Through careful analysis of these patterns, we can begin to decode the meaning behind the communication.
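A minimal sketch of this kind of analysis, assuming toy data (the utterances, the objects, and the choice of pointwise mutual information as the association score are all illustrative, not the exact method from the paper):

```python
from collections import Counter
from math import log

# Hypothetical transcript: each episode pairs an utterance (tokens)
# with the objects visible to the speaker at that moment.
episodes = [
    (["yayo"], ["bird", "tree"]),
    (["yayo", "koto"], ["bird", "rock"]),
    (["koto"], ["rock"]),
    (["yayo"], ["bird"]),
]

word_counts, object_counts, pair_counts = Counter(), Counter(), Counter()
n = len(episodes)

for words, objects in episodes:
    for w in set(words):
        word_counts[w] += 1
        for o in set(objects):
            pair_counts[(w, o)] += 1
    for o in set(objects):
        object_counts[o] += 1

def pmi(word: str, obj: str) -> float:
    """Pointwise mutual information between a word and a visible object.

    High positive values mean the word and object co-occur more often
    than chance, hinting that the word refers to the object.
    """
    p_w = word_counts[word] / n
    p_o = object_counts[obj] / n
    p_wo = pair_counts[(word, obj)] / n
    return log(p_wo / (p_w * p_o))

print(pmi("yayo", "bird"))  # positive: "yayo" likely means "bird"
print(pmi("koto", "rock"))  # positive: "koto" likely means "rock"
```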

In the latest paper by me and my colleagues, set to appear in the conference proceedings of Neural Information Processing Systems (NeurIPS), we show that such methods can be used to reverse-engineer at least parts of the AIs’ language and syntax, giving us insights into how they might structure communication.

Aliens and Autonomous Systems

How does this connect to aliens? The methods we’re developing for understanding AI languages could help us decipher any future alien communications.

If we are able to obtain some written alien text together with some context (such as visual information relating to the text), we could apply the same statistical tools to analyze them. The approaches we’re developing today could be useful tools in the future study of alien languages, known as xenolinguistics.

But we don’t need to find extraterrestrials to benefit from this research. There are numerous applications, from improving language models like ChatGPT or Claude to improving communication between autonomous vehicles or drones.

By decoding emergent languages, we can make future technology easier to understand. Whether it’s knowing how self-driving cars coordinate their movements or how AI systems make decisions, we’re not just creating intelligent systems—we’re learning to understand them.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Tomas Martinez on Unsplash

Europe Aims to Visit This Large Asteroid When It Brushes by Earth in 2029

The European Space Agency has given the go-ahead for initial work on a mission to visit an asteroid called Apophis. If approved at a key meeting next year, the robotic spacecraft, known as the Rapid Apophis Mission for Space Safety (Ramses), will rendezvous with the asteroid in February 2029.

Apophis is 340 meters wide, about the same as the height of the Empire State Building. If it were to hit Earth, it would cause wholesale destruction hundreds of miles from its impact site. The energy released would equal that from tens or hundreds of nuclear weapons, depending on the yield of the device.

Luckily, Apophis won’t hit Earth in 2029. Instead, it will pass by Earth safely at a distance of 19,794 miles (31,860 kilometers), about one-twelfth the distance from the Earth to the Moon. Nevertheless, this is a very close pass by such a big object, and Apophis will be visible with the naked eye.

NASA and the European Space Agency have seized this rare opportunity to send separate robotic spacecraft to rendezvous with Apophis and learn more about it. Their missions could help inform efforts to deflect an asteroid that threatens Earth, should we need to in the future.

The Threat From Asteroids

Some 66 million years ago, an asteroid the size of a small city hit Earth. The impact of this asteroid brought about a global extinction event that wiped out the dinosaurs.

Earth is in constant danger of being hit by asteroids, leftover debris from the formation of the solar system 4.5 billion years ago. Located in the asteroid belt between Mars and Jupiter, asteroids come in many shapes and sizes. Most are small, only 10 meters across, but the largest are hundreds of kilometers across, larger than the asteroid that killed the dinosaurs.

Artist’s impression of Apophis. Image Credit: NASA

The asteroid belt contains one to two million asteroids larger than a kilometer across and millions of smaller bodies. These space rocks feel each other’s gravitational pull, as well as the gravitational tug of Jupiter on one side and the inner planets on the other.

Because of this gravitational tug-of-war, every once in a while an asteroid is thrown out of its orbit and hurtles towards the inner solar system. There are 35,000 such “near-Earth objects” (NEOs). Of these, 2,300 “potentially hazardous objects” (PHOs) have orbits that intersect Earth’s and are large enough that they pose a real threat to our survival.

Do Not Go Gentle Into That Good Night

During the 20th century, astronomers set up several surveys, such as Atlas, in order to detect and study hazardous asteroids. But detection is not enough; we have to find a way to defend Earth against an incoming asteroid.

Blowing up an asteroid, as depicted in the movie Armageddon, is no use. The asteroid would be broken into smaller fragments, which would keep moving in much the same direction. Instead of being hit by one large asteroid, Earth would be hit by a swarm of smaller objects.

The preferred solution is to deflect the incoming asteroid away from Earth so that it passes by harmlessly. To do so, we would need to apply an external force to the asteroid to nudge it away. A popular idea is to fire a projectile at the asteroid. NASA did this in 2022, when a spacecraft called DART collided with an asteroid. Before we do this out of necessity, we have to understand how different types of asteroids would react to such an impact.
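As a rough illustration of the physics (our own back-of-the-envelope note, not from the article): a projectile of mass $m$ hitting an asteroid of mass $M$ at relative speed $v$ changes the asteroid’s velocity by approximately

$$\Delta v \approx \beta \, \frac{m}{M} \, v$$

where $\beta \geq 1$ is a momentum-enhancement factor that accounts for the extra push from surface debris ejected by the impact. Because $M$ is enormous, $\Delta v$ is tiny, so the nudge must be applied years in advance: a millimeter-per-second change in speed grows into a miss distance of hundreds of kilometers or more over a decade.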

Apophis, Ramses, and Osiris-Apex

Apophis was discovered in 2004. The asteroid passed by Earth on December 21, 2004 at a distance of 14 million kilometers. It returned in 2021 and will swing by Earth again in 2029, 2036, and 2068.

Until recently, there was a small chance that Apophis could collide with Earth in 2068. However, during Apophis’ approach in 2021, astronomers used radar observations to refine their knowledge of the asteroid’s orbit. These showed that Apophis would not hit our planet for the next 100 years.

The Ramses mission will rendezvous with Apophis in February 2029, two months before its closest approach to Earth on Friday, April 13. It will then accompany the asteroid as it approaches Earth. The goal is to learn how Apophis’s orbit, rotation, and shape will change as it passes so close to Earth.

In 2016, NASA launched the “Origins, Spectral Interpretation, Resource Identification, and Security–Regolith Explorer” (Osiris-Rex) mission to study the near-Earth asteroid Bennu. It intercepted Bennu in 2020 to collect samples of rock and soil from its surface and dispatched the rocks in a capsule, which arrived on Earth in 2023.

The spacecraft is still out there, so NASA renamed it the “Origins, Spectral Interpretation, Resource Identification and Security–Apophis Explorer” (Osiris-Apex) and assigned it to study Apophis. Osiris-Apex will reach the asteroid just after its 2029 close encounter. It will then fly low over Apophis’s surface and fire its engines, disturbing the rocks and dust that cover the asteroid to reveal the layer underneath.

A close flyby of an asteroid as large as Apophis happens only once every 5,000 to 10,000 years. Apophis’s arrival in 2029 presents a rare opportunity to study such an asteroid up close and to see how it is affected by Earth’s gravitational pull. The information gleaned will shape the way we choose to protect Earth in the future from a real killer asteroid.

Ancient Egyptian Mythology

When Ramses and Osiris-Apex meet up with Apophis in 2029 they will inadvertently reenact a core component of ancient Egyptian cosmology. To the ancient Egyptians, the sun was personified by several powerful gods, chief among them Re. The sun’s setting in the evening was interpreted as Re dying and entering the netherworld.

During his nighttime journey through the netherworld, Re was menaced by the great snake Apophis, who embodied the powers of darkness and dissolution. Only after Apophis had been defeated could Re be revitalized by Osiris, the king of the netherworld. Re could then once again be reborn in the east, rising in the sky once more.

Tomb murals, coffins, and funerary papyri depict Apophis as a large, coiled snake threatening Re as he sails in his solar barque (sailing ship). But Apophis is always defeated, his body pierced by a spear or riven by knives.

Though the asteroid Apophis poses no danger in the near future, Ramses (named after the pharaohs of the same name, which meant “born of Re”) and Osiris-Apex will study it so that one day we will know how to defeat it—or any of its distant brethren.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

What Is AI Superintelligence? Could It Destroy Humanity? And Is It Really Almost Here?

In 2014, the British philosopher Nick Bostrom published a book about the future of artificial intelligence with the ominous title Superintelligence: Paths, Dangers, Strategies. It proved highly influential in promoting the idea that advanced AI systems—“superintelligences” more capable than humans—might one day take over the world and destroy humanity.

A decade later, OpenAI boss Sam Altman says superintelligence may only be “a few thousand days” away. A year ago, Altman’s OpenAI cofounder Ilya Sutskever set up a team within the company to focus on “safe superintelligence,” but he and his team have now raised a billion dollars to create a startup of their own to pursue this goal.

What exactly are they talking about? Broadly speaking, superintelligence is anything more intelligent than humans. But unpacking what that might mean in practice can get a bit tricky.

Different Kinds of AI

In my view, the most useful way to think about different levels and kinds of intelligence in AI was developed by US computer scientist Meredith Ringel Morris and her colleagues at Google.

Their framework lists six levels of AI performance: no AI, emerging, competent, expert, virtuoso, and superhuman. It also makes an important distinction between narrow systems, which can carry out a small range of tasks, and more general systems.

A narrow, no-AI system is something like a calculator. It carries out various mathematical tasks according to a set of explicitly programmed rules.

There are already plenty of very successful narrow AI systems. Morris gives the Deep Blue chess program that famously defeated world champion Garry Kasparov way back in 1997 as an example of a virtuoso-level narrow AI system.

Table: Six levels of AI performance for narrow and general systems. Source: The Conversation, adapted from Morris et al.; created with Datawrapper.

Some narrow systems even have superhuman capabilities. One example is AlphaFold, which uses machine learning to predict the structure of protein molecules, and whose creators won the Nobel Prize in Chemistry this year.

What about general systems? This is software that can tackle a much wider range of tasks, including things like learning new skills.

A general no-AI system might be something like Amazon’s Mechanical Turk: It can do a wide range of things, but it does them by asking real people.

Overall, general AI systems are far less advanced than their narrow cousins. According to Morris, the state-of-the-art language models behind chatbots such as ChatGPT are general AI—but they are so far at the “emerging” level (meaning they are “equal to or somewhat better than an unskilled human”) and have yet to reach “competent” (as good as 50 percent of skilled adults).

So by this reckoning, we are still some distance from general superintelligence.

How Intelligent Is AI Right Now?

As Morris points out, precisely determining where any given system sits would depend on having reliable tests or benchmarks.

Depending on our benchmarks, an image-generating system such as DALL-E might be at virtuoso level (because it can produce images 99 percent of humans could not draw or paint), or it might be emerging (because it produces errors no human would, such as mutant hands and impossible objects).
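To make the benchmark dependence concrete, here is a minimal sketch (our own, not code from Morris et al.) that maps a system’s percentile against skilled adults on some benchmark to a level; the “competent” and “virtuoso” thresholds follow the definitions quoted above, while the 90th-percentile “expert” cutoff is our assumption:

```python
def performance_level(percentile_vs_skilled_adults: float) -> str:
    """Map a benchmark percentile to a Morris et al.-style level.

    The 50th-percentile ("competent") and 99th-percentile ("virtuoso")
    thresholds come from the article; the 90th-percentile "expert"
    cutoff is an assumption for illustration.
    """
    if percentile_vs_skilled_adults >= 100:
        return "superhuman"  # outperforms all humans
    if percentile_vs_skilled_adults >= 99:
        return "virtuoso"
    if percentile_vs_skilled_adults >= 90:
        return "expert"      # assumed cutoff
    if percentile_vs_skilled_adults >= 50:
        return "competent"
    if percentile_vs_skilled_adults > 0:
        return "emerging"
    return "no AI"

# The same system lands at different levels under different benchmarks:
print(performance_level(99.5))  # image-quality benchmark -> "virtuoso"
print(performance_level(10.0))  # anatomical-correctness benchmark -> "emerging"
```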

There is significant debate even about the capabilities of current systems. One notable 2023 paper argued GPT-4 showed “sparks of artificial general intelligence.”

OpenAI says its latest language model, o1, can “perform complex reasoning” and “rivals the performance of human experts” on many benchmarks.

However, a recent paper from Apple researchers found o1 and many other language models have significant trouble solving genuine mathematical reasoning problems. Their experiments show the outputs of these models seem to resemble sophisticated pattern-matching rather than true advanced reasoning. This indicates superintelligence is not as imminent as many have suggested.

Will AI Keep Getting Smarter?

Some people think the rapid pace of AI progress over the past few years will continue or even accelerate. Tech companies are investing hundreds of billions of dollars in AI hardware and capabilities, so this doesn’t seem impossible.

If this happens, we may indeed see general superintelligence within the “few thousand days” proposed by Sam Altman (that’s a decade or so in less sci-fi terms). Sutskever and his team mentioned a similar timeframe in their superalignment article.

Many recent successes in AI have come from the application of a technique called “deep learning,” which, in simplistic terms, finds associative patterns in gigantic collections of data. Indeed, this year’s Nobel Prize in Physics was awarded to John Hopfield and the “Godfather of AI” Geoffrey Hinton for their invention of the Hopfield network and Boltzmann machine, which are the foundation of many powerful deep learning models used today.

General systems such as ChatGPT have relied on data generated by humans, much of it in the form of text from books and websites. Improvements in their capabilities have largely come from increasing the scale of the systems and the amount of data on which they are trained.

However, there may not be enough human-generated data to take this process much further (although efforts to use data more efficiently, generate synthetic data, and improve transfer of skills between different domains may bring improvements). Even if there were enough data, some researchers say language models such as ChatGPT are fundamentally incapable of reaching what Morris would call general competence.

One recent paper has suggested an essential feature of superintelligence would be open-endedness, at least from a human perspective. It would need to be able to continuously generate outputs that a human observer would regard as novel and be able to learn from.

Existing foundation models are not trained in an open-ended way, and existing open-ended systems are quite narrow. This paper also highlights how either novelty or learnability alone is not enough. A new type of open-ended foundation model is needed to achieve superintelligence.

What Are the Risks?

So what does all this mean for the risks of AI? In the short term, at least, we don’t need to worry about superintelligent AI taking over the world.

But that’s not to say AI doesn’t present risks. Again, Morris and co have thought this through: As AI systems gain great capability, they may also gain greater autonomy. Different levels of capability and autonomy present different risks.

For example, when AI systems have little autonomy and people use them as a kind of consultant—when we ask ChatGPT to summarize documents, say, or let the YouTube algorithm shape our viewing habits—we might face a risk of over-trusting or over-relying on them.

In the meantime, Morris points out other risks to watch out for as AI systems become more capable, ranging from people forming parasocial relationships with AI systems to mass job displacement and society-wide ennui.

What’s Next?

Let’s suppose we do one day have superintelligent, fully autonomous AI agents. Will we then face the risk they could concentrate power or act against human interests?

Not necessarily. Autonomy and control can go hand in hand. A system can be highly automated, yet provide a high level of human control.

Like many in the AI research community, I believe safe superintelligence is feasible. However, building it will be a complex and multidisciplinary task, and researchers will have to tread unbeaten paths to get there.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

This Radical New Farming Method Would Replace Photosynthesis With Solar Power

Farming immediately sparks images of lush fields of leafy greens under a blue sky, corn blowing in the wind, or majestic terraced rice paddies carved into mountainsides. Agriculture changed societies and our food habits roughly 12,000 years ago when humans switched from nomadic hunter-gatherer lifestyles to more permanent settlements.

In recent centuries, innovative farming equipment and synthetic chemical fertilizers have boosted food production to feed an increasingly growing population of people. But as any backyard gardener knows, growing plant-based food—lettuce, tomatoes, herbs, grains, pumpkins—still mostly relies on the age-old strategy: Plant seeds in nutritious soil, keep them well hydrated with plenty of sunlight, and wait for them to grow.

This strategy has downsides. Agriculture uses nearly half of the world’s habitable land and accounts for up to a third of human-generated greenhouse gas emissions, wrote Feng Jiao at Washington University in St. Louis and his team in a recent analysis.

The reason? While sunny regions naturally provide enough light to grow crops, areas with colder winters often need grow lights and greenhouses part of the year. This increases energy consumption, logistical headaches, and ultimately, food costs.

In their paper, Jiao and colleagues argue for a new method that could dramatically revamp farming practices to reduce land use and greenhouse gas emissions.

Dubbed “electro-agriculture,” the approach uses solar panels to trigger a chemical reaction that turns ambient CO2 into an energy source called acetate. Certain mushrooms, yeast, and algae already consume acetate as food. With a slight genetic tweak, we could also engineer other common foods such as grains, tomatoes, or lettuce to consume acetate.

It could be “a groundbreaking revolution in farming,” wrote the team.

According to one estimate, if the US were to fully adopt electro-agriculture, it would reduce agricultural land use by nearly 90 percent. A similar system could also allow more efficient crop growth during spaceflight, where efficiency in small spaces is key. With more research, it might even be possible to bypass traditional photosynthesis with acetate and grow plants in the dark.

“The whole point of this new process [is] to try to boost the efficiency of photosynthesis,” said Jiao in a press release. “Right now, we are at about four percent efficiency, which is already four times higher than for photosynthesis, and because everything is more efficient with this method, the CO2 footprint associated with the production of the food becomes much smaller.”

Man Versus Food

Agriculture is one of the most difficult domains in which to reduce carbon emissions. As the global population increases, its impact on the environment will likely grow.

“There is an urgent need for the global food system to be reimagined to sustain a habitable planet,” wrote the team.

Photosynthesis is at the heart of agriculture. In plants and some bacteria, green-tinted molecular machines called chloroplasts absorb sunlight and churn that light into energy. It’s no coincidence most farms are in sun-bathed areas like central California.

Farmers and scientists have tried shrinking the agricultural footprint with vertical farming. As the name suggests, vertical farms grow crops on stacked shelves rather than large horizontal fields. The method often relies on hydroponics, in which plants absorb nutrients from a water-based system instead of soil, similar to AeroGarden but at an industrial scale.

These systems run indoors, so plants can grow all year. But heavy reliance on artificial grow lights means high energy consumption limits their ability to scale.

Part of the problem is efficiency. Much of the “electricity supplied to the LED grow lights in conventional vertical farming is lost to heat,” explained the team.

Electro-agriculture, or “electro-ag,” skirts these challenges. The system captures ambient CO2 from the air and uses water and electricity to convert the gas into different molecules—including ethanol and acetate, which is “plant food” for some species.
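In simplified overall terms (our summary of the chemistry, not an equation from the paper), the acetate route is:

$$2\,\mathrm{CO_2} + 2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{CH_3COOH} + 2\,\mathrm{O_2}$$

with solar electricity supplying the energy that sunlight would normally provide; reported electrochemical systems typically run the conversion in two steps via a carbon monoxide intermediate.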

Acetate is a vinegar-like chemical at the heart of many biological reactions. One recent study found that acetate made from CO2 could be used to cultivate yeast, mushrooms, and a type of green algae in total darkness without the need for natural photosynthesis. With some sunlight, the chemical boosted growth four-fold in nine different crop types compared to traditional farming techniques.

These initial results got scientists wondering: Can we use acetate alone to replace photosynthesis?

Not quite. Most adult crop plants naturally require photosynthesis to build up their weight and size. Plants grown with electro-ag would need to shift their metabolism to consume acetate—which most adult plants struggle to process—as a primary food source.

But plants can use the molecule for energy as they’re germinating from seeds. It’s a bit like people who drank milk as infants but later became lactose intolerant. The genetic programming is still there; it just needs to be reactivated.

Here’s where genetic engineering comes in.

By tweaking genes involved in acetate metabolism, it might be possible to reawaken the plants’ natural ability to digest the molecule. The strategy hasn’t been directly tested yet. But in bacteria, amping up a gene involved in acetate metabolism boosted their ability to eat it.

Engineering plants that eat acetate is a “critical step” toward building an electro-ag system.

The team envisions a vertically stacked setup to reduce land usage, kind of like a fridge with three sections. The first section—the roof—would be covered in solar panels to gather energy. The middle section would use this energy to break down CO2 and generate acetate to feed plants growing in the bottom section. Depending on the type of crop, this section could hold roughly three to seven “floors” of plants stacked on top of each other, like trays in a fridge.
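The article’s headline numbers are consistent with some rough arithmetic. A sketch (ours, assuming a typical field photosynthesis efficiency of about 1 percent):

```python
photosynthesis_eff = 0.01  # assumed ~1% field efficiency for crops
electro_ag_eff = 0.04      # "about four percent," per Jiao

# Higher end-to-end efficiency shrinks the land needed for the same food energy.
land_fraction = photosynthesis_eff / electro_ag_eff  # 0.25, i.e., 75% less land

# Stacking crops on several floors multiplies the saving.
floors = 4  # the article suggests roughly three to seven floors
land_fraction /= floors

print(f"~{1 - land_fraction:.0%} less land")  # ~94%, in line with the ~88-90% claim
```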

Into the Wild

Electro-ag could benefit the environment, slashing total land usage for farming by roughly 88 percent in the US alone. This would free up over one billion acres of land that could be restored to natural ecosystems, such as dense forests. The technology could also help stabilize food prices. As weather becomes increasingly unpredictable due to climate change, developing nations are often hit hardest. A large-scale indoor system could help put a lid on volatility.

But how much all this would cost is still uncertain. The field is still in a very early stage. Currently, scientists are tweaking tomato and lettuce genes to increase their ability to use acetate as food. High-calorie staple crops, such as potato, corn, rice, and wheat, are next on the list. Plants aside, a similar technology—in theory—could also be used for cultivating dairy and plant-based meat, although the idea hasn’t been tested yet.

“This is just the first step for this research, and I think there’s a hope that its efficiency and cost will be significantly improved in the near future,” said Jiao.

Image Credit: Francesco Gallarotti on Unsplash

You’ll Soon Be Able to Book a Room at the World’s First 3D-Printed Hotel

The first 3D-printed house in the US was unveiled just over six years ago. Since then, homes have been printed all over the country and the world, from Virginia to California and Mexico to Kenya. If you’re intrigued by the concept but not sure whether you’re ready to jump on the bandwagon, you’ll soon be able to take a 3D-printed dwelling for a test run—by staying in the world’s first 3D-printed hotel.

The hotel is under construction in the city of Marfa, in the far west of Texas. It’s an expansion of an existing hotel called El Cosmico, which until now has really been more of a campground, offering accommodations in trailers, yurts, and tents. According to the property’s website, “the vision has been to create a living laboratory for artistic, cultural, and community experimentation.” The project is a collaboration between Austin, Texas-based 3D printing construction company Icon, architecture firm Bjarke Ingels Group, and El Cosmico’s owner, Liz Lambert.

El Cosmico will gain 43 new rooms and 18 houses, which will be printed using Icon’s gantry-style Vulcan printer. Vulcan is 46.5 feet (14.2 meters) wide by 15.5 feet (4.7 meters) tall, and it weighs 4.75 tons. It builds homes by pouring a proprietary concrete mixture called Lavacrete into a pattern dictated by software, squeezing out one layer at a time as it moves around on an axis set on a track. Its software, BuildOS, can be operated from a tablet or smartphone.
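To get a feel for how software dictates the pattern, here is a minimal sketch (ours; BuildOS is proprietary, so the interface below is invented) that generates nozzle waypoints for one circular wall, layer by layer:

```python
import math

def circular_wall_path(radius_m: float, layer_height_m: float,
                       wall_height_m: float, points_per_layer: int = 360):
    """Yield (x, y, z) nozzle waypoints for a cylindrical wall.

    Illustrative only: a real controller also manages extrusion rate,
    material curing, and acceleration limits.
    """
    layers = round(wall_height_m / layer_height_m)
    for layer in range(layers):
        z = layer * layer_height_m
        for i in range(points_per_layer):
            theta = 2 * math.pi * i / points_per_layer
            yield (radius_m * math.cos(theta), radius_m * math.sin(theta), z)

# Example: a 4-meter-radius round room built up in 1 cm layers.
path = list(circular_wall_path(radius_m=4.0, layer_height_m=0.01, wall_height_m=3.0))
print(len(path), "waypoints")  # 300 layers x 360 points each
```

Because the path is just generated coordinates, a curved wall costs the printer no more effort than a straight one, which is why the designers can favor the fluid forms described below.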

Image Credit: Icon

One of the benefits of 3D-printed construction is that it’s much easier to diverge from conventional architecture and create curves and other shapes. The hotel project’s designers are taking full advantage of this; far from traditional boxy hotel rooms, they’re aiming to create unique architecture that’s aligned with its natural setting.

Image Credit: Icon

“By testing the geometric boundaries of Icon’s 3D-printed construction, we have imagined fluid, curvilinear structures that enjoy the freedom of form in the empty desert. By using the sand, soils, and colors of the terroir as our print medium, the circular forms seem to emerge from the very land on which they stand,” Bjarke Ingels, the founder and creative director of Bjarke Ingels Group, said in a press release.

Renderings of the completed project and photos of the initial construction show circular, neutral-toned structures that look like they might have sprouted up out of the ground. Don’t let that fool you, though—the interiors, while maybe not outright fancy, will be tastefully decorated and are quite comfortable-looking.

Image Credit: Icon

At first glance, Marfa seems like an odd choice for something as buzzy as a 3D-printed hotel. The town sits in the middle of the hot, dry Texas desert; it has a population of 1,700 people; and the closest airport is in El Paso, a three-hour drive away. But despite its relative isolation, Marfa is a hotspot for artists and art lovers and has a unique vibe all its own that draws flocks of tourists (according to Vogue, an estimated 49,000 people visited Marfa in 2019).

El Cosmico is not only expanding, it’s relocating to a 60-acre site on the outskirts of Marfa. Along with the 3D-printed accommodations, the site will have a restaurant, pool, spa, and communal facilities. Most of the trailers and tents from the existing property will be preserved and moved to the new site.

The project broke ground last month, and El Cosmico 2.0 is slated to open in 2026.

How much will it cost you to give 3D-printed construction a test run? Similar to how the market prices of commercial 3D-printed homes haven’t been dramatically lower than conventional houses, it seems 3D-printed hotel rooms will cost about the same as regular hotel rooms, or maybe more: Reservations for the new rooms can’t yet be booked, but they’re predicted to cost between $200 and $450 per night.

Image Credit: Icon

Scientists Say Net Zero Aviation Is Possible by 2050—If We Act Now

Aviation has proven to be one of the most stubbornly difficult industries to decarbonize. But a new roadmap outlined by University of Cambridge researchers says the sector could reach net zero by 2050 if urgent action is taken.

The biggest challenge when it comes to finding alternatives to fossil fuels in aviation is basic physics. Jet fuel is incredibly energy dense, which is crucial for a mode of transport where weight savings can dramatically impact range.

While efforts are underway to build planes powered by batteries, hydrogen, or methane, none can come close to matching kerosene, pound for pound, at present. Sustainable aviation fuel is another option, but so far, its uptake has been limited, and its green credentials are debatable.

Despite this, the authors of a new report from the University of Cambridge’s Aviation Impact Accelerator (AIA) say that with a concerted effort the industry can clean up its act. The report outlines four key sustainable aviation goals that, if implemented within the next five years, could help the sector become carbon neutral by the middle of the century.

“Too often the discussions about how to achieve sustainable aviation lurch between overly optimistic thinking about current industry efforts and doom-laden cataloging of the sector’s environmental evils,” Eliot Whittington, executive director at the Cambridge Institute for Sustainability Leadership, said in a press release.

“The Aviation Impact Accelerator modeling has drawn on the best available evidence to show that there are major challenges to be navigated if we’re to achieve net zero flying at scale, but that it is possible.”

The report notes that time is of the essence. Aviation is responsible for roughly 4 percent of global warming despite only 10 percent of the population flying, a figure that’s likely to rise as the world continues to develop. Despite global leaders pledging to make aviation net zero, current efforts to get there are not ambitious enough, the authors say.

After researching the interventions that could have the biggest impact and discussions at the inaugural meeting of the Transatlantic Sustainable Aviation Partnership at MIT last year, AIA came up with four focus areas that could put those goals within reach.

The first of these is to reduce contrails. While most of the focus is on emissions from burning jet fuel, the generation of persistent contrails can trap heat in the atmosphere and add significantly to warming.

Contrails can be avoided by adjusting an aircraft’s altitude in areas where they’re most likely to be formed, but the underlying science is poorly understood, as are potential strategies for adjusting air traffic. Therefore, the report suggests setting up several “living labs” in existing airspace to conduct data collection and experiments. These should be ready by the end of 2025, say the authors.
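In flight, avoidance could reduce to a simple rule of thumb, sketched below (our illustration; `forecast.is_ice_supersaturated` is an assumed interface to a humidity forecast, and validating such forecasts is precisely what the living labs would do):

```python
def adjust_for_contrails(altitude_ft: float, forecast) -> float:
    """Hypothetical decision rule: sidestep ice-supersaturated layers,
    where persistent contrails mainly form."""
    step_ft = 2000  # a typical flight-level change
    if not forecast.is_ice_supersaturated(altitude_ft):
        return altitude_ft  # no persistent-contrail risk here
    for candidate in (altitude_ft - step_ft, altitude_ft + step_ft):
        if not forecast.is_ice_supersaturated(candidate):
            return candidate  # a small altitude change avoids the moist layer
    return altitude_ft  # no clear layer nearby; accept the contrail
```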

The second goal is to reduce the amount of fuel airplanes use by introducing new aircraft and engine designs, improving operational efficiency of the sector, or just getting aircraft to fly slower. To catalyze action, governments need to set clear policies, such as establishing fuel burn reduction targets, loan guarantees for new aircraft purchases, or incentives to scrap old airplanes.

The third goal is to ensure sustainable aviation fuel is actually sustainable and that its production is scalable. Most sustainable fuels rely on biomass, but limitations on production and competition from other sectors could mean they can’t realize the hoped-for emissions reductions.

In the near term, the report suggests aviation will have to work with other industries to set best practices and limit total cross-sector emissions. And in the long run, the industry will have to make efforts to find alternative ways to develop synthetic sustainable fuels.

Lastly, the report argues the industry also needs to invest in “moonshot” technologies. By 2025, aviation should launch several high-risk, high-reward demonstration programs in technologies that could be truly transformative for the sector. These include the development of cryogenic hydrogen or methane fuels, hydrogen-electric propulsion technology, or the use of synthetic biology to dramatically lower the energy demands for sustainable fuel production.

The report’s authors stress that, although they are confident these interventions could have the desired impact, time is of the essence. History suggests that getting global leaders to take decisive action on climate issues is tricky, but at least they now have a concrete roadmap.

Image Credit: John McArthur / Unsplash

Robots Are Coming to the Kitchen—What That Could Mean for Society and Culture

Automating food is unlike automating anything else. Food is fundamental to life—nourishing body and soul—so how it’s accessed, prepared, and consumed can change societies fundamentally.

Automated kitchens aren’t sci-fi visions from The Jetsons or Star Trek. The technology is real and global. Right now, robots are used to flip burgers, fry chicken, create pizzas, make sushi, prepare salads, serve ramen, bake bread, mix cocktails, and much more. AI can invent recipes based on the molecular compatibility of ingredients or whatever a kitchen has in stock. More advanced concepts are in the works to automate the entire kitchen for fine dining.

Since technology tends to be expensive at first, the early adopters of AI kitchen technologies are restaurants and other businesses. Over time, prices are likely to fall enough for the home market, possibly changing both home and societal dynamics.

Can food technology really change society? Yes, just consider the seismic impact of the microwave oven. With that technology, it was suddenly possible to make a quick meal for just one person, which can be a benefit but also a social disruptor.

Familiar concerns about the technology include worse nutrition and health from prepackaged meals and microwave-heated plastic containers. Less obviously, that convenience can also transform eating from a communal, cultural and creative event into a utilitarian act of survival—altering relationships, traditions, how people work, the art of cooking, and other facets of life for millions of people.

For instance, think about how different life might be without the microwave. Instead of working at your desk over a reheated lunch, you might have to venture out and talk to people, as well as enjoy a break from work. There’s something to be said for living more slowly in a society that’s increasingly frenetic and socially isolated.

Convenience can come at a great cost, so it’s vital to look ahead at the possible ethical and social disruptions that emerging technologies might bring, especially for a deeply human and cultural domain—food—that’s interwoven throughout daily life.

With funding from the US National Science Foundation, my team at California Polytechnic State University is halfway into what we believe is the first study of the effects AI kitchens and robot cooks could have on diverse societies and cultures worldwide. We’ve mapped out three broad areas of benefits and risks to examine.

Creators and Consumers

The benefits of AI kitchens include enabling chefs to be more creative, as well as eliminating repetitive, tedious tasks such as peeling potatoes or standing at a workstation for hours. The technology can free up time. Not having to cook means being able to spend more time with family or focus on more urgent tasks. For personalized eating, AI can cater to countless special diets, allergies, and tastes on demand.

However, there are also risks to human well-being. Cooking can be therapeutic and provides opportunities for many things: gratitude, learning, creativity, communication, adventure, self-expression, growth, independence, confidence, and more, all of which may be lost if no one needs to cook. Family relationships could be affected if parents and children are no longer working alongside each other in the kitchen—a safe space to chat, in contrast to what can feel like an interrogation at the dining table.

The kitchen is also the science lab of the home, so science education could suffer. The alchemy of cooking involves teaching children and other learners about microbiology, physics, chemistry, materials science, math, cooking techniques and tools, food ingredients and their sourcing, human health, and problem-solving. Not having to cook can erode these skills and knowledge.

Community and Cultures

AI can help with experimentation and creativity, such as creating elaborate food presentations and novel recipes within the spirit of a culture. Just as AI and robotics help generate new scientific knowledge, they can increase understanding of, say, the properties of food ingredients, their interactions, and cooking techniques, including new methods.

But there are risks to culture. For example, AI could bastardize traditional recipes and methods, since AI is prone to stereotyping, such as flattening or oversimplifying cultural details and distinctions. This selection bias could lead to reduced diversity in the kinds of cuisine produced by AI and robot cooks. Technology developers could become gatekeepers for food innovation if the limits of their machines lead to homogeneity in cuisines and creativity, similar to the weirdly similar feel of AI art images across different apps.

Also, think about your favorite restaurants and favorite dinners. How might the character of those neighborhoods change with automated kitchens? Would it degrade your own gustatory experience if you knew those cooking for you weren’t your friends and family but instead were robots?

The hope with technology is that more jobs will be created than jobs lost. Even if there’s a net gain in jobs, the numbers hide the impact on real human lives. Many in the food service industry—one of the most popular occupations in any economy—could find themselves unable to learn new skills for a different job. Not everyone can be an AI developer or robot technician, and it’s far from clear that supervising a robot is a better job than cooking.

Philosophically, it’s still an open question whether AI is capable of genuine creativity, particularly if that implies inspiration and intuition. Assuming so may be the same mistake as thinking that a chatbot understands what it’s saying, instead of merely generating words that statistically follow the previous words. This has implications for aesthetics and authenticity in AI food, similar to ongoing debates about AI art and music.

Safety and Responsibility

Because humans are a key disease vector, robot cooks can improve food safety. Precision trimming and other automation can reduce food waste, along with AI recipes that can make the fullest use of ingredients. Customized meals can be a benefit for nutrition and health, for example, in helping people avoid allergens and excess salt and sugar.

The technology is still emerging, so it’s unclear whether those benefits will be realized. Foodborne illnesses are an unknown. Will AI and robots be able to smell, taste, or otherwise sense the freshness of an ingredient or the lack thereof and perform other safety checks?

Physical safety is another issue. It’s important to ensure that a robot chef doesn’t accidentally cut, burn, or crush someone because of a computer vision failure or other error. AI chatbots have been advising people to eat rocks, glue, gasoline, and poisonous mushrooms, so it’s not a stretch to think that AI recipes could be flawed, too. Where legal regimes are still struggling to sort out liability for autonomous vehicles, it may similarly be tricky to figure out liability for robot cooks, including if hacked.

Given the primacy of food, food technologies help shape society. The kitchen has a special place in homes, neighborhoods, and cultures, so disrupting that venerable institution requires careful thinking to optimize benefits and reduce risks.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Kindel Media / Pexels
