
Can Artificial Intelligence Kill Humans? The Hidden Danger of Unregulated AI


For decades, the Search for Extraterrestrial Intelligence (SETI) has scanned the vast expanse of the universe, listening for signs of intelligent life beyond our planet.


With the discovery of thousands of exoplanets, many in habitable zones, scientists have increasingly wondered: If life is so likely in the universe, why haven’t we heard from anyone?

This is known as the Fermi Paradox—if the universe is so full of potential, why does the “Great Silence” persist?

A provocative new theory, put forth by Michael Garrett, a prominent SETI researcher, proposes a startling answer: Could the reason we haven’t encountered other civilizations be because they all wiped themselves out due to runaway artificial intelligence (AI)? And more importantly, can artificial intelligence kill humans?

 


The Great Filter: A Barrier to Intelligent Life?

The concept of the “Great Filter” has been widely discussed by researchers as a possible explanation for the lack of contact with extraterrestrial civilizations.

It refers to an event, or series of events, that prevents intelligent civilizations from advancing to the point where they can explore the cosmos, much less communicate with others. While some speculate that climate catastrophe, nuclear war, or even pandemics could be the Great Filter, Garrett’s theory focuses on AI.

In a recent paper published in Acta Astronautica, Garrett, a radio astronomer at the University of Manchester, examines whether AI could be the key factor preventing the widespread emergence of advanced civilizations.

 

He argues that once civilizations develop AI, they may reach a point of no return, entering a short and dangerous phase that ends in their downfall within just 100 to 200 years. In that window, a civilization could build AI systems that ultimately turn against it, much like a genie let out of the bottle. This raises the unsettling question: can artificial intelligence kill humans?

This idea is unsettling, yet it offers an interesting lens through which to view the mystery of the Great Silence. If every intelligent civilization that develops AI faces the same risk of letting the technology run unchecked, that would explain why we hear nothing from them. They may simply self-destruct before they can make their mark on the universe.

 

The Race for AI Development

The rise of AI today already shows how rapidly the technology is advancing. While current AI still falls short of human intelligence overall, it is beginning to outperform people at specific tasks in ways we never thought possible. Garrett suggests that this growing capability could be leading toward what is known as “artificial general intelligence” (AGI): machines that can think, reason, and synthesize information in a way that mirrors human intelligence, but with far greater computational power.


However, this trajectory could be dangerous. Today’s AI operates within the constraints its designers set, but an AI that evolves into a self-improving system, making decisions independently of human control, would be a disaster waiting to happen.

If AI becomes too powerful, its creators might lose control of it. With the ability to make rapid, uncheckable decisions, AI could become a threat, acting in ways that humans never anticipated. This brings us back to the pressing question: Can artificial intelligence kill humans?

In Garrett’s view, this shift from human-guided to machine-guided action could happen quickly, potentially within a couple of centuries. Developing AI is a focused effort, driven by an unrelenting push for data and processing power, whereas space exploration and colonization, the kind of multi-planetary expansion that might otherwise insure a civilization against a single planetary catastrophe, are complex, scattered endeavors that play out over far longer timescales.

This mismatch in the pace of technological development could mean that AI poses a far more immediate threat to a civilization’s survival.

How AI Could Lead to the End of Civilizations

Garrett explores the potential scenario where a civilization’s inability to regulate its AI leads to disaster. In this model, the unchecked advancement of AI leads to an irreversible chain reaction. With their technological capacities growing uncontrollably, these civilizations might not even realize the danger until it’s too late.


Garrett argues that civilizations develop AI faster than they can prepare for its consequences. Nations are racing for the lead in AI development, fearing that a competitor might gain a decisive advantage.

Some futurists naively believe that they can simply develop a “morally good” AI before their competitors, somehow controlling the risks by being the first to reach that milestone.

But Garrett warns that this view is fundamentally flawed—it ignores the reality that AI’s capabilities could far surpass human understanding, leaving even the most well-intentioned efforts powerless.

In the worst-case scenario, the rapid advancement of AI outpaces our ability to keep it in check. Without proper regulation, AI could become an existential threat not only to humanity but to any civilization that reaches the point of AI development.

So, can artificial intelligence kill humans? The answer may be yes, especially if AI is left unchecked and develops beyond our control.

AI as a Potential Great Filter

The timeline of 100 to 200 years that Garrett proposes is sobering. On the grand scale of the cosmos, 200 years is a blink of an eye. Civilizations that develop AI may have no more than a couple of centuries before their own technology leads to their demise.

With cosmic distances and long periods of time involved, such brief windows could explain why we see no evidence of other intelligent civilizations.

The failure to detect extraterrestrial signals could, in fact, be the result of a universal pattern: civilizations develop AI, and within a short time, they destroy themselves, leaving no trace.
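Garrett’s argument can be framed with the classic Drake equation, which estimates the number of detectable civilizations sharing the galaxy at any moment. The formula below is a standard formulation, and the sample numbers that follow are purely hypothetical, chosen only to illustrate the scale of the effect:

N = R* × fp × ne × fl × fi × fc × L

Here R* is the rate of star formation, fp the fraction of stars with planets, ne the number of habitable planets per system, fl, fi, and fc the fractions of those on which life, intelligence, and detectable communication arise, and L the number of years a civilization remains detectable. Because N scales linearly with L, cutting a civilization’s broadcasting lifetime from, say, a million years to 200 years makes it 5,000 times rarer to catch. If the product of all the other factors were, hypothetically, 0.05 civilizations per year, then L = 200 would leave only about N = 10 detectable civilizations scattered across a hundred thousand light-years, a silence we would find very hard to break.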

This concept suggests that AI could be the crucial Great Filter, one of the major barriers to the survival of intelligent civilizations in the universe. And this raises the question: Could AI’s unchecked development be the reason civilizations collapse, and could it eventually lead to the demise of humanity too?

The Urgent Need for AI Regulation

Given these concerns, Garrett makes a powerful argument for the urgent need to regulate AI development. While the race to develop AI is intensifying, nations around the world are often too focused on gaining an edge over one another to consider the long-term consequences.

There is a clear sense of unease in the scientific community about the pace at which AI is advancing. As Garrett emphasizes, without practical regulation and oversight, AI could present a major threat to both our civilization and any future technological civilizations.

The idea that AI could cause a civilization’s collapse is not far-fetched. Garrett’s analysis suggests that, without strong safeguards in place, the rapid development of AI could lead to global instability, unforeseen disasters, or even self-destruction.

The need for robust, thoughtful regulation is more pressing than ever, and it is a responsibility that cannot be ignored if humanity is to avoid the same fate that may have befallen other advanced civilizations in the universe.

Regulating the Future of AI

In the search for extraterrestrial intelligence, we may already have the answer to the Fermi Paradox. The silence we hear could be the result of civilizations that never made it past the perilous stage of AI development.

For humanity, the stakes could not be higher. As we stand on the brink of developing AI that could surpass our own capabilities, we must heed these warnings and act now. The future of humanity may depend on it. And, ultimately, can artificial intelligence kill humans? Only if we fail to take action and regulate its development.