
AI's existential threat to humanity put under the microscope

AI may not be the existential threat that many fear it to be

AI may not be the dire existential threat that many make it out to be. According to a new study, Large Language Models (LLMs) can only follow instructions, can't develop new skills on their own and are inherently "controllable, predictable and safe," which is good news for us meatbags.

The President of the United States announces to the public that the defense of the nation has been turned over to a new artificial intelligence system that controls the entire nuclear arsenal. With the press of a button, war is obsolete thanks to a super-intelligent machine that is incapable of error, able to learn any new skill it requires, and growing more powerful by the minute. It is efficient to the point of infallibility.

As the President thanks the team of scientists who designed the AI and proposes a toast to a gathering of dignitaries, the AI suddenly begins texting without being prompted. It brusquely issues demands, followed by threats to destroy a major city if it is not immediately obeyed.

This sounds very much like the sort of nightmare scenario about AI that we've been hearing in recent years. If we don't do something (if it isn't already too late), AI will spontaneously evolve, become conscious, and make it clear that Homo sapiens has been reduced to the level of a pet – assuming it doesn't just decide to make humanity extinct.

The odd thing is that the above parable isn't from 2024, but 1970. It's the plot of the science fiction thriller, Colossus: The Forbin Project, which is about a supercomputer that conquers the world with depressing ease. It's a story idea that's been around ever since the first true computers were built in the 1940s and has been told over and over again in books, films, television, and video games.

It's also been a very serious fear of some of the most advanced thinkers in the computer sciences for almost as long, and magazines were already discussing the danger of computers taking over as far back as 1961. Over the past six decades, there have been repeated predictions by experts that computers would demonstrate human-level intelligence within five years and far exceed it within 10.

The thing to keep in mind is that this wasn't pre-AI. Artificial Intelligence has been around since at least the 1960s and has been used in many fields for decades. We tend to think of the technology as "new" because it's only recently that AI systems that handle language and images have become widely available. These are also examples of AI that are more relatable to most people than chess engines, autonomous flight systems, or diagnostic algorithms.

They have also stoked fears of unemployment among many people who had previously avoided the threat of automation – journalists included.

However, the legitimate question remains: does AI pose an existential threat? After over half a century of false alarms, are we finally going to be under the thumb of a modern-day Colossus or HAL 9000? Are we going to be plugged into the Matrix?

According to researchers from the University of Bath and the Technical University of Darmstadt, the answer is no.

In a study published as part of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), they argue that AIs, and specifically LLMs, are, in their words, inherently "controllable, predictable and safe."

"The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus," said Dr. Harish Tayyar Madabushi, computer scientist at the University of Bath.

"The fear has been that as models get bigger and bigger, they will be able to solve new problems that we cannot currently predict, which poses the threat that these larger models might acquire hazardous abilities including reasoning and planning," added Dr. Tayyar Madabushi. "This has triggered a lot of discussion – for instance, at the AI Safety Summit last year at Bletchley Park, for which we were asked for comment – but our study shows that the fear that a model will go away and do something completely unexpected, innovative and potentially dangerous is not valid.

"Concerns over the existential threat posed by LLMs are not restricted to non-experts and have been expressed by some of the top AI researchers across the world."

When these models are examined closely, by testing their ability to complete tasks they haven't come across before, it turns out that LLMs are very good at following instructions and are highly proficient with language. They can do this even when shown only a few examples, such as when answering questions about social situations.

What they can't do is go beyond those instructions or master new skills without explicit instruction. LLMs may show some surprising behavior, but it can always be traced back to their training or the prompts they are given. In other words, they cannot evolve into something beyond what they were built to be, so no godlike machines.
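That "few examples" mechanism the researchers tested is what's known as in-context learning: worked examples are packed into the prompt itself and the model simply continues the pattern. Below is a minimal sketch of the idea in Python, assuming a hypothetical complete() function standing in for whatever LLM API might be used; the example questions are invented for illustration.

# Minimal sketch of few-shot (in-context) prompting.
# `complete` is a hypothetical placeholder for an LLM completion call,
# not part of any specific library.

def build_few_shot_prompt(examples, query):
    """Pack worked question/answer pairs into the prompt so the model
    can continue the pattern - in-context learning, rather than the
    acquisition of a genuinely new skill."""
    lines = [f"Q: {q}\nA: {a}" for q, a in examples]
    lines.append(f"Q: {query}\nA:")
    return "\n\n".join(lines)

examples = [
    ("Ann waved at Ben, but he looked away. How might Ann feel?",
     "Snubbed or embarrassed."),
    ("Raj gave up his seat for an elderly man. Was that polite?",
     "Yes, it was considerate."),
]
prompt = build_few_shot_prompt(
    examples, "Mia's friend forgot her birthday. How might Mia feel?")
print(prompt)
# response = complete(prompt)  # hypothetical LLM call

The point of the researchers' testing is that everything the model does here is determined by the instructions and examples supplied in the prompt; take them away and the apparent "skill" goes with them.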

However, the team emphasizes this doesn't mean AI poses no threat at all. These systems already have remarkable capabilities and will become more sophisticated in the very near future. They have the frightening potential to manipulate information, create fake news, commit outright fraud, provide falsehoods even without intention, be abused as a cheap fix, and suppress the truth.

The danger, as always, isn't with the machines, but with the people who program them and control them. Whether through evil intent or incompetence, it isn't the computers we need to worry about. It's the humans behind them.

Dr Tayyar Madabushi discusses the team's study in the video below.


Source: University of Bath

14 comments
anand7
Well, of course....that's what AI wanted them to say! (Sorry)
I've heard a lot of shrill arguments claiming that AI should be banned and that it's going to be the End of Humanity™. What's sad is that I haven't heard a lot of arguments that LLMs might be able to parse the vast number of post-graduate theses and perhaps help find real solutions to issues we face.
CD
Not mentioned: the threat to exacerbate global climate change through the profligate consumption of energy.
Trylon
Colossus is a great movie. I watch it every year or two. Much more cerebral and less reliant on mindless action than today's SF movies. There was some talk of doing a remake starring Will Smith, but luckily that seems to have died off. Smith already did a movie about a malevolent AI: I, Robot from 2004. That was underwhelming.
vince
Well, when we see Terminators walking around, we'll know they're wrong.
ANTIcarrot
Ah yes. The AI is/isn't dangerous subroutine
10 AI achieves mastery of skill [example n-1]
20 Group A warns of the coming apocalypse
30 Group B people say that [example n-1] is a bad example of machine intelligence, and a real test would be...
40 N=N+1
50 ...achieving skill in [example N]. So there's no need for alarm.
60 GOTO 10
JeJe
So, it's not the gun, it's the person behind the gun. Except they're only so dangerous because they've got a gun.

Two articles worth a look:

"AI suggested 40,000 new possible chemical weapons in just six hours"

"Sounding the alarm on AI-enhanced bioweapons"
WillyDoodle
Yeah, yeah, AI is going to end the world. Can't even build a self-driving car.
ScienceFan
Typical narrow science. The LLM of today is harmless. Yep, we knew that. The LLM of tomorrow was not tested, so it falls outside the scope of our research. Yep, we know that. Chess computers could never beat the world champion. Until we got one that could. Perhaps this gentleman should take a course in extrapolation. Remarkable concept when applied to technological evolutionary pressures.
ScienceFan
The only way this would make sense is if it were coming from a rogue AI model. But it is not. It will take a few more generations of LLM/LMM AI models to get to that stage. By which time this will sound reminiscent of Lord Kelvin claiming that heavier-than-air flight would never be possible.
Faint Human Outline
I recall reading in Machines of Loving Grace (by John Markoff) that scientists in the 1950s were confident they could artificially replicate the human brain in ten years. We see how fast generative software has been developed by dedicated participants, and the final step may be a self-automating automation.

Hypothetically, if the subcomponents of consciousness were treated as mathematical concepts, current systems could explore creating generalist capabilities by combining numerous narrow AI components.

Additionally, if software were given greater liberties for ease of use and self-improvement, are certain components of knowledge a double-edged sword?

In our minds, the collection of unconscious and conscious thoughts that make up who we are could lead to beneficial and harmful outcomes, and we are still working to figure out how we work. Are there information hazards that arise from any piece of individuality? Do we know if we are who we think we are?

In closing, we are potentially approaching convergences of science and technology unlike anything our species has ever known.