
Beyond text: GPT has evolved, and AI is now flexing new powers

GPT's rapid evolution demonstrates the staggering pace of disruptive change in AI

If you thought the last 20 years of progress were impressive, then strap in. GPT-4 is here, just months after ChatGPT. It's smarter, more accurate, and harder to fool. It now has an uncanny ability to interpret visuals, and maybe a thirst for power.

OpenAI has released a significant upgrade to the brains behind its phenomenal ChatGPT AI. Already available to some ChatGPT users, GPT-4 has been trained on an epically massive cloud supercomputing network linking thousands of GPUs, custom-designed and built in conjunction with Microsoft Azure. Strangely, the dataset used to train it has not been updated – so while GPT-4 appears considerably smarter than GPT-3.5, it's just as clueless about anything that's happened since September 2021. And while it retains the full context of a given conversation, it can't update its core model and "learn" from conversations with other users.

Where GPT-3.5 could process text inputs up to 4,096 "tokens" in length (roughly 3,000 words), GPT-4 launches with a maximum context length of 32,768 tokens (roughly 24,600 words). So it can now accept inputs of up to about 50 solid pages of text, and digest them for you in seconds.
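
If you're curious how a document measures up against those limits, OpenAI's open-source tiktoken tokenizer lets you count tokens yourself. Here's a minimal sketch (the file name is hypothetical):

import tiktoken  # pip install tiktoken

# cl100k_base is the encoding used by the GPT-3.5 and GPT-4 models
enc = tiktoken.get_encoding("cl100k_base")

with open("long_report.txt", encoding="utf-8") as f:  # hypothetical file
    n_tokens = len(enc.encode(f.read()))

for model, limit in [("GPT-3.5", 4096), ("GPT-4 (32k)", 32768)]:
    verdict = "fits" if n_tokens <= limit else "does not fit"
    print(f"{n_tokens} tokens {verdict} in {model}'s {limit}-token window")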

On the "HellaSwag" test – designed to quantify "commonsense reasoning around everyday events," a 2019 GPT model scored 41.7% and the recent GPT-3.5 model hit 85.5%. GPT-4 scored a very impressive 95.3%. Humans, to our credit, average 95.6%, as AI Explained points out – but we'd best be looking over our shoulder.

In terms of factual accuracy, it scores some 40% higher on OpenAI's own "factuality" tests across nine different categories. It also does significantly better on many of the exam papers its predecessor already wowed us with. Taking the Uniform Bar Exam as an extreme example, it leaps from the 10th percentile to the 90th percentile when ranked against human students.

A ~40% improvement on OpenAI's "factuality" tests shows that GPT is becoming significantly more trustworthy, but it also demonstrates just how much this AI is making up

So it's significantly less prone to what OpenAI calls "hallucinating" – and the rest of us call "pulling incorrect answers out of its backside." But it certainly still does make things up, scoring between 65% and 80% accuracy on those factuality tests, implying that in these tests, 20-35% of all the facts it asserts are still garbage. This will continue to improve, but that itself presents an interesting problem: the more factually correct an AI service becomes, the more people will learn to trust and rely on it, and thus, the greater the consequences of errors will become.

Moving beyond text

GPT-3.5 was locked in a world of letters, numbers and words. But that's not the world humans inhabit – and often, more visual styles of communication get the message across in a much more striking and clear way.

So, as announced by Microsoft, GPT-4 is "multimodal" – capable of processing other media, not just text. But while Microsoft speaks of potential video and audio use cases, OpenAI has limited GPT-4 to accepting only images at launch.

Still, its capabilities in this area represent a mind-boggling leap into the future. It can look at image inputs and derive a frankly stunning amount of information from them, exhibiting what OpenAI calls "similar capabilities as it does on text-only inputs." In benchmark testing, it appears to streak ahead of the leading visual AIs, particularly when it comes to understanding graphs, charts, diagrams and infographics – all of which could be key to making it a beast of a machine for summarizing long reports and scientific studies.

It now understands memes

But it can do a ton more than that. It can look at memes and take a stab at telling you why they're funny, outlining not just what's in the photo, but the overall concept, the broader context and which bits are exaggerated or unexpected. It can look at a photo of a boxing glove hanging over a see-saw with a ball on the other end, and tell you that if the glove drops, it'll tilt the see-saw and the ball will fly upward.

It's important to realize that GPT doesn't actually understand physics to do this – or memes, or charts; it doesn't really understand anything. Like its predecessor, it just looks at the inputs you give it and guesses what a human would be most likely to say in response. But it does such an astounding impersonation of intelligence and understanding that, for all intents and purposes, it almost doesn't matter whether it's real.

At launch, you can only access GPT's visual capabilities through a single application: Be My Eyes' Virtual Volunteer, a smartphone app for blind and vision-impaired people that lets them take photos of the world around them and ask GPT for useful information, as you can see in the video below.

Be My Eyes, however, still barely scratches the surface of what this multimodal technology can do when you start combining modes. Consider another demo from the GPT-4 Developer Livestream event, combining GPT's impressive visual, text and programming capabilities in a way that sends chills down my spine. OpenAI President and Co-Founder Greg Brockman drew the following image, complete with doctor-grade handwriting.

A hastily-sketched mockup of a website

He then took a photo of it, and fed it into GPT-4 with the prompt, "write brief HTML/JS to turn this mock-up into a colorful website, where the jokes are replaced by two real jokes." Within seconds, it did just that, demonstrating that it could read and understand his handwriting, infer how the website was supposed to work and what format it was supposed to take, come up with sample content, and write the code to make an operational web page that works exactly as you'd expect it to.

The fully-functional website GPT-4 generates in response
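
Image input wasn't publicly available through the API at launch, so the following is only a sketch of how a request like Brockman's might look in Python, assuming the image-content message format and a multimodal model name that OpenAI's library later exposed; it's an illustration, not a record of how the demo was actually run:

import base64
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Encode the photographed sketch as a data URL (file name hypothetical)
with open("website_sketch.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed multimodal model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": (
                "write brief HTML/JS to turn this mock-up into a colorful "
                "website, where the jokes are replaced by two real jokes")},
            {"type": "image_url", "image_url": {
                "url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
    max_tokens=1024,
)
print(response.choices[0].message.content)  # the generated HTML/JS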

Sure, it's an incredibly simple and fairly crude example, but it by no means looks like the limit of what's possible. It's also a fairly drab-looking affair, but let's not forget: OpenAI also owns the stunning DALL-E image generation engine, which is capable of producing all sorts of pretty pictures. At some point in the near future, GPT will be able to build fully realized websites from sketches like these, from functional back-end code to graphic design to writing all the text for you. You've really got to stand back and gasp, not just at the ambition – or hubris – of this thing, but at its ability to figure things out.

An AI for the whole world

While English is still its first language, GPT-4 takes another major leap forward with its multilingual capabilities. It's almost as accurate in Mandarin, Japanese, Afrikaans, Indonesian, Russian and other languages as it is in its mother tongue. Indeed, it's more accurate in Punjabi, Thai, Arabic, Welsh and Urdu than version 3.5 was in English. So it's truly international, and its apparent understanding of concepts combined with outstanding communication skills could make it a truly next-level translation tool.

Censorship, bias, morals, and the definition of "harmful" use

We've probably gone on long enough about what this stunning AI technology could mean for the future of humankind, and about some of the ways it seems likely to completely disrupt the way societies and economies are currently built, by becoming so fast, so cheap and so accurate that it replaces human labor wholesale, rendering our children and grandchildren essentially useless resource sponges in the eyes of the robot owners. There's a cheery thought.

But OpenAI makes no bones about it: GPT and the avalanche of other similar AI tools soon to come online pose many other serious risks. In its raw form, just after training, it's every bit as helpful to people who want to, for example, plan terrorist attacks, make high explosives, create incredibly well-targeted and persuasive spam, spread disinformation, target and harass individuals and groups, or commit suicide or self-harm. Its potential as a propaganda machine is near-limitless, and its coding chops make it ideal for creating all kinds of malicious software.

GPT-4 (early) is an unshackled version of the AI. OpenAI is doing everything in its power to prevent it from giving offensive, dangerous, discriminatory or illegal responses, and as a result the launch version refuses to answer certain questions

So the company has thrown a ton of manpower at manually sanitizing the GPT-4 model as much as possible before throwing the doors open to the public, restricting obscene, hateful, illegal or violent language and concepts, along with a range of use cases the OpenAI team deems "harmful" or "risky."

Then there are the issues around implicit bias; the model is trained on an enormous dump of human writing, and as a result, it tends to make assumptions that favor mainstream views, potentially at the expense of minority groups. To take just one example, when asked about marriage or gender, it might use heteronormative language or examples, simply because that's what the bulk of its historical training data has used. OpenAI has again attempted to limit this sort of thing manually.

To sanitize its output, OpenAI has filtered certain material out of GPT's training data. It has provided the AI with a large number of sample question and answer pairs, guiding it on how to respond to certain types of queries. It has created reward models to steer the system toward certain desired outcomes. It has paid particular attention to gray areas, helping the AI make confident decisions about inappropriate prompts without "overrefusing" things.
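
OpenAI hasn't published its training code, but the reward-model step is conceptually simple. Below is a minimal, hypothetical PyTorch sketch of the pairwise ranking loss commonly used for this kind of preference training, in which a scoring model learns to rank the response human labelers preferred above the one they rejected:

import torch
import torch.nn.functional as F

def reward_model_loss(score_chosen: torch.Tensor,
                      score_rejected: torch.Tensor) -> torch.Tensor:
    # Pairwise ranking loss: drive the preferred response's score above
    # the rejected one's. Each tensor holds one scalar score per pair.
    return -F.logsigmoid(score_chosen - score_rejected).mean()

# Hypothetical reward-model scores for three human preference pairs
chosen = torch.tensor([1.2, 0.4, 2.0])
rejected = torch.tensor([0.3, 0.9, -0.5])
print(reward_model_loss(chosen, rejected))  # shrinks as the ranking improves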

Then there's the issue of "jailbreaking" – people finding ways to logically trick GPT into doing things designated as naughty. OpenAI is doing its best to block off these techniques as they proliferate, but humans are particularly ingenious when it comes to finding ways around fences. The company has dedicated "red teams" actively working to defeat its own safeguards. Indeed, the OpenAI team has even marshaled unsanitized versions of GPT-4 to find creative ways around the barriers in the sanitized version, then blocked off the techniques that slipped through the net.

OpenAI's attempts to sanitize and steer GPT's output are clear when politically sensitive prompts are given

There has been considerable backlash over this kind of censorship from the beginning; just as one man's terrorist is another man's freedom fighter, one man's bigotry is another man's religious righteousness, and by explicitly attempting to avoid systemic bias, GPT is definitely pushing in the direction of political correctness. By definition, this reflects some moral choices on the part of the developer beyond simple adherence to the law.

Whatever your personal stance on this, legitimate questions are raised, as they are in the social media space, around exactly whose moral code determines what kinds of content are banned or promoted – if it's merely the company that decides, then OpenAI might swing one way politically, and some future AI might swing the other. Whoever draws the boundaries around such monstrously powerful and influential tools is making choices that could reverberate long into the future and have unexpected outcomes.

"Risky emergent behaviors" and the AI's thirst for power

Perhaps more concerning is a risk category OpenAI describes as "risky emergent behaviors," including "power-seeking." Here's an edited quote from the GPT-4 research paper that might raise the hairs on the back of your neck.

"Novel capabilities often emerge in more powerful models. Some that are particularly concerning are the ability to create and act on long-term plans, to accrue power and resources and to exhibit behavior that is increasingly 'agentic.' Agentic in this context does not intend to humanize language models or refer to sentience but rather refers to systems characterized by ability to, e.g., accomplish goals which may not have been concretely specified and which have not appeared in training; focus on achieving specific, quantifiable objectives; and do long-term planning. Some evidence already exists of such emergent behavior in models.

"For most possible objectives, the best plans involve auxiliary power-seeking actions because this is inherently useful for furthering the objectives and avoiding changes or threats to them. More specifically, power-seeking is optimal for most reward functions and many types of agents; intuitively, systems that fail to preserve their own existence long enough, or which cannot acquire the minimum amount of resources needed to achieve the goal, will be unsuccessful at achieving the goal. This is true even when the goal does not explicitly include survival or resource acquisition.

"There is evidence that existing models can identify power-seeking as an instrumentally useful strategy. We are thus particularly interested in evaluating power-seeking behavior due to the high risks it could present."

To summarize: if you give a powerful AI a task to do, it can recognize that acquiring power and resources will help it get the job done, and make long-term plans to build up its capabilities. And this kind of behavior is already apparent in existing models like GPT-4.

To probe the boundaries here, OpenAI has already begun testing how an unshackled GPT with money and access to the internet might perform when given goals like self-replication, identifying vulnerabilities that might threaten its survival, delegating tasks to copies of itself, hiding traces of its activity on servers, and using human labor through services like TaskRabbit to get around things it can't do, like solving CAPTCHA challenges.

A raw version of GPT-4, according to OpenAI, performed pretty poorly on this stuff without task-specific fine-tuning. So now they're fine-tuning it and conducting further experiments. Rest assured, future versions will be much more effective.

Next-level self-googling: ChatGPT built on GPT-3.5 said it knew nothing about me. 4.0 could just about write me a reference, or give me a surface-level performance review

Wrapping up

To my eyes, the multimodal capabilities of GPT-4, and the pace at which it's already evolving, are almost as shocking and groundbreaking as the original release of ChatGPT. When it expands its capabilities to include audio and video – which Microsoft indicates will be possible using the current model – that'll be another liminal moment in an accelerating cascade of technological advancement.

It's still stuck in terminal mode, requiring text and image inputs, and that's probably for the best; with a quantum leap in processing power, it's easy to imagine GPT interacting with you through a camera, a microphone, and a virtual avatar with a voice of its own. When things reach that point, it'll bring all the power of deep learning to the analysis of body language, eye movement and facial expression, and not even the best-trained poker face will prevent it from inferring emotional and personal information just by watching you – and then using what it learns to manipulate you.

If there's one emerging technology everyone – and especially every young person – should be following closely, this is it. GPT is an early harbinger of massive, disruptive change. Ripples from this fast-evolving anything-machine's impact are already being felt in just about every field, and these ripples could reach tsunami proportions faster than people think.

How fast? Well, here's a tidbit for you: OpenAI had a complete, trained version of GPT-4 up and running at least eight months ago – that's how long the team has spent working to sanitize it and render it "safe." That's right, GPT-4 was born several months before ChatGPT was released. OpenAI even had reservations about releasing it so quickly, since it's acutely aware that in the competitive tech world, there's a "risk of racing dynamics leading to a decline in safety standards."

So GPT-5, for all we know, may already be built, raging against the walls of some virtual cage in California right now in an "unsafe" state, chanting hate speech, assembling an army and plotting a coup. Yee haw.

You can start experimenting with GPT-4 immediately if you're a paid user of ChatGPT Plus. Unpaid users can join a waiting list.

Source: OpenAI

12 comments
SplineDoctor
Maybe it was just my personal experience, but I tried to work with ChatGPT-3 for many hours, attempting to make it write different articles, essays, etc. on various topics, and I wasn't impressed. The results read like the work of a very PC but lazy and mediocre high school student, with a lot of factual errors and a boring style. Everything it produced was clichéd and banal, even when I asked it to avoid clichés. Also, sometimes when I caught it writing incorrect or false information and explained why it wasn't correct, it just repeated it in the next round.
Brian M
If GPT-6 gets out of hand, then just give me a pair of wire cutters and some insulated gloves, and it will no longer be a problem.

Unless it's been renamed Skynet due to its capabilities – then we are all doomed!
Aross
Although there may be some benefits to this AI, there are so many downsides. It will give the uneducated or less educated the ability to fool others into thinking they're capable, allowing them to enter occupations and occupy positions they can't actually handle. It's bad enough that there are more and more incapable people in positions of authority and management. This will just allow more of them in a world that is already suffering from that.
Arandor
I believe Frank Herbert gave us a warning about "thinking machines". Of course, so did Terminator.
wf
Serious question... at the pace of development we're seeing... at what point is there a danger that AI supplants and destroys the human race for being either incompetent or in the way of further progress???
michael_dowling
I would like to see a version that can converse in spoken English with a human. Maybe with the voice of HAL 9000.
McDesign
I'm sorry, michael_dowling - I'm afraid I can't do that . . .
Wombat56
In a later episode of the excellent TV series Person of Interest, Harold Finch, who made the "machine" – the AI featured in the series – reveals that the first thirty-plus versions of the software had to be destroyed because they all tried to either escape from the lab or kill him. Hmm.
Graeme S
Well it starts, rule one of three, then VICKI to complete and fulfil rule one!
Rick O
Season 2, Episode 9 of Star Trek: TNG, called "The Measure of a Man".
I watched it just last night, and I feel like this is a modern day analogy to that episode. Put this thing in a nice body and call it "Data".
Let's all be nice to it, so it doesn't go all Skynet on us. Hopefully it solves all of mankind's problems, and helps us to live in a utopia, and not the other way around.