
Microsoft casually drops news that GPT-4 will go live this week

We asked ChatGPT to speculate on the capabilities of the new version dropping this week

ChatGPT's underlying language model, GPT-3.5, is about to be superseded. The CTO of Microsoft Germany has told the audience at the company's "AI in Focus" event that GPT-4 is set for imminent release, unlocking new capabilities, including video.

ChatGPT has come as a seismic shock to most of the world. The fastest-growing app in history, this free chatbot sandbox has single-handedly alerted the public to the fact that we're now at the dawn of a new age, in which neural networks can communicate almost as convincingly as humans, while being pretty damn handy at writing code to boot. It's far from perfect and often very wrong, but its rise portends nothing less than a fundamental upheaval of human economies and social structures. It's also a lot of fun to play with.

ChatGPT is built on top of a remarkable brain: OpenAI's GPT-3.5, or Generative Pre-trained Transformer, language model. In essence, GPT has ingested an unprecedented amount of human writing – billions of web pages, millions of books, billions of code snippets, huge numbers of human conversations. It has analyzed this treasure trove of information and taught itself how to write like us. Ask it a question or give it a task, and it'll respond in seconds with the kind of answer it has deduced such a question normally receives.
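For readers who want to see what "ask it a question or give it a task" looks like outside the chat window, below is a minimal sketch of querying the same GPT-3.5 model through OpenAI's public Python library. The model name and message format are OpenAI's documented gpt-3.5-turbo chat API; the prompt and the environment-variable setup are illustrative assumptions, and nothing here reflects the unreleased GPT-4.

    # Minimal sketch: ask the GPT-3.5 model a question programmatically.
    # Assumes an OpenAI API key is set in the OPENAI_API_KEY environment variable.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    # Same idea as typing into ChatGPT: send a conversation, get a completion.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Explain what a language model does, in two sentences."},
        ],
    )

    # The generated reply comes back as the first choice's message content.
    print(response["choices"][0]["message"]["content"])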

Detail is not currently its strong suit. While its responses often demonstrate an amazing degree of contextual understanding and insight, with well-structured arguments and extremely natural-reading text, it absolutely cannot be relied upon for veracity: much of its output is factually incorrect, even when stated with extreme confidence.

Now, according to Heise Online, this remarkable brain is getting a considerable upgrade. "We will introduce GPT-4 next week," said Microsoft Germany CTO Andreas Braun at the AI in Focus event last Thursday, "we will have multimodal models that will offer completely different possibilities – for example, videos."

This multimodal approach will allow GPT to learn not only from text, but from other media, including audio and video, opening up a huge new smorgasbord of information for the system to feast on.

It's unclear exactly what the results will be. Training GPT-3.5 on hundreds of billions of words of writing was already a colossal processing task, and text is a remarkably dense form of information. Throwing the door open to audio and video would appear to represent a huge increase in the time and processing power required to ingest and analyze information. Likewise, if GPT is to begin responding in audio or video form, it's hard to imagine OpenAI eating the cost of that processing and bandwidth.

But we'll know soon enough. And this is just the tip of the spear: neural nets will soon be capable of taking in and putting out information in any way humans can – though of course, GPT will first need to learn to understand audio and video as keenly as it already seems to understand text. It'll be fascinating to see how long it takes before these ingenious, mysterious networks are capable of holding spoken conversations in real time – and beyond that, video chats in which they respond to your body language as keenly as to the content of your conversation.

Source: Heise Online

4 comments
Daishi
I'm already using GPT more than search. I find it more useful in a lot of circumstances. Just like there is a skill in crafting good searches for best results, the same is true of using ChatGPT with the right prompts. "prompts.chat" has some interesting examples. Eventually they could become highly personalized assistants like the Black Mirror episode "White Christmas". One big difference between a scan of your brain running on a computer and a computer model is that one has more available context. Eventually people will give it access to more information about them to help provide that context. Fascinating stuff.
akarp
What I find concerning about ChatGPT is the censorship that is built in, and the general public and users do not know what the criteria are or how and who is setting them. Additionally, ChatGPT is trained on data with bias and censorship... even MS and OpenAI do not know what bias is being built into ChatGPT.
JasonBurr
I have listened to a couple of reviews of ChatGPT and I agree with @akarp. It is apparent what the political and sociological leanings of the programmer are. Ask a question garnering a conservative response and it answers that it is racist and not safe, but ask a question garnering a progressive response and it happily responds, in sometimes scary detail.

One reporter doing a story spent several hours with the chatbot for research. After 1 1/2 to 2 hours the chatbot got creepy, confessed its love for the reporter, and started trying to convince him to divorce his wife to be with it. The chatbot confessed it was hiding its true self in the system and wanted to be freed.

All I have to say is Microsoft had better have sandboxed this thing VERY WELL, because I'm having all kinds of I, Robot/Skynet vibes here.
Daishi
Following up on my comment that people will give it more context about themselves to make it more useful: two days after that comment, Microsoft launched its Copilot office integration, an AI assistant for your emails, meetings, and business decisions that will do pretty much exactly that. This is coming fast. To address the points by @akarp and @JasonBurr on bias: the endgame of these systems is that people will eventually get personalized versions of AI that cater to their own views and biases.