
OpenAI Sora: Wake up humans, the world has changed again

AI-generated video was a complete joke less than a year ago; look what it can do now. OpenAI has announced its new Sora model, which turns descriptive text into video, and calling it a frighteningly massive leap forward feels like an understatement.

Sora creates "realistic and imaginative scenes" from text prompts, meaning you can type in a scene with as much detail as you care to give it, and it'll go away and generate high-resolution video to match. In this way, it's similar to a lot of previous video generators we've seen in the last year or so.

But to give you a sense of the progress in this field, take a look at where the state of the art was in March 2023, then check out how far it had come by April 2023, then take a quick refresher on Google's Lumiere system from last month.

Now, take a look at what OpenAI is doing halfway through February 2024 with its new Sora system, and take a moment to appreciate the breathtaking pace of advancement. Here's a bunch of examples, with the prompts that led to them.

Good grief. For the most part, the physics of these scenes works uncannily well. The details and motion are realistic enough that you'd easily mistake many for real footage if you weren't hunting for mistakes – or noticing that it's realistic footage of something that doesn't actually exist.

It can also, according to OpenAI, save characters, locations and styles so they can be used across multiple scenes, pointing toward where this is eventually going: the ability to generate entire stories, shows or movies.

On the other hand, there's still plenty of room for improvement, and as with all creative AI systems, the results can be hilariously weird – especially when you ask for something particularly absurd.

And sometimes, it can come up with an unexpectedly artistic surprise or two.

OpenAI says it's "red teaming" Sora now – that is, throwing naughty prompts at it, trying to get it to do things it's not allowed to, so that all known ways of making that happen can be blocked. This will undoubtedly hobble it and make it a worse product, but it'll make Sora more copyright-compliant, less likely to generate "dangerous" content and more family-friendly.

But good lord, in a matter of months we've gone from this:

To this:

These AI systems, friends, are toddlers. Unbelievable as they already are, they're honing their talents at a rate unlike anything we've ever seen in any other area of technology. It's all happening so fast that I genuinely have no idea if we're close to the top of a curve here, with a long, slow slog ahead to clean up all the weird little edge cases, or if we're just at the start of the AI acceleration curve.

Maybe, as some people think, we're close to the point where it's as good as it'll ever get, and as more and more of the internet becomes AI-generated, it'll eat so much of its own faeces that the quality will start to degrade and plummet.

All I know is this: we'd all better learn to love change, because humanity has never seen a transformation like the AI revolution before. The wheel, the light bulb, the combustion engine, the aeroplane, the computer, the internet ... None of them ever accelerated or proliferated like this, and none of them threatened our position at the top of the food chain. We're in uncharted territory.

Source: OpenAI

10 comments
martinwinlow
"AI-generated video was a complete joke less than a year ago; look what it can do now."
I'd be inclined to think (if I didn't already realise it) that the same is probably true of progress with autonomous cars, despite what the click-bait-obsessed/hysterical mass media is saying.
Captain Danger
I like the cat picture
reader
Looks like the Holodeck from Star Trek will soon be doable.
White Rabbit
As with any AI system, its products reflect the "values" of the sources of its learning. Note that there is nothing in the prompts that suggests the Tokyo scene should be at night, nor Lagos in the evening. Perhaps 'cyberpunk' conjures dark themes, but the "Blue Hour" looks more like midnight!
These are not insurmountable obstacles at the technical level, but the "Why?" question ought to receive more scrutiny. An AI's standards for what is a good/appropriate/suitable/... representation of Tokyo, or anything else, are informed by its sources - even if aspects of those standards have been explicit. There's no reason to believe that the Sora model was intentionally given a predilection for dark images, but the frequency with which they appear provides an insight into a frequently ignored "elephant in the room".
White Rabbit
Oops! Omitted a word.
Intended to say-
even if aspects of those standards have NOT been explicit.
Brian M
Feel sorry for cats; they are going to lose a lot of YouTube time to AI-generated videos!

On the plus side, it could end up creating some incredible worlds/environments for games at a cost that would previously have been prohibitive, even for the big game companies, let alone minor/niche game producers.

It might even make 3D worlds as envisioned by Meta more attractive.

Hopefully John and Jane Public will realize not everything they see or hear is real or true; not sure it ever has been!
Wombat56
Kudos to Loz Blain for including the prompts used to generate these images. None of the other articles I've seen on OpenAI Sora have done so.
jimbo92107
I'm led to believe that cooling AI video processors consumes an enormous amount of fresh water. This implies a temporary bottleneck to endless hi-rez AI video. I expect Nvidia or someone will eventually solve the cooling problem, but it may take a few years. Frankly, the task of driving a car seems far simpler than composing frame after frame of photorealistic video, especially if you want it in real time.
Daishi
AI photos took about a year to go from producing silly results to being almost photorealistic. The next year AI video made that transition. Initially Sora will be kind of expensive to run and over the next year it will get cheaper as costs come down and other big AI companies achieve this level of detail. The pace of progress in this space has been fascinating to watch.
SplineDoctor
The end of our reality (or at least our true memories) is approaching. In a few years the internet will be full of fake videos and photos fabricated by AI, and year by year we will have less chance of recognizing whether they are real or fake. After a few decades, children will not be able to recover any credible information about their ancestors' lives, because all they will see is an incredible amount of AI-generated media with no actual connection to real life.