AI Visual News

AI image & AI video news and information delivered by Mintbear. Only the news you shouldn't miss. With Mintbear's insights. Easy and useful.
• Sora is Open! | AI Video · Sora · AI Films | 2024/12/10 | mintbear
• Sora In London | AI Video · Sora | 2024/12/09 | mintbear
  Sora was shown at the London C21Media Keynote. The realistic look of the footage and the consistency of the characters are amazing. Will Sora be revealed on the 12th day of the OpenAI Live event?
  https://slashpage.com/mintbear/SoraInLondon
• Hunyuan Video by Tencent | AI Video · Hunyuan | 2024/12/07 | mintbear
• Hailuo I2V-01-Live | AI Video · Hailuo | 2024/12/04 | mintbear
• Gen-3 Video Keyframing (Prototype) | AI Video · Updates · Gen-3 | 2024/12/03 | mintbear
• Leaked Sora Videos: Sora API Leak Incident | AI Video · Sora | 2024/11/27 | mintbear
• Leaked Sora Gallery | AI Video · Sora | 2024/11/27 | mintbear
• Luma Updates - with Image Tools | AI Image · AI Video · Updates · Luma | 2024/11/25 | mintbear
• Introducing Flux.1 Tools for General Users | AI Image · Flux | 2024/11/22 | mintbear

Sora In London

Status: 2024.12
Summary: Sora was shown at the London C21Media Keynote. The realistic look of the footage and the consistency of the characters are amazing. Will Sora be revealed on the 12th day of the OpenAI Live event?
Category: Sora
Tag: AI Video
Date: 2024/12/09
Created by: mintbear
SP: https://slashpage.com/mintbear/SoraInLondon
Mint Bear 2024.12.09
The Sora video was shown at the London C21Media Keynote event. Above is the full two-minute video that was displayed on the event screen. The first minute is an edit of footage created with Sora, and the second minute is a commentary on the image-to-video workflow and the consistency of the footage.
The first minute consists of three stories (#1 Viking / #2 Jungle / #3 Snowfield), shown below, and appears to be edited together from generated Sora clips.

Sora screenshot

The images below are screen captures of the video above.

# Story 01 _ Viking War

# Story 02 _ in Jungle

# Story 03 _ Frozen Frontlines

Image-to-Video

This demonstration showcased image-to-video (I2V) capabilities.
The presenter reportedly used Midjourney images as input, and some of the resulting 'Midjourney-to-Sora' clips appear to have been edited into the demonstration video above.
SORA OUTPUT: MULTIPLE TAKES
"The camera follows determined female warrior's eyes amidst a chaotic battlefield. Her face is spattered with mud and blood, her piercing blue eyes exuding intensity and resolve. She wears a chainmail and leather armor, adorned with a red leather symbol, and her blonde hair is twisted and windswept. The warrior holds a sword, ready for combat. In the background, other armored warriors, holding shields and weapons, prepare for battle, their figures blurred by the misty, overcast setting. The atmosphere is tense, with a sense of urgency and anticipation, capturing the gritty reality of medieval warfare and the warrior's unwavering courage and readiness to face the conflict. Shot on 35mm film, muted color, strong depth of field."
This image-to-video (I2V) footage preserves the blood stains from the source image remarkably well, and many viewers were amazed by some of the actions, facial expressions, and realistic direction. That said, parts of the direction and movement are slightly disappointing.
The prompt is full of tension and urgency, but mid-clip there are exaggerated facial expressions and awkward, out-of-context action poses that would be hard to use. The footage is generated at up to one minute, yet the prompt does not really control it. And since this is a demonstration, it presumably shows a selected good result.
In fact, natural image-to-video (I2V) generation has already been implemented well in Hailuo, Kling, and others.
What matters is that clips run up to one minute, and whether they can be controlled with text prompts. Even if a one-minute video is generated, what if the result does not meet the user's needs? In the end, it has to be cut and edited.

In that case, a method that extends a clip by 6 or 10 seconds at a time, following the user's intent from the start, would be more useful.
It also matters whether a UI, editing features, and auxiliary tools are provided. The initial introduction video in February showed video-to-video (V2V) editing driven by natural-language prompting, and I hope to see a solid implementation of those functions.

AI Video Creation with Unmatched Detail and Realism

Comments from presenter Chad Nelson in the latter half of the video
“So I'll be candid, I mostly used Midjourney 6.1, and I like the resolution. One of the reasons I chose this image is that a classic case where you see AI fall apart is high-density pixel patterns. If you notice her face and her skin, there's clearly a lot of mud and blood. But if you look at the detail here on the Sora screen, Sora not only keeps that mud and those pixel patterns intact without artifacting, it actually gives them 3D depth. And if you look at the tip of her nose, you can see the blood gliding down and hardening there. That's Sora basically saying: we know how the world operates, so how do we take this image and give it more detail? There's even a little blood splattered on her teeth. Obviously, her teeth are a little nice for that era. But the fact is, Sora never even saw her teeth in the JPEG.”


Sora v2?

There are posts circulating that say it will launch as "Sora v2", though we don't know whether that will be the official name. The rumored features:
  • 1-minute video output
  • Text-to-video
  • Text+Image-to-video
  • Text+Video-to-video
I haven't even experienced v1 yet, so I suspect "v2" is just an arbitrary name.

Mintbear Comment

OpenAI 12 Days Live: Will Sora Come Last?

We're on day 3 of OpenAI's 12-day live event, and rumors say Sora is about to be released.
Some say Sora may be announced on the last of the 12 days, and Sam Altman himself has been dropping hints. Sora was first announced back in February, so it would have been a burden for Sam Altman to end 2024 without any further announcement.

When will it be released and how much will it cost to subscribe?

However, it is not known whether it will be released to the general public. Even if it is, there may be restrictions on video length or quality, and subscription costs may differ.
The o1 Pro subscription announced a few days ago costs $200 (about 280,000 won), and it seems unlikely that Sora will be included. If it is, I would invest in Pro. Still, I think Sora should be a separate service to maintain its branding and quality; given the scale of the servers it will require, it is not a product that can simply be bundled in.
We will analyze the released videos themselves in a separate post.

Sora Contents

For more on Sora, please also refer to the related posts below.
If you found this post helpful, please like or comment below :)
Feel free to ask questions or share your video creations.
See you again-
👍