April 10, 2026

Has Adobe just solved one of AI video’s biggest problems?



Mangled nightmares like the Coca-Cola AI advert and the withdrawn McDonald’s Christmas ad show that AI-generated video is still of dubious usefulness as a finished creative asset. Meanwhile, OpenAI’s closure of Sora has raised questions again about whether there’s demand for it and whether it can be both safe and profitable given the amount of resources it uses.

Adobe thinks it has a solution, at least for one of the major technical problems that makes AI video so difficult to use. It’s released a preview of an experimental product called MotionStream that allows users to take a more hands-on approach to controlling AI-generated footage.

Traditionally, that level of control requires manual animation, a process that typically takes hours, if not days depending on scope.

“Instead, the underlying video generator behind MotionStream is basically simulating the world in real time. So, the elephant’s legs move naturally, and the ears flap naturally as the elephant moves. The model provides you with knowledge about the world and you can interact with it.”

He thinks the same technology could also change how people edit photos and other still images.

“Once video becomes interactive, your canvas could be a video that’s always running. When you interact with it, you see a smooth video changing toward the edit you’ve specified. You can watch the transition, and you could even stop it in the middle if you like the intermediate result. There’s big promise here for both image and video.”


The paradigm shift behind MotionStream would also speed up work with AI video. Early models generated an entire video before delivering it to the user, because each frame was computed with reference to every other frame.

That improved generation quality, but Senior Research Scientist and MotionStream collaborator Richard Zhang says “knowing both the past and future isn’t how the universe works”.

Adobe Research wanted to remove that constraint, so it developed a method that generates a video in pieces, with future frames depending only on what has already been created, a process known as “autoregressive” generation. As users watch the first piece, the tool generates the second, making it possible to show generated video in something much closer to real time.
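The streaming idea can be illustrated with a toy sketch (this is an illustration of autoregressive, chunked generation in general, not Adobe's actual model; the function name and "frames" here are purely hypothetical). Each chunk of frames is produced using only the frames that came before it, and is handed to the viewer as soon as it is ready, while the next chunk is still being computed:

```python
from typing import Iterator, List

def generate_stream(num_chunks: int, chunk_size: int) -> Iterator[List[int]]:
    """Toy autoregressive generator: each chunk of 'frames' depends only on
    frames already produced (here a frame is just its index in the sequence).
    Chunks are yielded as soon as they exist, so playback can begin before
    the full video has been generated."""
    history: List[int] = []  # frames generated so far: the only context used
    for _ in range(num_chunks):
        # The new chunk is a function of the past alone, never of future frames.
        chunk = [len(history) + i for i in range(chunk_size)]
        history.extend(chunk)
        yield chunk  # stream this chunk while the next one is computed

# Consume the stream as a viewer would, chunk by chunk.
frames = [f for chunk in generate_stream(num_chunks=3, chunk_size=4) for f in chunk]
# frames == [0, 1, 2, 3, ..., 11]
```

The contrast with earlier models is that a bidirectional generator would have to finish all twelve frames before returning anything, whereas this loop can yield the first four immediately.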

For now, MotionStream remains in development as a research project. There’s no detail on whether, when, or how it could be added to tools like Adobe Firefly or Adobe’s video-editing software Premiere.
