Artificial Intelligence is progressing at a brisk pace, and so is its adoption. On one hand, the threat of AI replacing jobs looms large; on the other, it is showcasing numerous ways to amplify human creativity. US-based Runway AI has introduced its latest AI model, Gen-3 Alpha, which the company calls ‘a new frontier for high-fidelity, controllable video generation.’

Gen-3 Alpha is the first in an upcoming series of models trained by Runway on a new infrastructure built for large-scale multimodal training. Runway claims the new model is a major improvement over Gen-2 in fidelity, consistency, and motion. The company describes it as a step towards building General World Models – systems that will understand the visual world and its dynamics, which it sees as the next major advancement in AI.

Ever since the launch of the Gen-3 Alpha model, internet users have been sharing their creations with the world. These high-definition videos showcase the versatility and range of the new AI model from Runway AI. Here is a look at some spellbinding videos made with Gen-3 Alpha.

Create your monster fiction

A text-to-video model like Gen-3 Alpha can truly amplify your creativity. A user on X (formerly Twitter) known as Uncanny Harry AI used the model to create a short video of a fictional monster rising from the River Thames in London. The video shows a ‘hideous monster’ emerging from the river, evoking Godzilla or a kaiju. The 11-second clip is cinematic, with a grim London scene under a cloudy sky and the monster slowly rising above the fierce waves.

Time-lapse pencil drawing

Another user, Anu Akash, who claims to be ‘exploring AI tools’ in her bio on X, shared a short video generated by Gen-3 Alpha showing a pencil drawing of a girl in time-lapse. Akash used a prompt describing a top-view time-lapse video of a pencil artwork drawn by hand, depicting a girl with ‘rabbit hair’ from beginning to end. She acknowledged that ‘rabbit hair’ was a typo in her prompt – she had intended “rabbit-like ears” – but seemed pleased with Gen-3 Alpha’s output nonetheless.

A floral storytelling

Gen-3 Alpha can materialize even your wildest dreams. Martin Haerlin, another X user, used the model to create a visual carousel of flowers: pink and red petals unfurling over a megacity, guns shooting flowers of all colors and sizes, a warrior’s bow turning into a sunflower, daisies floating in the air, and soldiers and martial artists manoeuvring flowers. In his post, Haerlin exclaimed that with Gen-3 Alpha it felt like his toolset for storytelling had been supercharged and uplevelled by leaps.

Create your sci-fi movie

Gen-3 Alpha could potentially turn your sci-fi ideas into reality. Former Google Maps AR/VR creator Bilawal Sidhu took to his X account to share his experiments with Runway AI’s Gen-3 Alpha. In a long thread of videos, he praised the model for its impressive particle simulation visuals, light interaction effects, and, in some cases, complex camera movements.

Sidhu also highlighted Gen-3 Alpha’s ability to maintain high-frequency detail, generate first-person shooter-style video, and respond to control via text prompts, despite imperfect physics. He also noted realistic motion graphics, physics, and city visualization. Although he found the human renderings good, he stated they were difficult to control. Sidhu said that heads-up display and augmented reality prompts produced realistic results.

Text prompts to control camera speeds

AI art enthusiast vkuoo shared a unique creation by Gen-3 Alpha – perhaps a first in AI text-to-video generation. The user showcased a demo in which camera speeds are controlled using text commands. When another user asked for the prompt behind the video, vkuoo responded: “Ultra-fast disorienting hyper-lapse racing through a tunnel into a labyrinth of rapidly growing vines. The tunnel lights flicker at high frequency, and the vines quickly grow to block the path. Rapid camera movement with intense focus shifts.”

A video of a cruising sports car

Heather Cooper, whose bio describes her as an AI educator and consultant, shared a stunning short video of a sports car cruising along wet pavement. The video, shot from a low angle, shows the futuristic car moving through a street flanked by neon lights. Cooper used the prompt – “Low-angle tracking shot following a sleek sports car with neon lights reflecting off the wet pavement.”

Rich details and realistic lip sync

Chrissie, another X user who is an AI video creator, shared a short clip created using Gen-3 Alpha. The clip shows a woman walking and speaking about Gen-3 Alpha. The user noted that Runway AI’s Gen-3 Alpha’s lip sync abilities are fun. “Look at her expression as she gives that light little shimmy at the end lol,” wrote Chrissie.

Hyper-realistic visuals

Digital artist and filmmaker Christopher Fryant shared a 53-second short film called ‘This Town isn’t Real’. Fryant generated the footage with the Gen-3 Alpha model and added his own editing and sound design, but said the output itself is entirely text-to-video. The film shows the camera panning through a night scene with people in motion, and at first glance it could pass for real footage.

Flying through time and landscapes

Blaine Brown, whose X bio says he is an innovation leader, tried Gen-3 Alpha for the first time and took to his X account to share the output. His prompt read – “A fly through a castle in Ireland that becomes a futuristic cyberpunk city with skyscrapers.” The video created by Gen-3 Alpha is rich in detail, accurately depicting the castle’s corner towers and cobblestone walkways before transitioning smoothly into a cyberpunk city with shimmering skyscrapers.

AI video models are a testament to the potential AI holds in the field of visual communication. Earlier this year, OpenAI shocked the world with its superior text-to-video model Sora. AI video models are not new, but in recent times more and more AI start-ups have been releasing models that outdo their predecessors.

Based on the above creations from various users, and judging against the video samples shared by OpenAI, it seems Runway’s Gen-3 Alpha is on par with Sora, even exceeding it in some cases. Sora is not yet publicly available. Emad Mostaque, former CEO of Stability AI, also shared a post drawing comparisons between Gen-3 Alpha and Sora.

Runway AI is one of the earliest startups to work on AI for video generation. Gen-3 Alpha, which is now generally available, allows users to make hyper-realistic AI videos from text, image, or even video prompts. Those signed up with the RunwayML platform can use the model. While Gen-1 and Gen-2 were free models, using Gen-3 requires a subscription starting at $12 per month per editor.