Meta’s new “Make-A-Video” AI makes videos out of text
For months now, DALL-E has been stealing the spotlight with its surreal text-to-image capabilities, but it now has a competitor: Meta's "Make-A-Video," an AI that can generate videos from text prompts.
As the name implies, the AI lets users describe a scene and then creates a short video matching that description. You will not get cinematic footage; the clips come out looking artificial, with distorted, jerky animation. Still, it is a testament to what can be achieved with AI.
It is not limited to text, either: Make-A-Video can also produce new videos from existing ones, or from a single image or a set of images.
In a Facebook post, Meta CEO Mark Zuckerberg described the work on this new AI as "amazing progress." He added, "It's much harder to generate video than photos because beyond correctly generating each pixel, the system also has to predict how they'll change over time."
Among the sample videos are "A teddy bear painting a portrait," "Robot dancing in Times Square" and "Cat watching TV with a remote in hand" (the cat has a human hand). A few samples look far more realistic, though: "An artist's brush painting on a canvas close up," "A young couple walking in a heavy rain" and "Horse drinking water."
Despite the "amazing progress," the videos are shorter than five seconds and have no audio. Meta is also not letting users try the model themselves; it has only published samples for various prompts, so these could well be the examples that show the AI at its best.