Google Demos Text-to-Video AI Model Imagen Video

After Meta announced Make-A-Video, its text-to-video AI model, Google demoed its own equivalent, Imagen Video. It generates 1280×768 video at 24 frames per second from written prompts. From text-to-image to text-to-video, AI diffusion models have made significant progress over the past six months. According to Google researchers, Imagen Video can produce videos in different styles, for example generating clips in the manner of Van Gogh's work. Google hopes that a general video synthesis model will significantly lower the barrier to creating high-quality content. Imagen Video is trained on the LAION-400M image-text dataset along with 14 million video-text pairs and 60 million image-text pairs, and, like Meta's model, it will not be made available to the public in the near term.

This article is reprinted from: https://www.solidot.org/story?sid=73003
This site republishes the article for archival purposes only; copyright belongs to the original author.