Create new videos in a realistic and consistent way, either by applying the composition and style of an image or text prompt to an existing video (Video to Video), or by generating footage from words alone (Text to Video). The process is akin to filming something new without ever picking up a camera. In this video, we test Gen-2 from RunwayML and the open-source ModelScope text2video model.
The latter is based on a 1.7-billion-parameter model by Alibaba, hosted on ModelScope.
☕ Buy me a Coffee: https://ko-fi.com/promptengineering
Links:
Hugging Face Text-To-Video: https://huggingface.co/spaces/damo-vilab/modelscope-text-to-video-synthesis
Google Colab Notebook: https://colab.research.google.com/drive/1uW1ZqswkQ9Z9bp5Nbo5z59cAn7I0hE6R?usp=sharing
RunwayML: https://runwayml.com/follow-us/
Watermark remover: https://anieraser.media.io/app/
Original repo: https://modelscope.cn/models/damo/text-to-video-synthesis/summary
#text2video #aiart #text2image #runwayml #gen2