Wan 2.7 AI Video Suite Rolls Out on Together AI Starting with Text-to-Video Generation
Alibaba’s Tongyi Lab has made its Wan 2.7 video generation models available through Together AI, the AI-native cloud platform, with the text-to-video component live as of April 3, 2026.
The release introduces a four-model suite that covers text-to-video, image-to-video, reference-to-video, and video editing, all accessible through the same APIs, authentication, and billing system developers already use on Together AI.
*Credit: Together AI*
The initial rollout focuses on the Wan 2.7 Text-to-Video model, accessible via the `Wan-AI/wan2.7-t2v` endpoint. It supports native 720p or 1080p output at durations between 2 and 15 seconds, with optional audio input and multi-shot narrative control driven directly by prompt language.
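As a rough illustration, a request to the text-to-video endpoint might be assembled as below. This is a hedged sketch, not Together AI's documented API: the field names other than the model ID (`prompt`, `resolution`, `duration_seconds`) are assumptions for illustration, so consult Together AI's API reference for the actual request shape.

```python
import json

# Hypothetical request payload for the Wan 2.7 text-to-video endpoint.
# Only the model ID comes from the announcement; the other field names
# are illustrative assumptions, not Together AI's documented schema.
payload = {
    "model": "Wan-AI/wan2.7-t2v",  # endpoint named in the announcement
    "prompt": "A lighthouse at dawn, slow aerial push-in, two shots",
    "resolution": "1080p",         # native 720p or 1080p supported
    "duration_seconds": 8,         # durations of 2-15 seconds supported
}

# The request itself would be an authenticated JSON POST, e.g. with `requests`:
#   requests.post(BASE_URL, headers={"Authorization": f"Bearer {API_KEY}"},
#                 json=payload)
print(json.dumps(payload, indent=2))
```

Because the suite shares Together AI's existing authentication and billing, the same API key used for other Together AI endpoints would apply here.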
Many people are looking for a free version, but Wan 2.7 is not free to use. It does, however, offer comparatively affordable pricing: serverless inference starts at $0.10 per second of generated video.
Image-to-video, reference-to-video, and dedicated video-edit models are scheduled to follow in the coming days, according to Together AI’s announcement.
Wan 2.7 builds directly on the capabilities introduced in Wan 2.6 by adding structured control surfaces that address common workflow friction in AI video production.
Users can now anchor both first and last frames for precise start-to-end interpolation, feed up to five reference images or videos for subject and character consistency, and apply plain-language instructions to edit existing clips without regenerating the entire sequence.
Some social media influencers are already using this reference-driven capability to generate AI "clones" of fans who submit their photos in comments and ask to be cloned. The resulting videos look nearly real, and many viewers cannot tell whether they are watching the original person or an AI-generated likeness.
The model also supports temporal feature transfer, including motion, camera behavior, and stylistic elements, along with native audio synchronization across the generated output.
The same model family includes image-generation and editing features released by Alibaba on April 1, 2026, under the Wan 2.7 Image and Wan 2.7 Image Pro variants.
Those add a “thinking mode” that applies chain-of-thought reasoning before output, improved long-text rendering across 12 languages, precise color palette control, and the ability to fuse up to nine reference images or generate sequential sets of up to 12 consistent frames.
Official documentation on the Wan AI site lists instruction-based video editing, creative video transfer, frame-guided continuation, and infinite video outpainting as core functions now available for download and API access.
Together AI’s blog post frames the release as a response to the practical limitations developers encounter once they move beyond initial prompt-to-clip generation.
The company states:
“AI video is easy to generate and hard to steer. A team can get a promising clip from a prompt, but continuing it, matching a reference, or revising it without starting over usually means leaving the model that made it and patching the rest together somewhere else. The more control a project needs, the more the workflow turns into re-renders, handoffs, and manual cleanup. That is the gap Wan 2.7 is built to close across generation, continuation, reference-driven workflows, and editing.”
The models are also listed on the official Wan AI platform at wan.video, where they are described as open-source and accessible via API and mobile apps.
If you want a free option, https://wan2-7.io/ currently offers 30 free credits on signup, though the offer may not last.
Additional hosting partners, including Picsart, Imagine.art, ComfyUI custom nodes, and Scenario, have integrated or announced support within the past week, expanding reach beyond Together AI's serverless inference.
Wan 2.7 therefore consolidates text-to-video, image-to-video, multi-reference control, instruction-driven editing, and native audio into a single pipeline that developers can call without switching providers or tools.
With OpenAI's Sora no longer accessible and xAI's Grok Imagine now entirely behind a costly paywall, Wan 2.7 stands out for anyone who enjoys creating AI videos but doesn't want to spend much.
The phased rollout of Wan 2.7 on Together AI gives teams immediate access to the core text-to-video endpoint while the remaining components land over the next several days.
