DeepSeek plans V4 multimodal model release this week, sources say

DeepSeek is set to launch its V4 multimodal large language model this week, its first major release since January 2025. The model can generate text, images, and video, and was optimized for specific hardware in collaboration with Huawei and Cambricon. None of the three companies immediately responded to requests for comment.

Key Points
- DeepSeek will release V4 this week.
- V4 is a multimodal model capable of producing text, images, and video.
- This is DeepSeek's first major launch since January 2025.
- Collaboration with Huawei and Cambricon focused on optimizing V4 for specific hardware.
- No immediate comments were received from DeepSeek, Huawei, or Cambricon.

Relevance
- The release follows the broader industry shift toward multimodal AI, one of the defining trends of 2025.
- V4 targets rising demand for AI systems that can generate and combine text, images, and video in a single model.
- The hardware collaboration with Huawei and Cambricon reflects a wider strategy among AI developers of co-optimizing models with chipmakers to maximize performance.

The launch of V4 would mark a significant step for DeepSeek, underscoring the industry's move toward integrated media generation and tighter model-hardware optimization.
