DeepSeek Unveils Long-Awaited AI Model V4 with Enhanced Performance

DeepSeek has launched the long-awaited V4 version of its AI model in two variants, Pro and Flash, both with a default context window of 1 million tokens. In benchmarks, the Pro model performs on par with GPT-5.4, Gemini 3.1 Pro, and Claude Opus 4.6, excelling particularly in general knowledge, STEM fields, and autonomous coding tasks. The Flash variant trails the Pro model only slightly while offering markedly higher speed at lower cost.

Users should notice immediate improvements in everyday tasks: DeepSeek has matched leading Western models in both speed and intelligence. Behind the scenes, V4 brings significant architectural advances, requiring only 27% of the compute used by its predecessor, V3.2. An optimized attention mechanism lets the model retain only the most relevant information, allowing it to process ten times more data with the same resources.

DeepSeek-V4 is specifically optimized for Chinese Huawei chips, which contributed to the delay in its release. The company now openly discusses its pursuit of independence from Nvidia hardware and plans further tuning for local accelerators, which should make V4 faster and cheaper over time. The model is already available for free through DeepSeek Chat.
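The article does not disclose how V4's attention mechanism works, but "retaining only crucial information" is characteristic of sparse attention, where each query attends only to its highest-scoring keys instead of the full context. As a minimal illustrative sketch (not DeepSeek's actual method), here is a single-head top-k sparse attention in NumPy; the function name, shapes, and `top_k` parameter are assumptions for illustration:

```python
import numpy as np

def topk_sparse_attention(q, k, v, top_k):
    """Single-head attention that keeps only the top_k highest-scoring
    keys per query, a common way to cut attention cost on long contexts.
    Hypothetical sketch; not DeepSeek's published architecture."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)  # (n_queries, n_keys) similarity scores
    # Find the top_k-th largest score in each row, then mask out
    # everything below it so only the "crucial" keys survive.
    kth = np.sort(scores, axis=-1)[:, -top_k][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)
    # Softmax over the surviving scores only; masked entries become 0.
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))    # 4 queries, head dim 8
k = rng.standard_normal((16, 8))   # 16 keys in the context
v = rng.standard_normal((16, 8))
out = topk_sparse_attention(q, k, v, top_k=4)
print(out.shape)  # each query mixes only its 4 best-matching values
```

Because each query aggregates a fixed number of keys regardless of context length, the per-query cost of the value mixing stays constant as the context grows, which is the intuition behind processing far more data with the same resources.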

