Alibaba has just launched Qwen 2.5, now also in its Max version. We are talking about an artificial intelligence model in direct competition with the big Western tech players we know well – such as OpenAI and Google – but above all with its compatriot DeepSeek, which offered a competitive open-source model at an extraordinarily lower cost.
DeepSeek first wreaked havoc on stock exchanges and in the global AI market, then attracted the attention of European and US privacy regulators, leading to accusations, bans and restrictions. Beyond the lights, shadows and bans, in just a few days DeepSeek left an indelible mark on the way we conceive of and use artificial intelligence.
Onto this technological battlefield steps Alibaba – a Chinese giant that needs no introduction – declaring, in simple terms, that its AI Qwen 2.5-Max puts DeepSeek in its place and gives the Western giants a hard time.
Is Qwen 2.5-Max really as competitive as Alibaba claims?
First of all, Qwen 2.5-Max was trained on over 20 trillion tokens, which means the model has been fed an enormous amount of information. On paper, then, Qwen 2.5-Max has excellent knowledge, coherence and reasoning ability.
However, more data also implies higher computational costs and possible biases, especially if the dataset is not well curated. Data quality matters more than quantity: a factor that already risks deflating the muscles Qwen's parent is flexing.
But Alibaba knows this well, which is why Qwen 2.5-Max uses a Mixture-of-Experts (MoE) architecture that activates only the parts of the model relevant to each request (a minimal sketch of this routing follows the list below). This brings computational efficiency, which means:
- lower resource consumption,
- lower energy consumption,
- reduced operating costs,
- reduced environmental impact,
- faster computation,
- better cost-effectiveness, since the cost per processed token can be lower than in models that use all of their parameters for every request.
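To make the idea concrete, here is a minimal, illustrative sketch of top-k expert routing in PyTorch. It is not Alibaba's implementation: the number of experts, layer sizes and routing details are arbitrary choices made purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    """Illustrative Mixture-of-Experts layer: a router scores the experts and
    only the top-k run for each token, so most parameters stay idle per request."""
    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)      # scores each expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                                # x: (tokens, d_model)
        scores = self.router(x)                          # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # keep only the top-k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                 # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Toy usage: 10 "tokens", each processed by only 2 of the 8 experts.
layer = TinyMoELayer()
print(layer(torch.randn(10, 64)).shape)                  # torch.Size([10, 64])
```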
Benefits that undoubtedly help to compensate. In addition, the model has been further refined through SFT (supervised fine-tuning on expert-labeled data to improve the quality of responses) and RLHF (reinforcement learning from human feedback to make responses more natural and aligned with user preferences). These interventions should therefore guarantee more accurate responses, better aligned with human preferences.
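For the curious, here is a deliberately simplified, hypothetical sketch of what the supervised step (SFT) boils down to: the model is trained with ordinary next-token cross-entropy to reproduce expert-written answers. The `model` below is assumed to be any causal language model returning logits; RLHF, the feedback step, is far more involved and is not shown.

```python
import torch
import torch.nn.functional as F

def sft_step(model, optimizer, prompt_ids, answer_ids):
    """One supervised fine-tuning step on a (prompt, expert answer) pair.
    `model` is assumed to be a causal LM returning logits of shape
    (batch, seq_len, vocab_size); the loss is computed only on the answer tokens."""
    input_ids = torch.cat([prompt_ids, answer_ids]).unsqueeze(0)
    logits = model(input_ids)
    # Each answer token is predicted from everything that precedes it.
    answer_logits = logits[0, prompt_ids.size(0) - 1:-1, :]
    loss = F.cross_entropy(answer_logits, answer_ids)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```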
On these fronts, and on language understanding, Qwen 2.5-Max has proved superior to DeepSeek in several benchmarks and gives the other major Western players a run for their money.

So yes: Qwen 2.5-Max really is as competitive as Alibaba says… but benchmarks and tokens are not everything. The battle over open-source availability, platform integrations and computational efficiency is no longer fought on pure performance alone.
Alibaba vs DeepSeek: A Home-Turf Challenge Between Two Opposing Philosophies
DeepSeek, like Qwen 2.5, is based on the MoE architecture described above, but Alibaba's AI wins on training tokens and in various benchmarks, which suggests that Qwen is more powerful. Let's not forget, though, that DeepSeek launched an open-source AI, accessible to developers and companies. That strategy has had a huge impact precisely because it could exponentially accelerate the progress and uses of AI… but what if it is too accessible?
It is no coincidence that DeepSeek has come under fire, yet it remains competitive because it plays on a field few are treading, one that could increasingly win people's attention and appreciation. Alibaba, by contrast, has preferred to stay in line with Western AI, keeping a degree of control over its technology, its uses and its distribution, even though Qwen-VL, Qwen-Audio and the Qwen 1.x and 2.x models – Lite and Standard – are in fact available as open source.
So only the less performant versions of Qwen are open source, and their licenses restrict commercial use, unlike DeepSeek's. This reveals two opposing philosophies: DeepSeek has chosen a more community-oriented strategy, aiming to democratize access to advanced language models; Alibaba instead protects its competitive advantage and monetizes through cloud services and APIs.
Can Qwen 2.5-Max really challenge the AI giants?
The short answer is yes, but the issue is a little more complex.
Qwen 2.5-Max has strong capabilities that put it in competition with many other models on a global scale, but its lead is not overwhelming. Moreover, there is no comparison data other than the official figures published by Alibaba itself.

Its rise undoubtedly influences the AI market: a battlefield that is difficult – if not impossible – to dominate and which, as we have seen, is shaped by factors beyond raw power. Alibaba therefore offers a valid and potentially more accessible alternative to the models of OpenAI and Google, for example, even if it remains less open than its compatriot DeepSeek.
Then you have to consider that Qwen 2.5-Max is cheaper than some models: it costs $1.60 per million input tokens and $6.40 per million output tokens, versus $5 and $15 per million for GPT-4. In this respect, Qwen 2.5-Max is roughly 2-3 times cheaper.
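As a back-of-the-envelope check of those figures (the token counts below are invented purely for illustration), the difference per request works out as follows:

```python
# Hypothetical request: 2,000 input tokens and 500 output tokens.
input_tokens, output_tokens = 2_000, 500

def cost(inp, out, price_in, price_out):
    """Cost in dollars, given prices per million tokens."""
    return inp / 1e6 * price_in + out / 1e6 * price_out

qwen = cost(input_tokens, output_tokens, 1.60, 6.40)   # Qwen 2.5-Max prices above
gpt4 = cost(input_tokens, output_tokens, 5.00, 15.00)  # GPT-4 prices above
print(f"Qwen 2.5-Max: ${qwen:.4f} vs GPT-4: ${gpt4:.4f} (~{gpt4 / qwen:.1f}x cheaper)")
# Qwen 2.5-Max: $0.0064 vs GPT-4: $0.0175 (~2.7x cheaper)
```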
Finally, to compete with its AI, Alibaba is aiming for tighter integration with its own cloud and e-commerce ecosystem, favoring the Asian market.
Its real ability to establish itself as a global leader will therefore depend on the quality of its real-world applications and on its capacity to scale outside China.
Want to try Qwen 2.5-Max? Here's how
To try Qwen 2.5-Max you have two main options:
Qwen Chat: access the model directly via the Qwen Chat web interface. Visit chat.qwenlm.ai, select “Qwen2.5-Max” from the model drop-down menu and start interacting with the AI in real time. For free, you can search the web, ask questions, translate, and generate images and videos… and, if you want, you can log in directly with your Google account.
Alibaba Cloud API: for more advanced integration, you can use the Qwen 2.5-Max API via Alibaba Cloud. Create an Alibaba Cloud account, activate the Model Studio service and generate an API key. The API is compatible with the OpenAI format, making it easy for developers to integrate; see the official documentation for implementation details.
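To give an idea of what that OpenAI-compatible call might look like from Python, here is a rough sketch. The base URL and the model identifier below are assumptions that may differ by region or change over time – check the current Model Studio documentation for the exact values.

```python
from openai import OpenAI  # pip install openai

# Assumed values: the base_url and model name may differ depending on your
# region and on the current Alibaba Cloud Model Studio documentation.
client = OpenAI(
    api_key="YOUR_ALIBABA_CLOUD_API_KEY",
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)

response = client.chat.completions.create(
    model="qwen-max",  # assumed identifier for Qwen 2.5-Max
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what a Mixture-of-Experts model is."},
    ],
)
print(response.choices[0].message.content)
```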
And yes, yours truly obviously had some fun with the chat…