{"id":26447,"date":"2026-04-21T08:23:09","date_gmt":"2026-04-21T08:23:09","guid":{"rendered":"https:\/\/bitunikey.com\/news\/alibaba-introduces-qwen-3-6-max-preview-as-its-most-advanced-ai-model-yet\/"},"modified":"2026-04-21T08:23:29","modified_gmt":"2026-04-21T08:23:29","slug":"alibaba-introduces-qwen-3-6-max-preview-as-its-most-advanced-ai-model-yet","status":"publish","type":"post","link":"https:\/\/bitunikey.com\/news\/alibaba-introduces-qwen-3-6-max-preview-as-its-most-advanced-ai-model-yet\/","title":{"rendered":"Alibaba introduces Qwen 3.6-Max-Preview as its most advanced AI model yet"},"content":{"rendered":"<div class=\"post-detail__content blocks\">\n<p>Alibaba has rolled out a preview of its most advanced AI model yet, stepping up its push into the top tier of global AI development.<\/p>\n<div id=\"cn-block-summary-block_e3cda3411a98f82bcbb440bc51925e07\" class=\"cn-block-summary\">\n<div class=\"cn-block-summary__nav tabs\">\n        <span class=\"tabs__item is-selected\">Summary<\/span>\n    <\/div>\n<div class=\"cn-block-summary__content\">\n<ul class=\"wp-block-list\">\n<li>Alibaba launched Qwen 3.6 Max Preview, its most advanced AI model, with top rankings across multiple coding and agent benchmarks.<\/li>\n<li>The model is offered as a proprietary hosted system, marking a move away from the company\u2019s earlier open access approach.<\/li>\n<\/ul><\/div>\n<\/div>\n<p>According to an <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/x.com\/Alibaba_Qwen\/status\/2046227759475921291?s=20\">X post<\/a> from Alibaba\u2019s Qwen team, the new model, Qwen 3.6-Max-Preview, has taken the lead across several key benchmarks, particularly in coding and agent-based tasks. 
Internal testing placed it ahead on SWE-bench Pro for real-world software work, Terminal-Bench 2.0 for command-line execution, and SkillsBench for general problem-solving, alongside strong results in tool use and web interaction benchmarks.<\/p>\n<p>Performance gains extend beyond coding. SuperGPQA scores rose by 2.3%, pointing to stronger reasoning ability, while QwenChineseBench improved by 5.3%, underlining better performance on Chinese-language tasks. The model also ranked first on ToolcallFormatIFBench for instruction following, outperforming competing systems, including Claude.<\/p>\n<p>The release is now live through Qwen Studio and Alibaba Cloud\u2019s Model Studio API under the identifier <code>qwen3.6-max-preview<\/code>. Developers can integrate it without major changes, as the API supports both the OpenAI and Anthropic request formats.<\/p>\n<h2 class=\"wp-block-heading\">Proprietary push replaces open model strategy<\/h2>\n<p>Alibaba\u2019s latest move signals a noticeable change in direction. Earlier versions of Qwen built momentum through open-source access, helping the model family gain widespread adoption. Max-Preview, however, is a hosted proprietary system with no open weights.<\/p>\n<p>Lower-tier models remain open source, but the flagship tier is now positioned as a paid, controlled offering. The shift comes just days after Alibaba open-sourced Qwen 3.6-35B-A3B, a model designed to run efficiently by activating only 3 billion of its 35 billion parameters during inference, cutting compute costs while maintaining output quality.<\/p>\n<p>Combined, the Qwen 3.6 lineup now spans multiple use cases: Max-Preview sits at the top for high-end workloads, the Plus variant targets balanced tasks, Flash focuses on speed, and 35B-A3B supports local deployments.<\/p>\n<p>A new feature introduced with Max-Preview, called <code>preserve_thinking<\/code>, allows the model to carry reasoning traces across multiple interactions. 
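<\/p>\n<p>Because the API accepts OpenAI-format requests, a call enabling the feature can be sketched with any OpenAI-compatible client. This is a hypothetical illustration only: the endpoint URL below is a placeholder, and exactly where the <code>preserve_thinking<\/code> flag belongs in the request body is an assumption rather than something documented in the announcement.<\/p>\n<pre class=\"wp-block-code\"><code>from openai import OpenAI  # any OpenAI-compatible client

client = OpenAI(
    base_url='https:\/\/example.invalid\/v1',  # placeholder, not the real endpoint
    api_key='YOUR_API_KEY',
)

resp = client.chat.completions.create(
    model='qwen3.6-max-preview',
    messages=[{'role': 'user', 'content': 'Plan the next step of this task.'}],
    # assumed flag name, taken from the feature name in the announcement
    extra_body={'preserve_thinking': True},
)
print(resp.choices[0].message.content)
<\/code><\/pre>\n<p>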
Alibaba recommends it for agent-driven workflows where maintaining context across long sessions is important.<\/p>\n<p>Alibaba described the release as an ongoing project, noting the model is still under active development and likely to improve in future updates. Qwen 3.6-Max-Preview currently supports a 256k-token context window and is limited to text input, with no image capabilities at launch.<\/p>\n<h2 class=\"wp-block-heading\">Industry transition toward monetization<\/h2>\n<p>Alibaba recently <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/www.reddit.com\/r\/LocalLLaMA\/comments\/1sn9fdl\/qwencode_cli_free_tier_ended_apr_15_whats_the\/\">shut down<\/a> the free tier of Qwen Code, while MiniMax updated its open-source license to restrict commercial use without approval. Both actions point to a gradual move away from the free-access strategies that initially drove adoption.<\/p>\n<p>Qwen\u2019s growth has been notable. The model family overtook Meta\u2019s Llama as the most widely deployed self-hosted system, with much of that traction built during its open-access phase. At the same time, Chinese open models expanded their share of global usage from 1.2% in late 2024 to around 30% by the end of 2025.<\/p>\n<p>Max-Preview now sits at the center of Alibaba\u2019s effort to compete directly with leading frontier models from OpenAI and Anthropic.<\/p>\n<p>Independent analysis from Artificial Analysis <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/artificialanalysis.ai\/models\/qwen3-6-max\">ranks<\/a> the model as the second-best performer behind Muse Spark, placing it well above the average for reasoning models in its pricing category.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Alibaba has rolled out a preview of its most advanced AI model yet, stepping up its push into the top tier of global AI development. 
Summary Alibaba launched Qwen 3.6&hellip;<\/p>\n","protected":false},"author":1,"featured_media":12195,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-26447","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-cryptocurrency"],"_links":{"self":[{"href":"https:\/\/bitunikey.com\/news\/wp-json\/wp\/v2\/posts\/26447","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/bitunikey.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/bitunikey.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/bitunikey.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/bitunikey.com\/news\/wp-json\/wp\/v2\/comments?post=26447"}],"version-history":[{"count":1,"href":"https:\/\/bitunikey.com\/news\/wp-json\/wp\/v2\/posts\/26447\/revisions"}],"predecessor-version":[{"id":26448,"href":"https:\/\/bitunikey.com\/news\/wp-json\/wp\/v2\/posts\/26447\/revisions\/26448"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/bitunikey.com\/news\/wp-json\/wp\/v2\/media\/12195"}],"wp:attachment":[{"href":"https:\/\/bitunikey.com\/news\/wp-json\/wp\/v2\/media?parent=26447"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/bitunikey.com\/news\/wp-json\/wp\/v2\/categories?post=26447"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/bitunikey.com\/news\/wp-json\/wp\/v2\/tags?post=26447"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}