G.I.C - GREEN INTERNATIONAL INFORMATION TECHNOLOGY SOLUTIONS COMPANY LIMITED

Wave of ChatGPT Boycott

March 12, 2026

AI models developed by Chinese companies have, for the first time, surpassed their U.S. counterparts in global token usage.

According to the late-February ranking by OpenRouter, a platform that simplifies interactions with AI models, Chinese developers now occupy several top positions among the world’s most widely used large language models (LLMs). This trend reflects the growing adoption of Chinese AI technologies by developers worldwide.

In practical use, many experts note that for simple tasks such as general information searches, most AI models perform similarly, making clear differences hard to see. On specialized and complex tasks, however, their strengths and weaknesses become more apparent. For example, the Claude Opus model family is known for strong coding capabilities, including bug detection and code generation; Gemini excels at machine learning projects and data manipulation; and ChatGPT is often recognized for its deep knowledge across a wide range of topics, including quantitative finance.

Specifically, the M2.5 model developed by Chinese startup MiniMax currently leads the global ranking, processing approximately 1.7 trillion tokens per week. The American model Gemini 3 Flash Preview from Google ranks second with around 997 billion tokens, followed by DeepSeek V3.2 with about 798 billion tokens. Other Chinese models, including Kimi K2.5 from Moonshot AI and GLM-5 from Zhipu AI, also appear among the world’s most widely used models, each exceeding 600 billion tokens in weekly usage by developers.


Total Token Usage

During the week of February 9–15, Chinese AI models collectively recorded 4.12 trillion tokens, surpassing for the first time U.S. models, which processed 2.94 trillion tokens over the same period. In the following week (February 16–22), usage of Chinese models grew further to 5.16 trillion tokens, while U.S. models declined to 2.7 trillion tokens.
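The week-over-week shift described above is straightforward to verify from the reported totals. The sketch below (a minimal illustration, using only the figures quoted in this article) computes the percentage change for each group:

```python
# Weekly token totals in trillions, as reported by OpenRouter (per the article).
chinese = {"Feb 9-15": 4.12, "Feb 16-22": 5.16}
us = {"Feb 9-15": 2.94, "Feb 16-22": 2.70}

def pct_change(old: float, new: float) -> float:
    """Week-over-week change, in percent."""
    return (new - old) / old * 100

cn_growth = pct_change(chinese["Feb 9-15"], chinese["Feb 16-22"])
us_growth = pct_change(us["Feb 9-15"], us["Feb 16-22"])

print(f"Chinese models: {cn_growth:+.1f}%")  # roughly +25.2%
print(f"U.S. models:    {us_growth:+.1f}%")  # roughly -8.2%
```

In other words, Chinese models grew by about a quarter in a single week while U.S. models contracted, widening the gap to nearly a 2:1 ratio by February 16–22.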


Analysts attribute this surge to two primary factors: the increased use of AI during the Lunar New Year period and the broader deployment of AI agents, which significantly increase token consumption per task. OpenRouter confirmed that demand for generating long-form content has grown noticeably in recent weeks, with MiniMax M2.5 leading in workloads consuming 100,000 to one million tokens, a typical range for agent-based AI workflows.
