On April 24, 2026, the abrupt termination of China’s influential Academic Journal Evaluation System (AJES) by the Ministry of Education sent shockwaves through global research communities, dismantling a de facto gatekeeper of scholarly prestige that had directed funding, promotions, and international collaboration for over a decade. This move, framed as a corrective to metric manipulation and citation cartels, leaves Chinese academics scrambling for alternatives while raising urgent questions about the future of research assessment in an era increasingly dominated by AI-driven analytics and geopolitical tech fragmentation. The vacuum created by AJES’s collapse is not merely administrative—it threatens to destabilize epistemic trust in a system where journal rankings have long functioned as proxies for quality, potentially accelerating a shift toward opaque, algorithmic evaluation tools that could deepen inequities between well-resourced institutions and those lacking access to proprietary AI evaluation platforms.
The Mechanics of Collapse: How AJES Shaped Research Incentives
For years, AJES operated as a tiered classification system dividing thousands of journals into Categories 1 through 4, with Category 1 representing the highest echelon of impact and rigor. Unlike Western metrics such as Journal Impact Factor (JIF), which rely on transparent citation counting, AJES incorporated opaque peer-review panels and institutional weightings that favored journals affiliated with elite Chinese universities like Tsinghua and Peking. This created a self-reinforcing cycle: publishing in Category 1 journals unlocked substantial government grants—sometimes exceeding ¥500,000 per paper—and was often a prerequisite for tenure at top-tier institutions. Crucially, AJES did not merely reflect research quality; it actively shaped it, incentivizing researchers to prioritize volume in approved venues over exploratory or interdisciplinary work. A 2025 study by the Chinese Academy of Social Sciences found that 68% of STEM researchers admitted to selecting projects based on AJES compatibility rather than scientific merit, a dynamic that contributed to growing concerns about homogenization and incrementalism in Chinese research output.
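The incentive mechanics described above can be made concrete with a toy model. The category names, weights, and bonus amounts below are illustrative assumptions, not actual AJES parameters; only the Category 1 figure echoes the reported ¥500,000 ceiling.

```python
# Hypothetical sketch of how a tiered journal ranking translates into
# funding incentives. Bonus amounts (in CNY) per category are assumed
# for illustration, not drawn from AJES itself.
CATEGORY_BONUS = {1: 500_000, 2: 150_000, 3: 40_000, 4: 0}

def career_incentive(publications):
    """Total bonus for a list of (journal, category) publications."""
    return sum(CATEGORY_BONUS[cat] for _, cat in publications)

pubs = [("J. Elite Physics", 1), ("Regional Methods Rev.", 3)]
print(career_incentive(pubs))  # 540000
```

Even this crude model shows why a researcher weighing one Category 1 paper against several Category 3 papers would rationally chase the approved venue, which is exactly the volume-over-exploration dynamic the 2025 study documented.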

The system’s downfall began with mounting evidence of gaming: citation rings, honorary authorship swaps, and pressure on editors to reject foreign-submitted papers to protect domestic rankings. In late 2025, an investigation by Retraction Watch revealed coordinated efforts to inflate metrics across 12 Category 1 journals, prompting the Ministry to suspend AJES pending reform. When no replacement emerged by early 2026, the decision was made to abolish it entirely—a stark admission that the system had become more of a liability than an asset.
What Comes Next? The Rise of AI-Powered Evaluation and Its Perils
In the absence of AJES, Chinese institutions are rapidly adopting AI-driven alternatives, most notably the “Scholarly Impact Matrix” (SIM) developed by iFlytek in partnership with the Chinese Academy of Sciences. SIM uses natural language processing to analyze full-text content, assessing novelty, methodological soundness, and even potential for real-world application—moving beyond citation counts to evaluate semantic depth. Early pilots show promise: in a 2026 internal trial at Fudan University, SIM correctly identified 22% of high-impact papers that AJES had misclassified due to low citation velocity in emerging fields like quantum topology. But critics warn that such systems risk encoding bias through training data. As Dr. Li Wei, a computational linguistics researcher at Zhejiang University, noted in a recent interview:
“If your AI is trained predominantly on papers from well-funded labs publishing in high-impact Chinese journals, it will learn to favor that style—not necessarily better science, but more familiar science. We’re automating the Matthew effect.”
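To see what content-based scoring might look like in miniature: one common approach is to rate a manuscript's "novelty" by its dissimilarity to an existing corpus. SIM's actual architecture is undisclosed, so the bag-of-words cosine sketch below is an illustrative stand-in, not iFlytek's method—but it already exhibits the bias Dr. Li describes, since everything depends on which corpus the evaluator is compared against.

```python
# Toy novelty scorer: 1 minus the highest cosine similarity between a
# manuscript and any document in a reference corpus. Purely illustrative;
# real systems would use learned embeddings, not raw word counts.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def novelty(text: str, corpus: list[str]) -> float:
    """Score in [0, 1]: higher means less overlap with prior work."""
    doc = Counter(text.lower().split())
    sims = (cosine(doc, Counter(c.lower().split())) for c in corpus)
    return 1.0 - max(sims, default=0.0)

prior = ["citation networks in condensed matter physics",
         "impact factor and journal rankings"]
print(round(novelty("topological order in quantum materials", prior), 2))  # 0.82
```

Note the fragility: a paper phrased like the reference corpus scores as unoriginal, while unfamiliar vocabulary scores as novel regardless of merit. Scale that up to a model trained predominantly on elite-lab prose, and the automated Matthew effect follows directly.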

This concern is amplified by SIM’s closed architecture. Unlike open-source metrics such as those offered by OurResearch’s Unpaywall analytics or the Leiden Ranking’s transparent methodology, iFlytek has not disclosed SIM’s model weights, training corpus, or validation benchmarks. This opacity raises alarms about platform lock-in: institutions adopting SIM may become dependent on a proprietary tool whose evolution is dictated by corporate and state priorities rather than scholarly consensus. The lack of interoperability with global systems like Dimensions or Scopus could further isolate Chinese research from international evaluation frameworks, complicating cross-border collaboration at a time when science increasingly demands transnational cooperation.
Global Ripple Effects: From Open Science to Tech Sovereignty
The AJES shutdown is not occurring in a vacuum. It coincides with broader efforts by China to reduce reliance on Western scholarly infrastructure, including the promotion of domestic platforms like CNKI (China National Knowledge Infrastructure) and Wanfang Data as alternatives to Elsevier’s Scopus and Clarivate’s Web of Science. Yet, as Nieman Lab reported in January, these efforts face steep hurdles: CNKI’s search algorithms remain less sophisticated than those of Semantic Scholar, and its international discovery tools are rarely used outside China due to language barriers and limited indexing of non-Chinese-language content.

Meanwhile, the global academic community is watching closely. In Europe, where initiatives like the Coalition for Advancing Research Assessment (CoARA) are pushing for responsible metrics that reject journal-based proxies, the AJES collapse is seen as both a cautionary tale and an opportunity. “China’s experiment shows what happens when you let a single ranking system become too powerful,” said Dr. Elena Rossi, a science policy expert at Erasmus University Rotterdam, in a March 2026 panel.
“But it also proves that change is possible—even in deeply entrenched systems. The question now is whether we build something better, or just replace one black box with another.”
For open-source advocates, the void left by AJES presents a chance to promote decentralized, community-governed alternatives. One such project, Metrics Today, uses blockchain-verified peer reviews and open API access to offer real-time impact scoring, and is seeing increased interest from Chinese researchers seeking transparent, portable metrics. Unlike proprietary AI tools, Metrics Today allows institutions to host their own nodes, ensuring data sovereignty—a critical factor given rising concerns about surveillance and data localization laws in China’s tech sector.
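The core idea behind "blockchain-verified" reviews can be sketched in a few lines. Metrics Today's actual protocol is not public; the hash-chained ledger below is a generic illustration of the tamper-evidence property such systems rely on: each review record is hashed together with its predecessor's hash, so altering any earlier review invalidates every later link.

```python
# Minimal hash-chained review ledger (illustrative, not Metrics Today's
# protocol). Each record commits to the previous record's hash.
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash for the first link

def append_review(chain: list[dict], reviewer: str, verdict: str) -> list[dict]:
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    record = {"reviewer": reviewer, "verdict": verdict, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return chain

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    prev = GENESIS
    for rec in chain:
        body = {k: rec[k] for k in ("reviewer", "verdict", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

chain = append_review([], "rev-a", "accept")
chain = append_review(chain, "rev-b", "minor revisions")
print(verify(chain))            # True
chain[0]["verdict"] = "reject"  # tamper with an earlier review
print(verify(chain))            # False
```

Because any node can rerun `verify` independently, the guarantee does not depend on trusting a central operator—which is precisely the data-sovereignty argument for institution-hosted nodes.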
The Takeaway: Assessment as Infrastructure
The demise of AJES underscores a fundamental truth: research evaluation is never neutral. It is infrastructure—shaping what gets studied, who gets funded, and which voices are deemed authoritative. As AI begins to permeate this space, the stakes grow higher. Transparent, auditable, and internationally interoperable metrics are not just desirable; they are essential to preserving the integrity of global science. Whether China’s next move will lean toward open collaboration or further technological isolation remains uncertain—but one thing is clear: the era of relying on simplified journal rankings, whether East or West, is ending. The challenge now is to build something that measures not just where research has been, but where it might yet go.