The RI² Index and the Global Research Integrity Crisis
Citations are a foundation of scholarly work. They not only help map the connections between ideas but also reflect an author’s intellectual contributions. As academic institutions increasingly seek to enhance their global presence, citation metrics have come to play a crucial role in how academic excellence is measured and rewarded. University rankings have become a key determinant of institutional prestige, attracting funding and international collaborations. These rankings rely heavily on bibliometric indicators such as citation rates and publication counts. However, these metrics have raised concerns about their impact on research integrity: the pursuit of rankings, in some cases, appears to incentivise practices that compromise the quality and ethical standards of scholarly output. To address these concerns, Professor Lokman I. Meho introduced the Research Integrity Risk Index (RI²), the first composite metric designed to offer a data-driven assessment of institutional-level risks related to research governance and publishing ethics. The index shifts the focus from research volume to research values, providing a more balanced framework for evaluating academic performance.
At the core of RI² are two transparent, independently verifiable indicators, both measured over the most recent two full calendar years: the R Rate, which captures the number of retracted publications per 1,000 scholarly articles, and the D Rate, which measures the proportion of publications appearing in journals delisted from leading indexing services such as Scopus and Web of Science. These two indicators, normalised on a scale from 0 to 1, are averaged to produce a composite score for each university. Institutions are then categorised into five risk tiers: Red Flag, High Risk, Watch List, Normal Variation, and Low Risk. These categories are determined by each institution’s placement within the global distribution of the world’s 1,000 most prolific publishing universities, offering a rare example of a fixed benchmarking approach that avoids the distortions of locally rescaled data.
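The arithmetic behind the composite is straightforward. Below is a minimal Python sketch of the calculation, assuming min-max normalisation across the benchmarked institutions and equal weighting of the two indicators; the institution names and figures are hypothetical, and the exact normalisation and tier cut-offs are those defined in the official RI² methodology linked at the end of this article.

```python
# Illustrative sketch of an RI^2-style composite score. The min-max
# normalisation and equal weighting shown here are assumptions for
# demonstration only; the official definitions are in the RI^2 methodology.

def r_rate(retractions: int, publications: int) -> float:
    """Retracted publications per 1,000 scholarly articles (R Rate)."""
    return 1000.0 * retractions / publications

def d_rate(delisted_papers: int, publications: int) -> float:
    """Proportion of publications appearing in delisted journals (D Rate)."""
    return delisted_papers / publications

def min_max(values):
    """Rescale a list of values onto the 0-1 interval."""
    lo, hi = min(values), max(values)
    return [0.0] * len(values) if hi == lo else [(v - lo) / (hi - lo) for v in values]

# Hypothetical institutions: (retractions, papers in delisted journals, total papers)
data = {
    "Univ A": (374, 500, 16000),
    "Univ B": (12, 40, 9000),
    "Univ C": (2, 5, 7000),
}

r_norm = min_max([r_rate(r, n) for r, _, n in data.values()])
d_norm = min_max([d_rate(d, n) for _, d, n in data.values()])

for name, r, d in zip(data, r_norm, d_norm):
    ri2 = (r + d) / 2  # composite score: mean of the two normalised indicators
    print(f"{name}: RI2 = {ri2:.3f}")
```

In the published index, an institution’s tier then follows from where its composite score falls within the distribution of the 1,000 benchmarked universities, rather than from any locally rescaled cut-off.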

The June 2025 edition of RI² uses a large, complex dataset, including 42,732 retracted articles matched through SciVal, with sources ranging from Retraction Watch and Medline to the Web of Science. In terms of delisted journals, the analysis tracks 974 unique titles removed between 2009 and 2025, 855 by Scopus and 169 by Web of Science. Delisting often results from ethical concerns, publishing misconduct, or the exposure of systemic weaknesses such as fake peer review, paper mills, and citation cartels. From the 206 delisted journals that were still indexed during 2023–2024 alone, 124,945 articles were examined for institutional affiliations. The global retraction rate for 2022–2023 stands at 2.2 per 1,000 publications, but this average obscures striking disciplinary disparities: mathematics and computer science lead with 9.3 and 7.6 retractions per 1,000, respectively, while arts and humanities report only 0.2.
A recent preprint by Prof. Lokman I. Meho titled “Gaming the Metrics? Bibliometric Anomalies and the Integrity Crisis in Global University Rankings,” published on 19 June 2025 on the open-access preprint server bioRxiv, further underscores the structural vulnerabilities that RI² seeks to address. Though not peer-reviewed, it investigates how fast-growing universities may be engaging in practices that inflate bibliometric performance at the cost of research ethics. The study analysed 18 universities (7 from India, 9 from Saudi Arabia, and one each from Lebanon and the UAE), selected from a pool of 98 for sharp declines in first and corresponding authorship, which the study identifies as an early warning sign of metric manipulation. Institutions in these four countries have reported publication growth exceeding 400% in five years, with seven Indian universities in the study recording growth of up to 766%, compared to a national average of 50%. According to the author, the surge in publications, increased citations, reliance on delisted journals, dense reciprocal collaborations, and higher retraction rates suggest systemic issues beyond organic academic development. The concentration of these trends in STEM fields, and the lack of corresponding growth in social sciences and clinical medicine, point to a ranking-driven expansion strategy.
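To make that screening logic concrete, here is a hypothetical filter in the spirit of the criteria the preprint describes: it flags institutions whose five-year publication growth exceeds 400% while their share of first- or corresponding-authored papers falls sharply. The thresholds, field names, and figures are illustrative assumptions, not the study’s actual code or data.

```python
# Hypothetical screening filter echoing the criteria described above.
# Thresholds, field names, and figures are illustrative assumptions,
# not the preprint's actual code or data.

from dataclasses import dataclass

@dataclass
class UniversityTrend:
    name: str
    pubs_start: int                  # publications in the baseline year
    pubs_end: int                    # publications five years later
    lead_author_share_start: float   # share of output with a local first/corresponding author
    lead_author_share_end: float

def growth_pct(t: UniversityTrend) -> float:
    """Five-year publication growth, in percent."""
    return 100.0 * (t.pubs_end - t.pubs_start) / t.pubs_start

def flagged(t: UniversityTrend, growth_threshold: float = 400.0,
            share_drop: float = 0.10) -> bool:
    """Flag rapid output growth combined with a falling lead-authorship share."""
    drop = t.lead_author_share_start - t.lead_author_share_end
    return growth_pct(t) > growth_threshold and drop > share_drop

example = UniversityTrend("Univ X", 600, 5200, 0.62, 0.35)
print(round(growth_pct(example)), flagged(example))  # 767 True
```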
The geographical distribution of RI² findings reinforces these concerns. South Asia, led by India, has emerged as the epicentre of global research integrity risks. Of the 92 institutions assessed in the region, 71 are from India, which alone accounts for 27 of the top 100 high-risk institutions globally. Graphic Era University in Dehradun tops the index with a staggering RI² score of 0.916 and a retraction rate of 37.29 per 1,000 articles (nearly 4 percent), far above the global mean of 2.2 per 1,000. Following closely are Vel Tech University (0.868), Koneru Lakshmaiah Education Foundation (0.834), and JNTU Hyderabad (0.817). Anna University, with nearly 17,000 publications, recorded 374 retractions, placing its retraction rate at an alarming 23.54 per 1,000. The University of Pune published 15.35% of its research in delisted journals, while institutions like the Saveetha Institute and Dr. APJ Abdul Kalam Technical University show similarly troubling trends.
This institutional clustering reveals deeper fault lines across Indian academia, particularly in technical education. The presence of multiple campuses of Jawaharlal Nehru Technological University, Visvesvaraya Technological University in Karnataka, and VIT University in Tamil Nadu indicates widespread issues in engineering and applied science disciplines. Southern states, notably Andhra Pradesh, Tamil Nadu, Telangana, and Karnataka, dominate the high-risk list, raising questions about regional governance models, institutional incentives, and publication oversight mechanisms. Even Kerala, often lauded for its strong educational standards, appears in the broader rankings.
The Middle East and North Africa region, assessed through 123 institutions, shows similar patterns, with Iran (29) and Saudi Arabia (25) standing out. Of the 25 Saudi institutions in the index, King Saud University alone contributed nearly 1,000 papers to delisted journals in 2023–2024, the second-highest number globally. The Lebanese American University recorded a 908% increase in publications from 2018–19 to 2023–24. Such surges, coupled with the declines in first and corresponding authorship noted above, are the kind of warning signal Meho identifies as indicative of metric manipulation and authorship outsourcing.
In East Asia and the Pacific, the index evaluated 529 institutions. China dominates with 334 universities, followed by Japan (44), South Korea (40), and Australia (32). While Chinese universities have shown rapid expansion in publication volume, their integrity risk scores vary widely. Indonesia’s Bina Nusantara University, ranked 11th globally, had an 18% D Rate, reflecting growing concerns in Southeast Asia’s rapidly evolving academic landscape. Malaysia (15), Indonesia (13), Thailand (9), and Vietnam (4) show limited but significant exposure to delisted journals.
Europe and Central Asia, comprising 470 institutions, reflects a more stable integrity profile, though variation persists. The United Kingdom and Germany lead in representation with 62 and 53 institutions respectively, followed by Italy (48), France (47), and Spain (44). Eastern European countries such as Poland (27) and Turkey (32) have a more uneven distribution, likely influenced by disparities in research funding and oversight. Russia’s 19 institutions show modest retraction rates but relatively high use of delisted journals, suggesting pockets of vulnerability.
In Latin America and the Caribbean, 58 institutions were evaluated, with Brazil accounting for more than half (35). Chile (8), Mexico (6), and Argentina (3) make up the remainder. While the region’s institutions have not surfaced in the highest-risk categories, concerns persist about the sustainability of research quality amid rising publication pressures and limited funding support.
North America presents a mixed picture. The United States, with 178 institutions included, and Canada, with 28, show generally low RI² scores, though a handful of universities reflect elevated retraction rates, particularly in the biomedical sciences. The region’s regulatory frameworks, broader engagement with integrity auditing, and transparency initiatives such as ORCID verification have likely contributed to mitigating systemic vulnerabilities.
In Sub-Saharan Africa, only 22 institutions were evaluated, reflecting a limited publication base. South Africa, the regional leader with 10 universities, demonstrates a relatively low risk profile. Other countries such as Ethiopia, Nigeria, and Kenya appear infrequently in the rankings.
By design, RI² offers a new means of navigating the complex terrain of global scholarly accountability. The tool’s resistance to manipulation lies in its simplicity and its reliance on verifiable bibliometric data. A Red Flag designation does not amount to an accusation of misconduct; rather, it is a signal inviting institutions to examine themselves and reform their academic practices for better research outcomes. Unlike university self-reporting mechanisms, RI² does not depend on unverifiable claims or opaque inputs. It is also a call for ranking agencies to incorporate integrity-sensitive metrics: proposals such as mandating ORCID IDs, auditing authorship patterns, and penalising publications in delisted journals could reorient incentives toward genuine scholarly value.
Read the RI² (Research Integrity Risk Index) methodology at https://sites.aub.edu.lb/lmeho/methodology/
Read “Gaming the Metrics? Bibliometric Anomalies and the Integrity Crisis in Global University Rankings” by Professor Lokman I. Meho at https://www.biorxiv.org/content/10.1101/2025.05.09.653229v3