The Hindu
Moumita Koley

Explained | Is the National Institutional Ranking Framework flawed?

In a country as diverse as India, ranking universities and institutions is not an easy task. The Ministry of Education (formerly the Ministry of Human Resource Development) established the National Institutional Ranking Framework (NIRF) in 2016 to determine the critical indicators on which institutions’ performance could be measured. Since then, institutions nationwide, including universities and colleges, eagerly await their standings in this nationally recognised system every year.

How does the NIRF rank institutes?

Currently, the NIRF releases rankings in several categories, including ‘Overall’, ‘Research Institutions’, ‘Universities’, and ‘Colleges’, as well as in specific disciplines such as engineering, management, pharmacy, and law. The rankings are an important resource for prospective students navigating the labyrinth of higher education institutions in India.

The NIRF ranks institutes by their total score, which it computes from five indicators: ‘Teaching, Learning & Resources’ (30% weightage); ‘Research and Professional Practice’ (30%); ‘Graduation Outcomes’ (20%); ‘Outreach and Inclusivity’ (10%); and ‘Perception’ (10%).
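To make the arithmetic concrete, the sketch below shows how a weighted total of this kind is computed from the five weights listed above. It is only an illustration: the sub-scores assigned to each indicator are made up, and the NIRF's actual method for calculating each sub-score is not reproduced here.

# Illustrative sketch only: a weighted total built from the five NIRF indicator
# weights mentioned above. The per-indicator scores (0-100) are hypothetical.
weights = {
    "Teaching, Learning & Resources": 0.30,
    "Research and Professional Practice": 0.30,
    "Graduation Outcomes": 0.20,
    "Outreach and Inclusivity": 0.10,
    "Perception": 0.10,
}
scores = {  # made-up scores for one hypothetical institution
    "Teaching, Learning & Resources": 72.0,
    "Research and Professional Practice": 65.0,
    "Graduation Outcomes": 80.0,
    "Outreach and Inclusivity": 55.0,
    "Perception": 40.0,
}
# Each indicator contributes its score scaled by its weight.
total = sum(weights[k] * scores[k] for k in weights)
print(f"Total score: {total:.1f} / 100")  # prints 66.6 for this example

Institutions within a category are then ordered by this total, from highest to lowest.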

Academic communities have raised concerns about the construction of these indicators, the transparency of the methods used, and the overall framework. Much of this criticism focuses on the ‘Research and Professional Practice’ component of the evaluation, because it relies heavily on bibliometric measures.

What are bibliometrics?

Bibliometrics refers to the measurable aspects of research, such as the number of papers published, the number of times they’re cited, the impact factors of the journals they appear in, etc. The allure of bibliometrics as a tool for assessing research output lies in its efficiency and convenience compared to qualitative assessments performed by subject experts, which are more resource-intensive and time-consuming.

However, science-policy experts have repeatedly cautioned authorities against relying too much on bibliometrics as a complete assessment in and of itself. They have argued that bibliometric indicators don’t fully capture the intricacies of scientific performance, and that a more comprehensive evaluation methodology is needed.

The journal Science recently reported that a dental college in Chennai was using “nasty self-citation practices on an industrial scale” to inflate its rankings. The report spotlighted the use of bibliometric parameters to understand the research impact of institutions as well as the risk of a metric becoming the target.

What’s the issue with over-relying on bibliometrics?

This criticism has been levelled against the NIRF as well, vis-à-vis the efficacy and fairness of its approach to ranking universities. For example, the NIRF uses commercial databases, such as ‘Scopus’ and ‘Web of Science’, to obtain bibliometric data. But these databases are themselves works in progress, and aren’t impervious to inaccuracies or misuse. Recently, for example, ‘Web of Science’ had to delist around 50 journals, including a flagship journal of the publisher MDPI.

Similarly, the NIRF’s publication-metrics indicator considers only research articles, sidelining other forms of intellectual contribution, such as books, book chapters, and monographs, as well as non-traditional outputs like popular articles, workshop reports, and other forms of grey literature.

As a result, the NIRF passively encourages researchers to focus on work that is likelier to be published in journals, especially international journals, at the cost of work that the NIRF isn’t likely to pay attention to. This in turn disadvantages work that focuses on national or more local issues, because international journals prefer work on topics of global significance.

This barrier is more pronounced for local issues stemming from low- and middle-income countries, further widening an existing chasm between global and regional needs, and disproportionately favouring the narratives from high-income nations.

Is the NIRF transparent?

Finally, university rankings in general are controversial: the NIRF, the Times Higher Education World University Rankings, and the QS World University Rankings all have flaws. Experts have therefore emphasised that such rankings ought to be transparent about what data they collect, how they collect it, and how that data becomes the basis for the total score.

While the NIRF is partly transparent (it publicly shares its methodology), it doesn’t provide a detailed view. For example, the construction of its research-quality indicator is opaque, as its ranking methodology for research institutions illustrates.

The current framework for research institutions considers five dimensions for assessment and scoring: ‘metric for quantitative research’ (30% of the total score); ‘metric for qualitative research’ (30%); the collective ‘contributions of students and faculty’ (20%); ‘outreach and inclusivity initiatives’ (10%); and ‘peer perception’ (10%).

The first two dimensions are both based on bibliometric data and together make up 60% of the total score. However, the way the framework labels research quantity and quality is imprecise and potentially misleading.

‘Metric for quantitative research’ would be more accurately called ‘quantity of scientific production’, and ‘metric for qualitative research’ would be more accurately called ‘metric for research quality’. Both ‘quantitative research’ and ‘qualitative research’ are names of research methodologies, not indicators, yet the NIRF appears to treat them as indicators.

What’s the overall effect on the NIRF?

The case of the dental college is emblematic of the dangers of over-relying on one type of assessment criterion, which can open the door to manipulation and ultimately obscure the true performance of an institution. The Centre for Science and Technology Studies at Leiden University, the Netherlands, has specified ten principles that ranking systems must abide by, including accounting for the diversity of an institution’s research, its teachers’ teaching prowess, and the institute’s impact on society, among other factors.

The rankings also don’t adequately address uncertainty. No matter how rigorous the methods, university rankings invariably involve some level of ambiguity. The NIRF’s emphasis on rankings can also lead to unhealthy competition between universities, fostering a culture that puts metrics ahead of the thing they are meant to measure: excellence in education and research.

Dr. Moumita Koley is a consultant with the Future of Scientific Publishing project and an STI policy researcher and visiting scientist at the DST-Centre for Policy Research, Indian Institute of Science, Bengaluru.
