When performance measurement is based on a limited set of criteria, people tend to adapt their behavior to those criteria to obtain the best result. This is a well-known bias, and it is why managers frequently change evaluation criteria: to discourage behavior that targets the criteria themselves rather than actual efficiency.
Researcher evaluation, whether for hiring or funding, gives a massive role to bibliometrics. The pressure is high: everyone knows "Publish or Perish". But publishing is not enough; a high Impact Factor is a must. With such stakes, it is no surprise that some have adapted: hasty publication, self-citation, duplicate or near-identical articles published twice, plagiarism for others, but also questionable tricks by publishers to inflate their Impact Factor.
Again, fraud accompanies every performance measurement system. But in recent years, a few cases of massive fraud have surfaced, in Canada, in China, and probably in other countries. And this might not be due solely to the bibliometrics obsession pushing the weakest toward unethical behavior. There might also be a structural reason, linked to the rapid growth of the researcher population over the last 40 years. If the number of researchers who must publish at any cost (and therefore the number of submitted articles) grows faster than the number of suitable journals, the publishing system becomes a bottleneck. The Grail of publication then becomes inaccessible to the average researcher, unless they are clever enough to game the system.
Let's take a simple model: start with 1,000 researchers evenly distributed across 10 fields of research, and 10 specialized journals, each publishing 100 articles per year in its own domain. If each researcher publishes once a year and subscribes to the journal of their field, all is well! They can promote their work to their colleagues and access the 100 articles published in their field.
Now move to 2,000 researchers: still 10 domains, hence 10 specialized journals, which must now publish 200 articles each… That's where it gets complicated! The obvious solution would be to multiply the number of journals, but that would also multiply the cost of journal subscriptions for researchers… The more researchers, the more articles published, the more articles to read. And each researcher must still publish at least once a year! Since researchers' reading time and journals' reviewing resources are both limited, the dilemma is clear: with too many competing journals in one domain, researchers no longer know which to choose and spend more time finding the right article. Conversely, with too few journals per domain, each journal must publish more and more articles, which makes it hard to preserve quality through peer review. Unless we accept that a part of Science is lost… and in that case, how do we choose which work to keep and which to condemn?
Once upon a time, the system worked well (the 1,000-researcher case in our model). But with the boom in the researcher population, this model may have reached its limits: researchers need to read more articles with the same efficiency, and must publish despite the publishing bottleneck. Otherwise, they will try to circumvent the system.
One side effect is the journals' workload for article selection. If the number of competent reviewers does not grow fast enough, either the reviewing time must increase or the time spent per article must shrink. For the most prestigious journals, selection delays have become unbearable (several months to more than a year!). Meanwhile, researchers need to publish more and more frequently. Long-term funding is scarce, so they need to publish two or three times a year… And today's trend toward collective publications takes even more time…
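The trade-off above can be made concrete with a back-of-the-envelope calculation. In this minimal sketch, all numbers (reviewer pool size, hours per reviewer) are invented for illustration, not figures from the article:

```python
# Illustrative sketch of the reviewing trade-off: with a fixed pool of
# reviewer-hours per year, more submissions mean less scrutiny per article
# (or, equivalently, a longer queue). All numbers are hypothetical.

def hours_per_article(submissions, reviewers, hours_per_reviewer=50):
    """Reviewer-hours available per submitted article in one year."""
    return reviewers * hours_per_reviewer / submissions

# Same reviewer pool, doubled submissions: scrutiny per article is halved.
print(hours_per_article(1000, 100))  # 5.0
print(hours_per_article(2000, 100))  # 2.5
```

Doubling the researcher population without growing the reviewer pool mechanically halves the attention each article can receive, which is exactly the quality-versus-delay dilemma described above.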
Oops! It looks like our model has a critical threshold, because the researcher population grows much faster than the journals' capacity to review the world's scientific production (though still not fast enough to match the growth of big publishers' revenues!).
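The threshold can be sketched as a toy simulation of the model: once yearly submissions exceed yearly review capacity, a backlog accumulates and compounds. The growth rates below are illustrative assumptions, not measured values:

```python
# Toy simulation of the article's model: 1,000 researchers, 10 journals
# publishing 100 articles each per year. Researchers grow faster than
# journal capacity (rates are hypothetical), so a submission backlog
# appears and then keeps growing.

def backlog_over_time(years=10, researchers=1000.0, journals=10,
                      capacity_per_journal=100.0,
                      researcher_growth=0.07, capacity_growth=0.02):
    """Return the unreviewed-article backlog at the end of each year."""
    backlog = 0.0
    history = []
    for _ in range(years):
        submissions = researchers  # each researcher submits one article/year
        capacity = journals * capacity_per_journal
        backlog = max(0.0, backlog + submissions - capacity)
        history.append(round(backlog))
        researchers *= 1 + researcher_growth
        capacity_per_journal *= 1 + capacity_growth
    return history

print(backlog_over_time())
```

In the first year the system is exactly balanced (backlog 0, as in the 1,000-researcher case); from then on the gap widens every year, which is the "critical threshold" effect: the bottleneck is not a one-off shortage but a compounding one.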
The old model has reached its limits. Pressure on researchers and journals will keep increasing, with negative consequences for review quality, publication delays, knowledge availability and accessibility, and ultimately for the quality of the scientific articles researchers write.
Luckily, the Internet may offer other models: PubPeer, with its collaborative review process (Open Peer Review), or GinGo, with artificial intelligence helping scientists when the available reviewing workforce is not enough!