External publication

Bias against novelty in science: a cautionary tale for users of bibliometric indicators

Research which explores uncharted waters has a high potential for major impact but also carries a higher uncertainty of having impact.

Publishing date
09 June 2016

Viewing scientific research as a combinatorial process, the authors measure novelty in science by examining whether a published paper makes first-time-ever combinations of referenced journals, taking into account the difficulty of making such combinations.
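The paper's full weighting scheme is more involved than can be summarised here, but the minimal Python sketch below illustrates the idea: score a paper by counting journal pairs in its reference list that never appeared together in earlier papers, weighting each new pair by a simple distance proxy (one minus the cosine similarity of the journals' historical co-reference profiles). The function names, inputs, and the specific weighting are illustrative assumptions, not the authors' exact method.

```python
"""Illustrative sketch of a combinatorial novelty score (not the authors'
exact implementation): count never-before-seen journal pairs in a paper's
reference list, weighting harder (more distant) pairs more heavily."""

from itertools import combinations
from math import sqrt


def cosine(u, v):
    """Cosine similarity of two sparse co-reference profiles (dicts)."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0


def novelty_score(ref_journals, prior_pairs, co_profiles):
    """Score one focal paper.

    ref_journals : journals referenced by the focal paper
    prior_pairs  : set of frozensets, journal pairs co-referenced in earlier papers
    co_profiles  : journal -> {other journal: co-reference count} from earlier papers
    """
    score = 0.0
    for a, b in combinations(sorted(set(ref_journals)), 2):
        if frozenset((a, b)) not in prior_pairs:
            # New combination; more distant pairs contribute more to the score.
            distance = 1.0 - cosine(co_profiles.get(a, {}), co_profiles.get(b, {}))
            score += distance
    return score


# Toy example: prior literature has only ever paired J1 with J2,
# so referencing J3 alongside them creates two new combinations.
prior_pairs = {frozenset(("J1", "J2"))}
co_profiles = {"J1": {"J2": 5}, "J2": {"J1": 5}}
print(novelty_score(["J1", "J2", "J3"], prior_pairs, co_profiles))  # 2.0
```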

They apply this newly developed measure of novelty to all Web of Science research articles published in 2001, across all scientific disciplines. They find that highly novel papers, defined as those making more (and more distant) new combinations, deliver high gains to science: they are more likely to be among the top 1% most highly cited papers in the long run, to inspire follow-on highly cited research, and to be cited in a broader set of disciplines.

At the same time, novel research is also more risky, reflected by a higher variance in its citation performance. In addition, the authors find that novel research is significantly more highly cited in “foreign” fields but not in its “home” field.

They also find strong evidence of delayed recognition: novel papers are less likely to be top-cited when a short citation window is used. Finally, novel papers are typically published in journals with a lower-than-expected Impact Factor.

These findings suggest that science policy, in particular funding decisions that rely on traditional bibliometric indicators based on short-term direct citation counts and Journal Impact Factors, may be biased against "high risk/high gain" novel research. The findings also caution against a mono-disciplinary approach in peer review when assessing the true value of novel research.
