Accountants’ tricks can help identify cheating scientists, new study finds

Benford’s law for the first digit. Graphical representation of Benford’s law applied to the first digits of a theoretical data set that fits the law perfectly, displaying the characteristic negative logarithmic curve of the probability of occurrence, P(d), as the value of the digit increases. Credit: Research Integrity and Peer Review (2023). DOI: 10.1186/s41073-022-00126-w

According to a new study from the University of St Andrews, auditing practices in the financial sector can be adapted to identify academic fraud.

In an article published in the journal Research Integrity and Peer Review, the authors show how reviewers of scientific studies can use simple but effective statistical tools to help detect and investigate suspicious data.

When scientists publish their latest findings in journals, sometimes the articles are taken down after they have been published. This can happen because corrections are needed, or because there is concern that the research involved was not done properly, or even that the data was manipulated or fabricated.

Retractions of scientific articles approached 5,000 worldwide in 2022 according to Retraction Watch, or nearly 0.1% of articles published. Although rare, cases of scientific fraud have a disproportionate impact on public trust in science.

Drawing on well-established financial auditing practices, the researchers recommend improving fraud controls within scientific institutions and publishers to more effectively weed out fraudsters. The article examines Benford’s Law, a tool used in professional auditing practice, which describes the relative frequency distribution expected for the leading digits of numbers in many naturally occurring data sets.
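As a rough illustration (a sketch, not code from the paper), the Benford expectation and a data set's observed leading-digit frequencies can be computed in a few lines of Python; the function names here are illustrative:

```python
import math
from collections import Counter

def benford_expected(d):
    """Benford probability P(d) = log10(1 + 1/d) for a leading digit d in 1..9."""
    return math.log10(1 + 1 / d)

def leading_digit(x):
    """First significant digit of a nonzero number."""
    s = str(abs(x)).lstrip("0.")  # drop sign, leading zeros and the decimal point
    return int(s[0])

def leading_digit_frequencies(data):
    """Observed relative frequency of each leading digit 1..9 in `data`."""
    digits = [leading_digit(x) for x in data if x != 0]
    counts = Counter(digits)
    return {d: counts[d] / len(digits) for d in range(1, 10)}
```

Under Benford's law, roughly 30.1% of leading digits should be 1 (log10 2 ≈ 0.301) and only about 4.6% should be 9; a large deviation flags a data set for closer inspection, not as proof of fraud.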

Historical accounts suggest that fraud has existed since the beginning of recorded science. The question has been brought increasingly to the fore in recent decades: from the retracted study published in The Lancet linking the measles, mumps and rubella vaccine to autism, to the recent accusations of scientific deception leveled against the president of Stanford University, high-profile potential fraud cases appear to be growing more common.

Although the reasons for the increase in cases of potential fraud are not entirely clear, it is clear that controls within scientific institutions and publishers could be strengthened.

The rise in article retractions comes at a time when society’s faith in science has already been shaken by suggestions from prominent figures that scientific facts are “fake news”.

Gregory Eckhartt, the paper’s lead author, said, “It’s time to empower individuals and institutions to separate science fact from fiction. With relatively simple statistical tools, anyone can question the veracity of many data sets.

“The trick behind these tools is that it’s actually harder than you might imagine to fabricate essentially random numbers, like the last digit of everyone’s bank balances. Financial auditors have known this for a long time and have a variety of tools at their disposal to review lists of figures and highlight those that seem odd (and therefore in need of investigation for fraud).”
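One such tool, sketched here in Python as an illustration rather than as a method taken from the paper, is a chi-square test on terminal digits: for genuinely measured data the last digits tend to be close to uniformly distributed, so a large statistic singles a list out for review:

```python
from collections import Counter

def last_digit_chi2(values):
    """Chi-square statistic comparing terminal-digit counts against a
    uniform distribution (expected count n/10 for each digit 0..9)."""
    digits = [int(str(v)[-1]) for v in values]
    expected = len(digits) / 10
    counts = Counter(digits)
    return sum((counts[d] - expected) ** 2 / expected for d in range(10))
```

With 9 degrees of freedom, a statistic above roughly 16.9 is significant at the 5% level; as with Benford's law, this is a prompt for investigation, not a verdict of fraud.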

The authors hope that this article will serve as an introduction to these tools for anyone who wants to challenge the integrity of a data set, not just in financial data, but in any field that generates a lot of data.

Graeme Ruxton, a professor in the School of Biology at the University of St Andrews and co-author of the paper, said: “This in-depth review necessarily requires open access to data. We hope this could be the starting point for discussions about institution-level reforms in the way data is stored and verified.”

In the future, we may see tighter controls on scientific data, with the possibility of data verification software using such statistical tools and machine learning algorithms not far behind.

However, Steven Shafer MD, professor of anesthesiology, perioperative medicine, and pain medicine at Stanford University, advises caution in how we interpret sources of fraud: “I think serial misconduct is a form of mental illness. For these people, dishonesty is simply the obvious way to succeed in a system where the rest of us are reckless to assume that people represent themselves honestly.”

Part of the problem, he says, stems from publication bias: “There is virtually no incentive to publish papers that confirm or refute previous studies.”

This focus on exciting results, which ultimately affects the career success of scientists, may be a key factor to consider in future reforms.

Ultimately, maintaining truth in science benefits everyone. Eckhartt said, “Everyone in science needs to be open to harnessing new approaches to make science demonstrably more trustworthy. Science simply doesn’t work without widespread public trust in scientists.”

More information:
Gregory M. Eckhartt et al, Investigating and Preventing Scientific Misconduct Using Benford’s Law, Research Integrity and Peer Review (2023). DOI: 10.1186/s41073-022-00126-w

Journal information:
Research Integrity and Peer Review
