In 2010, two famous economists, Carmen Reinhart and Kenneth Rogoff, published a paper confirming what many fiscally conservative politicians had long suspected: that a country's economic growth stagnates once public debt exceeds a certain percentage of GDP. The paper found receptive ears in George Osborne, the UK's future Chancellor of the Exchequer, who cited it several times in a speech laying out what would become the political playbook of the austerity era: cutting public services to pay down the national debt.
There was just one problem with Reinhart and Rogoff's paper. They had inadvertently left five countries out of their analysis: a spreadsheet error meant they ran the numbers on only 15 of the 20 countries they thought they had selected. When other, lesser-known economists corrected this mistake and some other irregularities, the most striking part of the results disappeared. A relationship between debt and growth was still there, but the effects of high debt were much more subtle than the dramatic cliff edge alluded to in Osborne's speech.
Scientists, like the rest of us, are not immune to mistakes. “It’s clear that errors are everywhere, and a small portion of these errors will change the conclusions of the papers,” says Malte Elson, a professor at the University of Bern in Switzerland who studies, among other things, research methods. The problem is that not many people go looking for these errors. Reinhart and Rogoff's mistakes were only discovered in 2013, by an economics graduate student whose professors had asked the class to try to replicate the findings of prominent economics papers.
With fellow metascience researchers Ruben Arslan and Ian Hussey, Elson has created a way to systematically find errors in scientific research. The project, called ERROR, follows the model of bug bounties in the software industry, where hackers are rewarded for finding bugs in code. In Elson's project, researchers are paid to comb through papers for possible errors and are given bonuses for each verified error they discover.
The idea came from a discussion between Elson and Arslan, who encourages scientists to find errors in his own work by offering to buy them a beer for each typo they identify (with a limit of three per paper) and to pay 400 euros ($430) for an error that changes a paper's main conclusion. “We were both aware of articles in our respective fields that were completely flawed due to demonstrable errors, but it was extremely difficult to correct the record,” Elson says. All of these published errors could add up to a big problem, Elson reasoned. If a PhD student spends their doctorate chasing a result that turns out to be an error, that could equate to tens of thousands of dollars wasted.
Error checking is not a standard part of scientific publishing, says Hussey, a metascience researcher in Elson's lab in Bern. When a paper is submitted to a scientific journal, such as Nature or Science, it is sent to a handful of experts in the field, who offer their opinions on whether it is high quality, logically sound, and a valuable contribution to the field. But these reviewers typically do not check for errors, and in most cases they will not have access to the raw data or code they would need to root them out.