I found it interesting that the investigative team did a last-digit analysis, because I did not include any last-digit analysis in my complaint. The idea behind last-digit analysis is that the last digits of a group of related numbers should, in most cases, be uniformly distributed from 0 to 9. Fraudsters, however, try to cover their tracks, and in doing so they inadvertently avoid certain digits, such as 5, 0, or even numbers. In their analysis, the JMR team appears to have found a problem with the last digits in Table 1 (the anomalous table that led to the retraction). Here is what they said:
In order to review the concerns raised to the journal with only the published table and corrected table, but no data, the independent panel examined the rightmost digits in the reported data of the published table—a standard method of detecting data fraud (Mosimann, Wiseman, and Edelman 1995). The panel found that the published table, as the complaints had suggested, is inconsistent with the data process that generated the data.
– JMR retraction notice
For the last-digit analysis that I did (but did not include in the complaint), instead of looking at the Table 1 data as this team did, I looked at all of the coefficients and corresponding t-values reported in the paper. Again, I never reported this, but I did notice a strong preference for 1, 2, and 9 in the last-digit position. Sometimes weird things happen, and what looks like a last-digit preference turns out to be an artifact of some legitimate data-generating process. In this case, though, the preference is shared between the t-values and the coefficients, which should not be affected in the same way by any such artifact. It's hard to explain unless we say, "well, somebody just made these numbers up." I'm not saying the numbers are made up, but it does get hard to find any other explanation.
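The check described above can be sketched with a simple chi-square test of the last digits against a uniform distribution. The numbers below are hypothetical stand-ins, not the paper's actual coefficients or t-values; they are chosen only to illustrate what a preference for 1, 2, and 9 looks like to the test.

```python
from collections import Counter

def last_digit_counts(values):
    """Count the final digit of each reported number. Values are kept
    as strings so trailing zeros (e.g. "0.250") are preserved."""
    digits = [v.strip()[-1] for v in values if v.strip() and v.strip()[-1].isdigit()]
    return Counter(digits)

def chi_square_uniform(counts):
    """Pearson chi-square statistic of the digit counts against a
    uniform distribution over 0-9. With 9 degrees of freedom, values
    above ~16.9 are significant at the 5% level."""
    n = sum(counts.values())
    expected = n / 10
    return sum((counts.get(str(d), 0) - expected) ** 2 / expected for d in range(10))

# Hypothetical reported values, skewed toward last digits 1, 2, and 9.
reported = ["0.121", "2.49", "0.312", "1.92", "0.219", "3.41", "0.152", "2.29"]
counts = last_digit_counts(reported)
print(sorted(counts.items()))          # only digits 1, 2, and 9 appear
print(chi_square_uniform(counts))      # large statistic -> non-uniform
```

In a real analysis the sample would be much larger, and the digits should be taken from the published strings rather than from parsed floats, since rounding and trailing zeros matter.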