One must be intrinsically motivated to be ethical and honest. Integrity cannot be imposed by peer review.
This is not another SFYL (sorry for your loss) tale of cryptocurrency scamming. That is merely a grace note. Academic plagiarism can happen, regardless of whether bitcoin, blockchains, or cryptocurrency are involved.
One’s own professional community, and the moral implications of having lied and plagiarized (i.e., shame), should be enough to keep scientific and other original researchers (and investigatory work in general) honest. It clearly isn’t. I make that observation based on this passage via Andrew Gelman (emphasis mine):
> Peer review is not perfect. But saying you do peer review, and then not doing it, that’s really bad.
>
> The Carlisle story is old news, and I know that some people feel that talking about this sort of thing is a waste of time compared to doing real science. And, sure, I guess it is. But here’s the thing: fake science competes with real science. NPR, the Economist, Gladwell, Freakonomics, etc.: they’ll report on fake science instead of reporting on real science. After all, fake science is more exciting! When you’re not constrained by silly things such as data, replication, coherence with the literature, you can really make fast progress! The above story is interesting in that it appears to feature an alignment of low-quality research and unethical research practices.
No, Andrew Gelman, recounting the Carlisle story is NOT a waste of time. Thank you for making us aware of what is going on. It is easy to be lulled into complacency (I include myself) by new paradigms of open access, open peer review, data sharing, and so forth.
Benjamin Carlisle wrote a blog post, “Proof of pre-specified endpoints in medical research with the bitcoin blockchain”, in August 2014. The paper that did the copying, “RETRACTED: How blockchain-timestamped protocols could improve the trustworthiness of medical science”, was published by F1000 in 2016. The plagiarized version received attention far and wide. It was indexed by the U.S. National Library of Medicine, National Institutes of Health (ncbi.nlm.nih.gov), since retracted, and was even cited in printed books.
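Carlisle’s underlying idea is a simple cryptographic commitment: hash the trial protocol before the study begins, timestamp only the digest (for example, by embedding it in a bitcoin transaction), and reveal the full protocol later so anyone can check that the endpoints were not changed after the fact. A minimal sketch of that commit-and-verify step, using only Python’s standard `hashlib` (the function names here are illustrative, not from Carlisle’s post):

```python
import hashlib


def commit_protocol(protocol_text: str) -> str:
    """Return the SHA-256 digest of a trial protocol.

    Publishing this digest (e.g., embedding it in a blockchain
    transaction) timestamps the protocol without revealing it.
    """
    return hashlib.sha256(protocol_text.encode("utf-8")).hexdigest()


def verify_protocol(protocol_text: str, published_digest: str) -> bool:
    """Check that a later-revealed protocol matches the digest
    that was timestamped earlier."""
    return commit_protocol(protocol_text) == published_digest


# A researcher commits to endpoints before the trial...
protocol = "Primary endpoint: all-cause mortality at 30 days"
digest = commit_protocol(protocol)

# ...and anyone can verify the revealed protocol afterwards.
assert verify_protocol(protocol, digest)

# Any post-hoc edit to the endpoints produces a different digest,
# so outcome switching is detectable.
assert not verify_protocol("Primary endpoint: length of stay", digest)
```

The blockchain adds only the trusted timestamp; the tamper-evidence itself comes from the hash, which is why a single plagiarized description of the scheme could propagate so easily into patents and citations.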
A Google search for the original post yielded nine (9) hits, whereas a search for the plagiarized article returned 61 million results. It is a travesty that the original post by a graduate student sank without a ripple in the online world, while two practicing medical doctors with listed affiliations to Cambridge University plagiarized the student’s work to great acclaim.
Propagation of errors: patent filing
I am not an attorney, and I am not a patent attorney. I was distressed to note, via the NLM/NIH LinkOut service, that the title of the plagiarized article was associated with a patent application filed with WIPO (the World Intellectual Property Organization).
The Lens is a free, open patent and scholarly online search service; nothing wrong with that. There I found a patent application, “System For Rapid Tracking Of Genetic And Biomedical Information Using A Distributed Cryptographic Hash Ledger”, with a single scholarly work cited: the retracted, plagiarized article, attributed to the two physicians.
The patent was not filed by the physicians. I spent a bit of time scanning the four names associated with the patent, noting only that they appeared to be Swiss French, and associated with Zug, Switzerland. What are the consequences of filing a patent based on a retracted scholarly journal article? Has Ben Carlisle been adversely affected as a result of the initial unethical act? I don’t know. I do know that retractions and other follow-ups, such as announcements of conflict of interest, rarely receive the attention that the initial publication does. I read all about that at Retraction Watch.
The consequences of doing low-quality and unethical research appear to be minimal for those who do it. Gelman says that Carlisle is wryly amused by it all, but he has every right to be righteously aggrieved. If he were to be, would anyone listen? It doesn’t seem like it.
How can we do “real science” when much of science rests on an edifice of trusted foundations, i.e. axiomatic principles? There are two problems:

- making new inquiries when the foundations are flawed, and
- general trust in the entire edifice being undermined.
I feel glum now.