Peer Review and the Web of Trust
There’s quite a fracas brewing out there in the world of academic publication, as publishing moves towards open access for journals. Despite the publishing industry realizing that they too can buy congressmen, it seems increasingly like the academic community is deciding not so much whether to keep the closed-journal model, but what to do in a post-closed-journal world.
This left me thinking about Peer Review, and what it really accomplishes. In my eyes, peer review (at least in the Linguistic world) accomplishes three things:
1) It weeds out papers which are clearly unfit for publication (due to bad science, missing data, or overall crank-ish-ness).
2) It improves the quality of papers by forcing needed revisions before papers can see the light of day.
3) Most importantly, it establishes a web of trust, in this case, between the journal and the reader, that the contents represent good scholarly work.
The third point is, to my mind, most interesting. When I read a paper from the Journal of the Acoustical Society of America, I trust that it is reasonably likely to be describing sound (sorry, bad pun) research. I can assume that somebody with some expertise on the matter has read the paper, and that if it had major faults, it wouldn’t have gotten through the black-box review process. I then, as an academic, decide whether each individual journal is worthy of my trust. I may decide that although JASA is worthy of my trust, a trade journal for hearing aid companies may not necessarily be, and in doing so, I develop a web of trust.
Another prominent web of trust
This is somewhat analogous to the way that PGP’s Web of Trust is structured. Using PGP, let’s say I want to check whether a given cryptographically signed email really comes from John Q. Smith.
Well, if I know John personally and we have exchanged (and signed) each other’s PGP keys in person, I can just check to see if the key I have directly from him matches the key which signed the email. If it matches, no problems.
However, I haven’t met everybody I might want to receive ID-confirmed email from. So, the Web of Trust comes into play. Imagine instead that John Smith is a good friend of Jane Doe, who is a good friend of mine. John and Jane may have exchanged keys at some point, and in the process, Jane would have signed his key (a complex process which doesn’t merit full explanation here), asserting that that key really belongs to John. Jane and I, being friends, would have exchanged and signed keys as well.
When I get the email from “John”, my PGP software will look to see whether I’ve signed and trust John’s key. If not, it’ll see whether anybody I do trust has signed the key as actually belonging to John. In this case, because Jane says that it’s really him, and I trust Jane, I trust the key on the incoming email, and I can say (reasonably) that the email comes from who it says it does.
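The lookup described above can be sketched as a toy trust graph in Python. To be clear, this is not real OpenPGP (there are no actual keys or cryptographic signatures here); the names and one-hop trust rule are simplifying assumptions, modeling a single fully-trusted introducer:

```python
# Toy sketch of PGP-style trust resolution. Not real OpenPGP --
# keys are just strings, and "signing" is modeled as set membership.

# signatures[key] = set of keys that have vouched for (signed) that key
signatures = {
    "john_key": {"jane_key"},   # Jane has signed John's key
    "jane_key": {"my_key"},     # I have signed Jane's key in person
}

my_key = "my_key"

def is_trusted(key, my_key, signatures):
    """Trust a key if I signed it directly, or if one of its signers
    is a key I have signed directly (one introducer hop)."""
    signers = signatures.get(key, set())
    if my_key in signers:  # I vouched for this key myself
        return True
    # Otherwise, check whether any signer is someone I trust directly.
    return any(my_key in signatures.get(s, set()) for s in signers)

print(is_trusted("john_key", my_key, signatures))      # True: Jane vouches
print(is_trusted("stranger_key", my_key, signatures))  # False: nobody I trust
```

Real PGP implementations are more nuanced (marginal vs. full trust, multiple required introducers, revocation), but the core idea is exactly this graph walk.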
How do these ideas mix?
Right now, the journal system is oddly equivalent to the above web. I may have never met the author(s) of a given paper, and I have absolutely no idea whether their work merits discussion, examination, or citation. However, because JASA has, in effect, signed the work by publishing it, I choose to trust a given work as being of a better, citation-quality nature than the same paper floating around an author’s personal website. An author who publishes frequently in a journal I trust then earns trust for future publications.
Revocation of trust happens, too (see what happened with the (bogus) Wakefield Vaccine study and The Lancet), but by and large, academic journals serve as the foundation for the academic publishing Web of Trust.
That’s what people don’t want to lose
For many of us, raised in open-source culture and working on projects funded by government grants, it seems bizarre to consider signing one’s work over to a journal which will make large amounts of money by restricting access to our work, not a dime of which will ever reach us. So, the idea of open-access and the elimination of paywall-based journals is an attractive one.
However, simply cutting these journals out of the loop would, overnight, destroy the web of trust around which we have so far built our academic community. Without a replacement, we’re left only able to trust the work we’ve explicitly and carefully reviewed, or which comes from authors whose work we inherently trust.
The democratization of academic publishing isn’t just about open access or reducing journal bureaucracy. Instead, it also has to be based on the opening (and increased transparency) of the review process, a more efficient and open way of choosing which articles are worthy of note, citation, or derision.
A half-baked proposal
Imagine a system in which a paper is submitted to an online archive, and considered by anybody who cares to review it. If a paper is found to be sound by a given reader or reviewer, it can be signed (much like in the PGP sense above) by that person. Then, if I search for a paper, I can first find papers signed by people I know and trust. If I find none, I can start the more arduous process of fully examining papers which are signed by people I don’t trust, or which aren’t signed at all. Then, if I find a paper reliable enough, I sign it, and so the web expands.
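As a rough sketch of how such an archive might rank search results, here is a toy model in Python. Everything here is invented for illustration: the paper IDs, the signer names, and the idea of sorting trusted-signed papers ahead of the rest are all assumptions, not a spec:

```python
# Hypothetical sketch of the proposal: anyone can "sign" a paper in
# the archive, and a reader ranks results by whether any signer is in
# their personal trust set. Names and data are purely illustrative.

paper_signatures = {
    "vowel_formants_2024": {"jane_doe", "some_stranger"},
    "unsigned_preprint":   set(),  # nobody has vouched for this one yet
}

def rank_papers(papers, trusted):
    """Return paper IDs with trusted-signed papers first,
    ties broken alphabetically."""
    def score(item):
        paper_id, signers = item
        # 0 if someone I trust signed it, 1 otherwise
        return (0 if signers & trusted else 1, paper_id)
    return [paper_id for paper_id, _ in sorted(papers.items(), key=score)]

my_trusted_reviewers = {"jane_doe"}
print(rank_papers(paper_signatures, my_trusted_reviewers))
# ['vowel_formants_2024', 'unsigned_preprint']
```

A real system would need cryptographic signatures, revocation, and some notion of transitive trust (as in the PGP model), but even this crude ranking captures the core behavior: read what your web vouches for first, and fall back to unvetted work only when you must.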
This, unfortunately, has many downsides. It does away with anonymous peer review, allowing tensions and malice to build quite easily between reviewers and authors. This, though, may not be a terrible thing, as the most picky, unpleasant, or theoretically-encumbered reviewers would easily fall to the side.
It also doesn’t allow for revisions as easily as the journal model, and doesn’t provide a mechanism to drop the lowest quality work outright. That said, potential higher-profile signers could certainly request certain revisions before signing. This, in turn, could very easily lead to inequality among reviewers, with big names able to push for specific changes (to better support their own work, say) before signing.
Also, you would get people who sign for pay, for reciprocation, due to pressure from others, or who just don’t give a damn about the quality of the paper and sign for some other reason. These people, especially if prominent in the field, could very easily pull down the fabric of the system, and allow bad work through for their own theoretical, political or personal reasons. So, this system requires a degree of objectivity and sense of what’s best for the field which many humans may lack.
Finally, it requires more participation and thought about trust than most are willing to put in. You need to ask yourself uncomfortable questions about who you trust, whose papers really are well written, and how much you need to know about a person’s integrity and work before their research is beyond question.
Most of these, though, are failings already present in the existing system, merely masked by the journal process.
I’m not saying this is the way of the future, nor that it’s even a good idea, but I am saying that perhaps the academic community has a lot to learn from the world of cryptography, where trust is examined more closely and pondered more abstractly than it currently is in the world of academic and scientific publication. You’ll just have to trust me about that.