
20: A Reddit-based model of Peer Review

This is adapted from our recent paper in F1000 Research, entitled “A multi-disciplinary perspective on emergent and future innovations in peer review.” Due to its rather monstrous length, I’ll be posting chunks of the text here in sequence over the next few weeks to help disseminate it in more easily digestible bites. Enjoy!

This section outlines what a model of Reddit-style peer review could look like. Previous parts in this series:

  1. An Introduction
  2. An Early History
  3. The Modern Revolution
  4. Recent Studies
  5. Modern Role and Purpose
  6. Criticisms of the Conventional System
  7. Modern Trends and Traits
  8. Development of Open Peer Review
  9. Giving Credit to Referees
  10. Publishing Review Reports
  11. Anonymity Versus Identification
  12. Anonymity Versus Identification (II)
  13. Anonymity Versus Identification (III)
  14. Decoupling Peer Review from Publishing
  15. Preprints and Overlay Journals
  16. Two-stage peer review and Registered Reports
  17. Peer review by endorsement
  18. Limitations of decoupled Peer Review
  19. Potential future models of Peer Review

—————————————————————————————-

Reddit (reddit.com) is an open-source, community-based platform where users submit comments and original or linked content, organized into thematic lists known as subreddits. As Yarkoni (2012) noted, such thematic lists could be automatically generated for any peer review platform using keyword metadata drawn from sources like the National Library of Medicine’s Medical Subject Headings (MeSH). Members, or redditors, can upvote or downvote any submission based on quality and relevance, and publicly comment on all shared content. Individuals can subscribe to contribution lists, and articles can be organized by time (newest to oldest) or level of engagement. Quality control is provided by moderation through subreddit mods, who can filter and remove inappropriate comments and links. Each link and comment receives a score equal to its upvotes minus its downvotes, providing an overall ranking system. On Reddit, highly scoring submissions are relatively ephemeral: an automatic ranking decay shifts them further down lists as new content is added, typically within 24 hours of initial posting.
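
To make these mechanics concrete, here is a minimal Python sketch of a net score plus a time-decayed ranking of the kind described above. The logarithmic damping and 24-hour half-life are illustrative assumptions intended to mirror the ephemerality mentioned in the text, not Reddit’s actual ranking algorithm.

```python
from datetime import datetime, timezone
from math import log10

def net_score(upvotes: int, downvotes: int) -> int:
    """Net score shown beside a submission: upvotes minus downvotes."""
    return upvotes - downvotes

def hot_rank(upvotes: int, downvotes: int, posted_at: datetime,
             half_life_hours: float = 24.0) -> float:
    """Time-decayed rank: newer, higher-scoring posts float towards the top.

    Assumes posted_at is timezone-aware. The damping and decay constants
    are placeholders, not Reddit's real formula.
    """
    score = net_score(upvotes, downvotes)
    age_hours = (datetime.now(timezone.utc) - posted_at).total_seconds() / 3600
    magnitude = log10(max(abs(score), 1))   # dampen very large vote counts
    sign = (score > 0) - (score < 0)        # +1, 0, or -1
    return sign * magnitude - age_hours / half_life_hours
```

Sorting submissions by such a rank would reproduce the behaviour described above: even a highly upvoted item drifts down the list within a day or so as newer content arrives.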

3.1.1 Reddit as an existing “journal” of science. The subreddit for Science (reddit.com/r/science) is a highly-moderated discussion channel, curated by at least 600 professional researchers and with more than 15 million subscribers at the time of writing. The forum has even been described as “The world’s largest 2-way dialogue between scientists and the public” (Owens, 2014). Contributors here can add “flair” (a user-assigned tagging and filtering system) to their posts as a way of thematically organizing them by research discipline, analogous to the container function of a typical journal. Individuals can also have flair as a form of subject-specific credibility (i.e., a peer status) upon providing proof of education in their field. Public contributions from peers are subsequently stamped with a status and area of expertise, such as “Grad student|Earth Sciences.”
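
One way to picture the flair mechanism is as a small verified-credential record attached to a user account. The hypothetical data model below is only a sketch; the field names and rendering are assumptions, not r/science’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Flair:
    """Hypothetical record of verified, subject-specific credibility (a 'peer status')."""
    level: str       # e.g. "Grad Student" or "Professor"
    field: str       # e.g. "Earth Sciences"
    verified: bool   # True once proof of education has been checked by moderators

    def label(self) -> str:
        # Stamped next to every public contribution, e.g. "Grad Student|Earth Sciences"
        return f"{self.level}|{self.field}" if self.verified else "Unverified"
```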

Scientists also already engage with Reddit through science AMAs (Ask Me Anythings), which tend to be quite popular. However, the level of discourse in these sessions is generally not as deep as that expected of peer review, and is more akin to a form of science communication or public engagement with research. Even so, Reddit has the potential to drive enormous amounts of traffic to primary research; there is even a phenomenon known as the “Reddit hug of death”, whereby servers become overloaded and crash due to Reddit-based traffic. The /r/science subreddit is viewed as a venue for “scientists and lay audiences to openly discuss scientific ideas in a civilized and educational manner”, according to its organizer, Dr. Nathan Allen (Lee, 2015). As such, an additional appeal of this model is that it could increase the public level of scientific literacy and understanding.

3.1.2 Reddit-style peer evaluation. The essential part of any Reddit-style model with potential parallels to peer review is that links to scientific research can be shared, commented on, and ranked (upvoted or downvoted) by the community. All links or texts can be publicly discussed in terms of methods, context, and implications, similar to any scholarly post-publication commenting system. Such a process for peer review could essentially operate as an additional layer on top of a preprint archive or repository, much like a social version of an overlay journal. Ultimately, a public commenting system like this could achieve the same depth of peer evaluation as the formal process, but through crowd-sourcing. However, it is important to note here that this is a mode of instantaneous publication prior to peer review, with filtering through interaction occurring post-publication. Furthermore, comments can receive similar treatment to submitted content, in that they can be upvoted, downvoted, and further commented upon in a cascading process. An advantage of this is that multiple comment threads can form on a single post and viewers can track individual discussions. Here, the highest-ranked comments could simply be presented at the top of the thread, while the lowest-ranked remain at the bottom.
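
The cascading comment-on-comment structure described here is essentially a tree whose branches are sorted by net votes. The short sketch below shows one way this could be modelled; the class and sorting rule are illustrative, not any platform’s actual code.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Comment:
    """A single contribution in a cascading discussion thread."""
    author: str
    text: str
    upvotes: int = 0
    downvotes: int = 0
    replies: List["Comment"] = field(default_factory=list)

    @property
    def score(self) -> int:
        return self.upvotes - self.downvotes

def sort_thread(comments: List[Comment]) -> List[Comment]:
    """Recursively order a thread so the highest-ranked comments sit at the top."""
    for comment in comments:
        comment.replies = sort_thread(comment.replies)
    return sorted(comments, key=lambda c: c.score, reverse=True)
```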

In theory, a subreddit could be created for any sub-topic within research, and a simple nested hierarchical taxonomy could make this as precise or broad as warranted by individual communities. Reddit allows any user to create their own subreddit, pending certain status achievements through platform engagement. In addition, this could be moderated externally through ORCID, where a set number of published items in an ORCID profile is required for that individual to perform a peer review, or, in this case, to create a new subreddit. Connection to an academic profile such as ORCID further allows community validation, verification, and judgement of importance. For example, being able to see whether senior figures in a given field have read or upvoted certain threads can be highly influential in decisions to engage with that thread, and vice versa. A very similar process already occurs at the Self Journal of Science (sjscience.org/), where contributors have a choice of voting either “This article has reached scientific standards” or “This article still needs revisions”, with public disclosure of who has voted in either direction. Threaded commenting could also be implemented, as it is vital to the success of any collaborative filtering platform and provides a highly efficient corrective mechanism. Peer evaluation in this form emphasizes progress and research as a discourse, rather than as piecemeal publications or objects within a lengthier process. Such a system could be applied to other forms of scientific work, including code, data, and images, thereby allowing contributors to claim credit for their full range of research outputs. Comments could be signed by default, pseudonymous, or anonymized until a contributor chooses to reveal their identity. If required, anonymized comments could be filtered out automatically by users. A key to this could be peer identity verification, which can be done at the back-end via email or integrated via ORCID.
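
As a rough sketch of the ORCID-based gate suggested above, the function below counts the works listed on a public ORCID record and only permits subreddit creation above a threshold. The endpoint, the response shape, and the three-work threshold should all be read as assumptions rather than a definitive integration.

```python
import requests  # third-party HTTP client: pip install requests

ORCID_PUBLIC_API = "https://pub.orcid.org/v3.0"  # ORCID's public read API (assumed)

def published_work_count(orcid_id: str) -> int:
    """Count the works listed on a public ORCID record.

    Assumes the /works summary returns a JSON object whose "group" list
    contains one entry per distinct work; treat this as an assumption
    about the response shape.
    """
    response = requests.get(
        f"{ORCID_PUBLIC_API}/{orcid_id}/works",
        headers={"Accept": "application/json"},
        timeout=10,
    )
    response.raise_for_status()
    return len(response.json().get("group", []))

def may_create_subreddit(orcid_id: str, minimum_works: int = 3) -> bool:
    """Gate subreddit creation on a hypothetical minimum publication count."""
    return published_work_count(orcid_id) >= minimum_works
```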

3.1.3 Translating engagement into prestige. Reddit karma points are awarded for sharing links and comments, and having these upvoted or downvoted by other registered members. The simplest implementation of such a voting system for peer review would be through interaction with any article in the database with a single click. This form of field-specific social recommendation for content simultaneously creates both a filter and a structured feed, similar to Facebook and Google+, and can easily be automated. With this, each contribution receives a rating; these ratings accumulate into a peer-based reputation that could be translated into a quantified level of community-granted prestige. Ratings are transparent, and contributions and their ratings can be viewed on a public profile page. More sophisticated approaches could include graded ratings (e.g., five-point responses, like those used by Amazon) or separate rating dimensions that give peers an immediate snapshot of the strengths and weaknesses of each article. Such a system is already in place at ScienceOpen, where referees evaluate an article’s importance, validity, completeness, and comprehensibility using a five-star system. For any given set of articles retrieved from the database, a ranking algorithm could be used to dynamically order articles on the basis of a combination of quality (an article’s aggregate rating within the system, as at Stack Exchange), relevance (using a recommendation system akin to Amazon’s), and recency (newly added articles could receive a boost). By default, the same algorithm would be implemented for all peers, as on Reddit. The issue here is making any such karma points equivalent to the amount of effort required to obtain them, and also ensuring that they are valued by the broader research community and assessment bodies. This could be facilitated through a simple badge incentive system, such as that designed by the Center for Open Science for core open practices (cos.io/our-services/open-science-badges/).
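
The paragraph above effectively specifies a ranking function over retrieved articles. Below is a minimal sketch of such a function, combining an aggregate rating, a relevance score, and a recency boost; the weights and the 30-day decay constant are arbitrary placeholders, not a calibrated algorithm.

```python
from datetime import datetime, timezone
from math import exp

def article_rank(mean_rating: float,   # aggregate peer rating on a 1-5 star scale
                 relevance: float,     # 0-1 score from a recommender, assumed given
                 published_at: datetime,
                 w_quality: float = 0.5,
                 w_relevance: float = 0.3,
                 w_recency: float = 0.2) -> float:
    """Combine quality, relevance, and recency into a single ordering score.

    Assumes published_at is timezone-aware; weights and decay are placeholders.
    """
    age_days = (datetime.now(timezone.utc) - published_at).total_seconds() / 86400
    recency_boost = exp(-age_days / 30.0)       # newly added articles get a boost
    quality = (mean_rating - 1) / 4             # rescale 1-5 stars to 0-1
    return w_quality * quality + w_relevance * relevance + w_recency * recency_boost
```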

3.1.4 Can the wisdom of crowds work with peer review? One might consider a Reddit-style model as pitching quantity against quality. Typically, comments provided on Reddit are not at the same level of depth and rigor as those we would expect from traditional peer review; there is more to research evaluation than simply upvoting or downvoting. Furthermore, the range of expertise is highly variable due to the inclusion of specialists and non-specialists as equals (“peers”) within a single thread. However, there is no reason why a user prestige system akin to Reddit flair could not be used to differentiate varying levels of expertise. The primary advantage here is that the number of participants is uncapped, emphasizing the potential of Reddit to scale up participation in peer review. With a Reddit model, we must hold faith that sheer numbers will be sufficient to provide an optimal assessment of any given contribution and that any such assessment will ultimately provide a consensus of high-quality and reusable results. Social review of this sort must therefore consider at what point the review process is constrained in order to produce such a consensus, and whether that consensus is shaped by self-selective engagement rather than accuracy. Kelty et al. (2008) term this the “Principle of Multiple Magnifications”, which surmises that, in spite of self-selectivity, more reviewers and more data about them will always be better than fewer reviewers and less data. The additional challenge here, then, will be to capture and archive consensus points for external re-use. Journals such as F1000 Research already have such a tagging system in place, where reviewers can mark a submission as approved after successive peer review iterations.

“The rich get richer” is one potential pitfall of this style of system. Content from more prominent researchers may receive relatively more comments and ratings, and ultimately hype, as with any hierarchical system, including traditional scholarly publishing. Research from unknown authors may go relatively under-noticed and under-used, but will at least have been made public. One solution to this is having a core community of editors, modelled on the r/science subreddit’s community of moderators. These editors could be empowered to invite peers to contribute to discussion threads, essentially wielding the same executive power as a journal editor combined with that of a forum moderator. Recent evidence suggests that such intelligent crowd reviewing has the potential to be an efficient and high-quality process (List, 2017).

Tennant JP, Dugan JM, Graziotin D et al. A multi-disciplinary perspective on emergent and future innovations in peer review [version 3; referees: 2 approved]. F1000Research 2017, 6:1151 (doi: 10.12688/f1000research.12037.3)
