
24: A Wikipedia-style model of peer review

This is adapted from our recent paper in F1000 Research, entitled “A multi-disciplinary perspective on emergent and future innovations in peer review.” Due to its rather monstrous length, I’ll be posting chunks of the text here in sequence over the next few weeks/months to help disseminate it in more easily digestible bites. Enjoy!

This section outlines what a model of Wikipedia-style peer review could look like. Previous parts in this series:

  1. An Introduction
  2. An Early History
  3. The Modern Revolution
  4. Recent Studies
  5. Modern Role and Purpose
  6. Criticisms of the Conventional System
  7. Modern Trends and Traits
  8. Development of Open Peer Review
  9. Giving Credit to Referees
  10. Publishing Review Reports
  11. Anonymity Versus Identification
  12. Anonymity Versus Identification (II)
  13. Anonymity Versus Identification (III)
  14. Decoupling Peer Review from Publishing
  15. Preprints and Overlay Journals
  16. Two-stage peer review and Registered Reports
  17. Peer review by endorsement
  18. Limitations of decoupled Peer Review
  19. Potential future models of Peer Review
  20. A Reddit-based model
  21. An Amazon-based model
  22. A Stack Exchange/Overflow-style model
  23. A GitHub-style model

——————————————————————————

Wikipedia is the freely available, multi-lingual, expandable encyclopedia of human knowledge (wikipedia.org/). Like Stack Exchange, it is a collaborative authoring and review system in which the contributing community is essentially unlimited in scope. It has become a strongly influential tool both in shaping the way science is performed and in improving equitable access to scientific information, owing to the ease with which it provides information. Under a constant and instantaneous process of reworking and updating, new articles in hundreds of languages are added on a daily basis. Wikipedia operates through a system of collective intelligence based on linking knowledge workers through social media (Kubátová et al., 2012). Contributors to Wikipedia are largely anonymous volunteers, who are encouraged to participate mostly by the principles guiding the platform (e.g., altruistic knowledge generation), and therefore often for reasons of personal satisfaction. Edits occur as cumulative and iterative improvements, and because of this collaborative model, explicitly defining page authorship becomes a complex task. Moderation and quality control are provided by a community of experienced editors and by software-facilitated removal of mistakes, which can also help to resolve conflicts caused by concurrent editing by multiple authors (wikipedia.org/wiki/Help:Edit_conflict).

Platforms already exist that enable multiple authors to collaborate on a single document in real time, including Google Docs, Overleaf, and Authorea, which highlights the potential for this model to be extended into a wiki-style of peer review. PLOS Computational Biology is currently leading an experiment with Topic Pages (collections.plos.org/topic-pages): published papers that are subsequently added as new pages to Wikipedia and then treated as living documents as they are enhanced by the community (Wodak et al., 2012). Communities of moderators on Wikipedia functionally exercise editorial power over content, and in principle anyone can participate, although experience with wiki-style operations is clearly beneficial. Other non-editorial roles, such as administrators and stewards, are filled through conventional elections that variably account for the candidates’ standing reputation. The apparent “free for all” of Wikipedia is in fact a sophisticated system of governance, based on implicitly shared values about what is useful for consumers, transformed into operational rules that moderate the quality of content (Kelty et al., 2008).
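
To make concrete how diffuse authorship becomes under this model, here is a minimal sketch (Python, assuming the third-party `requests` library) that pulls the recent revision history of a single article from the public MediaWiki API and tallies edits per contributor. The article title and the revision limit are illustrative choices only.

```python
# Minimal sketch: who has edited a Wikipedia article recently?
# Assumes the `requests` package; the article title below is only an example.
import requests
from collections import Counter

API_URL = "https://en.wikipedia.org/w/api.php"

def recent_contributors(title, limit=50):
    """Return a Counter of {username: edit count} over the last `limit` revisions."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "user|timestamp|comment",
        "rvlimit": limit,
        "format": "json",
    }
    response = requests.get(API_URL, params=params, timeout=30)
    response.raise_for_status()
    pages = response.json()["query"]["pages"]
    revisions = next(iter(pages.values())).get("revisions", [])
    return Counter(rev.get("user", "(hidden)") for rev in revisions)

if __name__ == "__main__":
    for user, edits in recent_contributors("Peer review").most_common(10):
        print(f"{edits:3d} edits  {user}")
```

Even in such a simple listing, “authorship” of a page appears as a distribution over many accounts rather than a fixed byline.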

“Peers” and “reviews” in a wiki-world.

Wikipedia already has its own mode of peer review, which anyone can request as a way to receive ideas on how to improve articles that are already considered to be “decent” (wikipedia.org/wiki/Wikipedia:Peer_review/guidelines). It can also be used to nominate good articles as candidates for featured-article status. Featured articles are considered to be the best that Wikipedia has to offer, as determined by its editors; only ∼0.1% of articles are selected as featured. Users submitting a new request are encouraged to review an article from those already listed, and to encourage their own reviewers by replying promptly and appreciatively to comments. In contrast to the conventional peer review process, where experts review the work of other experts, the majority of volunteers here, like most Wikipedia editors, lack formal expertise in the subject at hand (Xiao & Askin, 2012). This is considered to be a positive thing within the Wikipedia community, as it can help make technically worded articles more accessible to non-specialist readers, demonstrating the power of the process in a translational role for scholarly communication (Thompson & Hanley, 2017).

When applied to scholarly topics, this process clearly lacks the “peer” aspect of scholarly peer review, which can potentially lead to the propagation of factual errors (e.g., Hasty et al., 2014). This creates a general perception of low quality in the research community, in spite of difficulties in actually measuring this (Hu et al., 2007). However, much of this perception can most likely be explained by a lack of familiarity with the model, and we might expect comfort to increase and attitudes to change with effective training and communication, and with increased engagement in and understanding of the process (Xiao & Askin, 2014). If seeking expert input, users can invite editors from a subject-specific volunteers list or notify relevant WikiProjects. Furthermore, most Wikipedia articles never “pass” a review, although some formal reviews do take place and can be indicated (wikipedia.org/wiki/Category:Externally_peer_reviewed_articles). As such, although this is part of the process of conventional validation, such a system has little actual value on Wikipedia due to its dynamic nature. Indeed, wiki-communities appear to hold values distinct from those of academic communities, being based more on inclusive community participation and mediation than on trust, exclusivity, and identification (Wang & Wei, 2011). Verifiability remains a key element of the wiki-model, and has strong parallels with scholarly communication in fulfilling the dual roles of trust and expertise (wikipedia.org/wiki/Wikipedia:Verifiability). The process is therefore perhaps best viewed as one of “peer production”, but where the threshold for attaining the status of peer is lower than that of an accredited expert. This gives Wikipedia content a different community standing, with value being conveyed through contemporariness, mediation of debate, and transparency of information, rather than through any perception of authority as with traditional scholarly works (Black, 2008). Wikipedia therefore has a unique role in digital validation, having been described as “not the bottom layer of authority, nor the top, but in fact the highest layer without formal vetting” (chronicle.com/article/Wikipedia-Comes-of-Age/125899). Such a wiki-style process could feasibly be combined with trust metrics for verification, developed in sociology and psychology to describe the relative standing of groups or individuals in virtual communities (wikipedia.org/wiki/Trust_metric).
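
The sources above do not prescribe any particular trust metric, so the following is only a toy sketch of the general idea: a contributor’s standing in a virtual community can be summarised numerically from observable behaviour such as edits that survive, edits that are reverted, and endorsements from established peers. The weights and the saturation constant are invented purely for illustration.

```python
# Toy contributor trust score; NOT a metric taken from the literature cited above.
from dataclasses import dataclass

@dataclass
class Contributor:
    surviving_edits: int   # edits still present in the current version
    reverted_edits: int    # edits undone by other contributors
    endorsements: int      # e.g., "thanks" received from established members

def trust_score(c: Contributor) -> float:
    """Score in [0, 1]: rewards surviving work and endorsements, penalises reverts."""
    total_edits = c.surviving_edits + c.reverted_edits
    if total_edits == 0:
        return 0.0
    survival_rate = c.surviving_edits / total_edits
    # Endorsements saturate, so a handful of established peers cannot be drowned out.
    endorsement_weight = c.endorsements / (c.endorsements + 10)
    return 0.7 * survival_rate + 0.3 * endorsement_weight

print(trust_score(Contributor(surviving_edits=120, reverted_edits=5, endorsements=8)))  # ≈ 0.8
```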

Democratization of peer review.

The advantage of Wikipedia over traditional review-then-publish processes comes from the fact that articles are enhanced consistently as new content is integrated, statements are reworded, and factual errors are corrected, in a form of iterative bootstrapping. Therefore, while one might consider a Wikipedia page to be of insufficient quality relative to a peer reviewed article at a given moment in time, this does not preclude it from meeting that quality threshold in the future. Wikipedia might thus be viewed as an information trade-off between accuracy and scale, but with a gap that is consistently being closed as the overall quality generally improves. Another major statement that a Wikipedia-style of peer review makes is that, rather than being exclusive, it is an inclusive process that anyone is allowed to participate in, and the barriers to entry are very low: anyone can potentially be granted peer status and take part in the debate and vetting of knowledge. This model of engagement also benefits from the “many eyes” hypothesis: if something is visible to multiple people then, collectively, they are more likely to detect any errors in it, and tasks become more spread out as the size of the group increases. In Wikipedia, and to a larger extent Wikidata, automation or semi-automation through bots helps to maintain and update information on a large scale. For example, Wikidata is used as a centralized microbial genomics database (Putman et al., 2016), which uses bots to aggregate information from structured data sources. As such, Wikipedia represents a fairly extreme alternative to conventional peer review: from a process where the barriers to entry are traditionally very high (based on expertise) to one where the pool of potential peers is relatively large (Kelty et al., 2008). This represents an enormous shift from the generally technocratic process of conventional peer review to one that is inherently more democratic. However, while the number of contributors is very large (more than 30 million), one third of all edits are made by only about 10,000 people, just 0.03% of contributors (wikipedia.org/wiki/Wikipedia:List_of_Wikipedians_by_number_of_edits). This is broadly similar to what is observed in current academic peer review systems, where the majority of the work is performed by a minority of the participants (Fox et al., 2017; Gropp et al., 2017; Kovanis et al., 2016).
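
The Wikidata example can be made concrete with a short query against its public SPARQL endpoint (query.wikidata.org/sparql). The sketch below (Python, assuming the `requests` library) retrieves a handful of gene items and the taxa they are found in, roughly the kind of structured record that the bots described by Putman et al. maintain; the specific property and item identifiers are stated as assumptions and worth verifying against Wikidata.

```python
# Sketch: query Wikidata's public SPARQL endpoint for gene items.
# Identifiers are assumptions: P31 "instance of", Q7187 "gene", P703 "found in taxon".
import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?gene ?geneLabel ?taxonLabel WHERE {
  ?gene wdt:P31 wd:Q7187 ;
        wdt:P703 ?taxon .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

response = requests.get(
    SPARQL_ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "wiki-review-demo/0.1 (illustrative example)"},
    timeout=60,
)
response.raise_for_status()
for row in response.json()["results"]["bindings"]:
    print(row["geneLabel"]["value"], "-", row["taxonLabel"]["value"])
```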

One major implication of using a wiki-style model is the difference between traditional outputs, which are static and non-editable, and an output that is continuously evolving. As the wiki-model brings together information from different sources into one place, it has the potential to reduce redundancy compared to traditional research articles, in which duplicate information is often rehashed across many different locations. By focussing new contributions only on those things that need to be written or changed to reflect new insights, it also has the potential to decrease the systemic burden of peer review by reducing the amount and granularity of content in need of review. This burden is further alleviated by distributing the endeavor more efficiently among members of the wider community, a high-risk, high-gain approach to generating academic capital (Black, 2008). Reviews can become more efficient, akin to those in software development, where they are focussed on units of individual edits, similar to the “commit” function in GitHub, where suggested changes are recorded to content repositories. In circumstances where the granularity of the content to be added or changed does not fit the wiki page in question, the material can be transferred to other pages, while the “original” page can still act as an information hub for the topic by linking to those other pages.
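
As a small illustration of review at the granularity of an individual edit, the sketch below (Python standard library only; the revision labels and text are invented) renders just the difference between two versions of a passage, which is the unit a reviewer would comment on rather than the whole article.

```python
# Sketch of edit-granularity review: show only what changed between two revisions.
import difflib

old_version = """Peer review is the evaluation of work by people with similar competences.
It helps maintain quality standards.
""".splitlines(keepends=True)

new_version = """Peer review is the evaluation of work by one or more people with similar competences.
It helps maintain quality standards and lends credibility.
""".splitlines(keepends=True)

# Revision labels are invented placeholders.
for line in difflib.unified_diff(old_version, new_version,
                                 fromfile="revision_1041", tofile="revision_1042"):
    print(line, end="")
```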

A possible risk with this approach is the creation of a highly conservative network of norms arising from the governance structure, which could end up being even more bureaucratic and create community silos rather than coherence (Heaberlin & DeDeo, 2016). To date, attempts at implementing a Wikipedia-like editing strategy for journals have been largely unsuccessful (e.g., at Nature (Zamiska, 2006)). There are intrinsic differences in the authority models used in Wikipedia communities, where the validity of the end result derives from verifiability rather than from the personal authority of authors and reviewers, that would need to be aligned with the norms and expectations of research communities. In the latter, author statements and peer reviews are considered valid because of the personal, identifiable status and reputation of authors, reviewers, and editors, which could feasibly be combined with Wikipedia review models into a single solution. One example where this is beginning to happen already is the WikiJournal User Group, a publishing group of scholarly journals that apply academic peer review to their content (meta.wikimedia.org/wiki/WikiJournal_User_Group). However, a more rigorous editorial review process is the reason why the original form of Wikipedia, known as Nupedia, ultimately failed (Sanger, 2005). Future developments of any Wikipedia-like peer review tool could expect strong resistance from academic institutions, due to potential disruption to assessment criteria, funding assignment, and intellectual property, as well as from commercial publishers, since academics would be releasing their research to the public for free rather than to them.

Reference

Tennant JP, Dugan JM, Graziotin D et al. A multi-disciplinary perspective on emergent and future innovations in peer review [version 3; referees: 2 approved]. F1000Research 2017, 6:1151 (doi: 10.12688/f1000research.12037.3)

3 thoughts on “24: A Wikipedia-style model of peer review”

  1. What is called peer review on Wikipedia is more a matter of calling on outsiders for ideas on how to improve an article. I only tried once to get a “peer review” and got one review after several months. It has little to do with peer review as we practise it in science.

    The peer review that makes Wikipedia articles high quality is inherent in the system where everyone can make and suggest edits and talk about necessary changes in case there are disagreements.

    Wikipedia explicitly forbids original research. It is only interested in settled knowledge. It also discourages citing the primary literature, as we would do, and prefers secondary literature. Wikipedia articles are only written for topics with sufficient notability. This is to limit the number of articles and increase the number of people reading, editing, and debating problems with the articles that are published.

    To make this work for original research, by scholars who need credit (in the current fake competitive system), about topics where often only a few people in the world can participate, will be a challenge and likely such a large change that it will be hard to recognize the origins.

    1. Thanks for the comment, Victor! Yes, you’re right, it’s more a form of peer-to-peer review, but ultimately the key goal is the same (content improvement). I think you’re only half right with the original research thing though, as now there are Wiki-journals! And I don’t think it forbids citing the primary literature, and indeed there are efforts to increase the coverage of citations here, and particularly for OA articles. You can see that displayed now on pages, I think.

      1. Wikipedia *discourages* citing primary literature. If there is secondary literature, a text which interprets the primary literature, that takes precedence. Just wanted to say the culture, system and aims are quite different.

        Wikipedia also does not reward expertise. So a scientist with citations to the scientific literature (primary literature) can be overruled in the discussions on the talk page by amateurs providing links to tabloid newspapers (secondary literature). Climate “sceptics” hate the Wikipedia pages on climate because Wikipedia will not publish their nonsense as science, but climate scientists can also be frustrated by the system.

        Wiki Journals are different. They have authors, original research and a frozen version, which is peer reviewed.
