
25: A Hypothesis-style annotation model of peer review

This is adapted from our recent paper in F1000 Research, entitled “A multi-disciplinary perspective on emergent and future innovations in peer review.” Due to its rather monstrous length, I’ll be posting chunks of the text here in sequence over the next few weeks/months to help disseminate it in more easily digestible bites. Enjoy!

This section outlines what a Hypothesis-style peer review system based on annotation could look like. Previous parts in this series:

  1. An Introduction
  2. An Early History
  3. The Modern Revolution
  4. Recent Studies
  5. Modern Role and Purpose
  6. Criticisms of the Conventional System
  7. Modern Trends and Traits
  8. Development of Open Peer Review
  9. Giving Credit to Referees
  10. Publishing Review Reports
  11. Anonymity Versus Identification
  12. Anonymity Versus Identification (II)
  13. Anonymity Versus Identification (III)
  14. Decoupling Peer Review from Publishing
  15. Preprints and Overlay Journals
  16. Two-stage peer review and Registered Reports
  17. Peer review by endorsement
  18. Limitations of decoupled Peer Review
  19. Potential future models of Peer Review
  20. A Reddit-based model
  21. An Amazon-based model
  22. A Stack Exchange/Overflow-style model
  23. A GitHub-style model
  24. A Wikipedia-style model

——————————————————————————

Hypothesis (web.hypothes.is) is a lightweight, portable Web annotation tool that operates across publishing platforms (Perkel, 2015), ambitiously described as a “peer review layer for the entire Internet” (Farley, 2011). Like other annotation services, such as PubPeer and PaperHive, it relies on pre-existing published content to function. Annotation is a process of enriching research objects through the addition of knowledge; it also provides an interactive educational opportunity by raising questions and gathering the perspectives of multiple peers in a single venue, offering a dual functionality for collaborative reading and writing. Web annotation services like Hypothesis allow annotations (such as comments or peer reviews) to live alongside the content yet separate from it, allowing communities to form and spread across the internet and across content types, such as HTML, PDF, EPUB, or other formats (Whaley, 2017). Examples of such use in scholarly research already exist in post-publication peer review (e.g., Mietchen (2017)). Further, in February 2017, annotation became a Web standard through the W3C’s Web Annotation Working Group (W3C, 2017). Under this W3C model of Web annotation, annotations belong to and are controlled by the user rather than any individual publisher or content host. Users install a bookmarklet or browser extension to annotate any webpage they wish, forming a community of Web citizens.
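To make the standard concrete, here is a minimal sketch of what a single annotation looks like under the W3C Web Annotation Data Model, written as a Python dictionary mirroring its JSON-LD serialization. The creator, target URL, quoted text, and comment are invented for illustration; only the structure follows the spec.

```python
# A minimal W3C Web Annotation (https://www.w3.org/TR/annotation-model/),
# expressed as a Python dict mirroring its JSON-LD serialization.
# The creator, target URL, quoted text, and comment are invented examples.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    # The annotation belongs to the user, not to the publisher or content host.
    "creator": "https://example.org/users/reviewer42",
    "created": "2017-02-23T12:00:00Z",
    # The body carries the reviewer's contribution: a comment, question, or review.
    "body": {
        "type": "TextualBody",
        "value": "This claim needs a citation to the primary data.",
        "format": "text/plain",
    },
    # The target anchors the annotation to a precise span of the document,
    # so the comment lives inline rather than at the end of the article.
    "target": {
        "source": "https://example.org/articles/some-preprint",
        "selector": {
            "type": "TextQuoteSelector",
            "exact": "our results demonstrate a significant effect",
            "prefix": "Taken together, ",
            "suffix": " across all conditions.",
        },
    },
}
```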

Hypothesis permits the creation of public, group-private, and individual private annotations, and is therefore compatible with a range of open and closed peer review models. Web annotation services not only extend peer review from academic and scholarly content to the whole Web, but open up the ability to annotate to anyone with a Web browser. While the platform focuses on communities within publishing, journalism, and academia, Hypothesis offers a new way to enrich, fact-check, and collaborate on online content. Unlike Wikipedia, the original content never changes; instead, the annotations are viewed as an overlay service on top of static content. This also means that annotations can be made at any time during the publishing process, including at the preprint stage. Digital Object Identifiers (DOIs) are used to federate or compile annotations for scholarly works. Reviewers often provide privately annotated versions of submitted manuscripts during conventional peer review, and Web annotation is part of the digitization of this process, while also decoupling it from journal hosts. A further benefit of Web annotations is that they are precise, since they can be applied inline rather than at the end of an article, as is the case with formal commenting.
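To illustrate how such an overlay can be compiled for a single article, the sketch below queries the Hypothesis public search API for public annotations on a given URL. This is a minimal sketch, assuming the search endpoint at api.hypothes.is; the article URL is a placeholder, and in practice one would query each URL or DOI variant under which a work is hosted and merge the results.

```python
import requests

# A minimal sketch: compile the public annotation "overlay" for one article
# via the Hypothesis search API (https://api.hypothes.is/api/search).
API = "https://api.hypothes.is/api/search"

def public_annotations(url, limit=50):
    """Fetch up to `limit` public annotations anchored to the given URL."""
    resp = requests.get(API, params={"uri": url, "limit": limit})
    resp.raise_for_status()
    return resp.json().get("rows", [])

# Placeholder article URL, invented for illustration.
for note in public_annotations("https://example.org/articles/some-preprint"):
    print(note["user"], "->", note.get("text", "")[:80])
```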

Annotations have the potential to enable new kinds of workflows where editors, authors, and reviewers all participate in conversations focused on research manuscripts or other digital objects, in either a closed or a public environment (Vitolo et al., 2015). At present, the activity performed through Hypothesis and other Web annotation services is poorly recognized in scholarly communities, although such activities can be tied to ORCID. However, there is definite value in services such as PubPeer, an online community mostly used for identifying cases of academic misconduct and fraud, perhaps best known for its user-led post-publication critique of a Nature paper on STAP (Stimulus-Triggered Acquisition of Pluripotency) cells. This ultimately prompted the formal retraction of the paper, demonstrating that post-publication annotation and peer review, as a form of self-correction and fraud detection, can outperform the conventional pre-publication process. PubPeer has also been leveraged for mass-reporting post-publication checks of the soundness of statistical analyses: in one large-scale analysis, a tool called statcheck (statcheck.io) was used to post 50,000 annotations on the psychological literature (Singh Chawla, 2016), as a form of large-scale public audit of published research.
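statcheck works by parsing reported test statistics from article text and recomputing the corresponding p-values to flag inconsistencies. Below is a toy sketch of that idea for two-tailed t-tests; it is not statcheck itself, and the example sentence and tolerance are invented for illustration.

```python
import re
from scipy import stats

# Toy statcheck-style consistency check: parse a reported t-test, recompute
# the two-tailed p-value from the statistic and degrees of freedom, and flag
# any report whose stated p disagrees with the recomputed one.
T_TEST = re.compile(r"t\((\d+)\)\s*=\s*(-?\d*\.?\d+),\s*p\s*[=<]\s*(\d*\.?\d+)")

def check(text, tolerance=0.005):
    for df, t_val, p_reported in T_TEST.findall(text):
        p_computed = 2 * stats.t.sf(abs(float(t_val)), int(df))
        consistent = abs(p_computed - float(p_reported)) <= tolerance
        yield df, t_val, p_reported, round(p_computed, 4), consistent

# Invented example: the reported p looks like a typo.
sentence = "The groups differed significantly, t(28) = 2.20, p = .36."
for result in check(sentence):
    print(result)  # flags the mismatch: recomputed p is about .036, not .36
```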

Tennant JP, Dugan JM, Graziotin D et al. A multi-disciplinary perspective on emergent and future innovations in peer review [version 3; referees: 2 approved]. F1000Research 2017, 6:1151 (doi: 10.12688/f1000research.12037.3)
