This is adapted from our recent paper in F1000 Research, entitled “A multi-disciplinary perspective on emergent and future innovations in peer review.” Due to its rather monstrous length, I’ll be posting chunks of the text here in sequence over the next few weeks to help disseminate it in more easily digestible bites. Enjoy!
This section outlines what a model of GitHub-style peer review could look like. Previous parts in this series:
- An Introduction
- An Early History
- The Modern Revolution
- Recent Studies
- Modern Role and Purpose
- Criticisms of the Conventional System
- Modern Trends and Traits
- Development of Open Peer Review
- Giving Credit to Referees
- Publishing Review Reports
- Anonymity Versus Identification
- Anonymity Versus Identification (II)
- Anonymity Versus Identification (III)
- Decoupling Peer Review from Publishing
- Preprints and Overlay Journals
- Two-Stage Peer Review and Registered Reports
- Peer Review by Endorsement
- Limitations of Decoupled Peer Review
- Potential Future Models of Peer Review
- A Reddit-Based Model
- An Amazon-Based Model
- A Stack Exchange/Overflow-Style Model
Git is a free and open-source distributed version control system developed by the Linux community in 2005 (git-scm.com/). GitHub, launched in 2008, is a Web-based Git service that has become the de facto social coding platform for collaborative, open source development and code sharing (Kosner, 2012; Thung et al., 2013) (github.com/). It offers many potentially desirable features that might be transferable to a system of peer review (von Muhlen, 2011), such as its openness, its version control, project management, and collaboration functionality, and its system of accreditation and attribution for contributions. Although GitHub can host not just code but also executable papers that knit together text, data, and analyses into a living document, its true power appears to be only infrequently acknowledged by academic researchers (Priem, 2013).
Social functions of GitHub.
Software review is an important part of software development, particularly for collaborative efforts. Contributions should be reviewed before they are merged into a code base, and GitHub provides this functionality. In addition, GitHub offers the ability to discuss specific issues: multiple people can contribute to a discussion, and discussions can refer to code segments or code changes and vice versa (note that GitHub can also be used for non-code content). GitHub also includes a variety of notification options for both users and project repositories. Users can watch repositories or files of interest and be notified of any new issues or commits (updates), and someone who has discussed an issue can be notified of any new discussion of that same issue. Issues can also be tagged (labelled so that multiple issues with the same tag can be grouped) and assigned to one or more participants, who are then responsible for that issue. GitHub also supports checklists, sets of items with a binary state, which can be used to implement and store the status of a set of actions. Users can form organizations as a way of grouping contributors together and managing access to different repositories. All contributions are made public, allowing users to accrue merit.
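To make these mechanics concrete, the features described above can be modelled as a minimal sketch in Python. This is purely illustrative (the class and field names are invented, not GitHub's API): an issue carries labels for grouping, assignees who take responsibility, and a checklist of binary-state action items.

```python
from dataclasses import dataclass, field

# Illustrative model of the issue features described above: labels group
# related issues, assignees take responsibility for them, and a checklist
# tracks a set of binary-state action items.

@dataclass
class Issue:
    title: str
    labels: set = field(default_factory=set)
    assignees: set = field(default_factory=set)
    checklist: dict = field(default_factory=dict)  # item -> done?

    def check(self, item):
        """Tick one checklist item off."""
        self.checklist[item] = True

    def all_done(self):
        """True only when every checklist item has been ticked."""
        return all(self.checklist.values())

issue = Issue("Clarify methods section")
issue.labels.add("review")
issue.assignees.add("reviewer-1")
issue.checklist = {"code runs": False, "data archived": False}
issue.check("code runs")
print(issue.all_done())  # False until every item is ticked
```

The binary checklist state is what makes review progress auditable: the issue is resolvable exactly when every item is ticked.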
Prestige at GitHub can be further measured quantitatively as a social product through the star-rating system, which is derived from the number of followers or watchers and the number of times a repository has been forked (i.e., copied) or commented on. For scholarly research, this could ultimately shift the power to decide what gets viewed and re-used away from editors, journals, or publishers and toward individual researchers. This opens a potential new mode of prestige, conferred through how work is engaged with and digested by the wider community rather than by the packaging in which it is contained (analogous to the prestige often associated with journal brands).
Given these properties, it is clear that GitHub could be used to implement some style of peer evaluation, and that it is well-suited to fine-grained iteration between reviewers, editors, and authors (Ghosh et al., 2012), given that all parties are identified. Making peer review a social process by distributing reviews among numerous peers divides the burden and allows individuals to focus on their particular areas of expertise. Peer review would operate more like a social network, with specific tasks (or repositories) being developed, distributed, and promoted through GitHub. Because all code, data, and other content are supplied, peers would be able to assess methods and results comprehensively, which in turn increases rigor, transparency, and replicability. Reviewers would also be able to claim credit and be acknowledged for their tracked contributions, and thereby quantify their impact on a project as a source of individual prestige. This in turn facilitates assessment of the quality of reviews and reviewers. Evaluation thus becomes an interactive and dynamic process, with version control facilitating all of this in a post-publication environment (Ghosh et al., 2012). The risk of proliferating non-significant work here is minimal, as projects that are not deemed interesting or of sufficient quality simply receive little attention in the form of follows, contributions, and re-use.
Current use of GitHub for peer review.
Two example uses of GitHub for peer review already exist: The Journal of Open Source Software (JOSS; joss.theoj.org), created to give software developers a lightweight mechanism to quickly supplement their code with metadata and a descriptive paper and then submit this package for review and publication, and ReScience (rescience.github.io), created to publish replication efforts in computational science.
The JOSS submission portal converts a submission into a new GitHub issue of type “pre-review” in the JOSS-review repository (github.com/openjournals/joss-reviews). The editor-in-chief checks a submission, and if deemed suitable for review, assigns it to a topic editor who in turn assigns it to one or more reviewers. The topic editor then issues a command that creates a new issue of type “review”, with a check-list of required elements for the review. Each reviewer performs their review by checking off elements of the review issue with which they are satisfied. When they feel the submitter needs to make changes to make an element of the submission acceptable, they can either add a new comment in the review issue, which the submitter will see immediately, or they can create a new issue in the repository where the submitted software and paper exist—which could also be on GitHub, but is not required to be—and reference said issue in the review. In either case, the submitter is automatically and immediately notified of the issue, prompting them to address the particular concern raised. This process can iterate repeatedly, as the goal of JOSS is not to reject submissions but to work with submitters until their submissions are deemed acceptable. If there is a dispute, the topic editor (as well as the main editor, other topic editors, and anyone else who chooses to follow the issue) can weigh in. At the end of this process, when all items in the review check-list are resolved, the submission is accepted by the editor and the review issue is closed. However, it is still available and is linked from the accepted (and now published) submission. A good future option for this style of model could be to develop host-neutral standards using Git for peer review. For example, this could be applied by simply using a prescribed directory structure, such as: manuscript_version_1/peer_reviews, with open commenting via the issues function.
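The host-neutral convention proposed above could be as simple as a script that scaffolds a per-version review directory. The sketch below assumes the prescribed layout from the text (manuscript_version_1/peer_reviews); the file names inside it are invented for illustration.

```python
import pathlib
import tempfile

# Sketch of the proposed host-neutral layout: each manuscript version
# carries its own peer_reviews directory, so reviews travel with the
# Git history rather than depending on any single hosting platform.

def scaffold_review_dir(root, version):
    base = pathlib.Path(root) / f"manuscript_version_{version}"
    reviews = base / "peer_reviews"
    reviews.mkdir(parents=True, exist_ok=True)
    (base / "manuscript.md").touch()  # the submission itself (invented name)
    (reviews / "README.md").write_text(
        f"Open reviews for version {version}; comment via the issues function.\n"
    )
    return reviews

root = tempfile.mkdtemp()
path = scaffold_review_dir(root, 1)
print(path.name)  # peer_reviews
```

Because the layout is just files in a Git repository, the same structure would work on GitHub, GitLab, or a self-hosted Git server alike.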
While JOSS uses GitHub's issue mechanism, ReScience uses GitHub's pull request mechanism: each submission is a pull request that is publicly reviewed and tested in order to guarantee that any researcher can re-use it. At least two reviewers evaluate and test the code and the accompanying material of a submission, continuously interacting with the authors through the pull request discussion section. If both reviewers can run the code and achieve the same results as those submitted by the author, the submission is accepted. If either reviewer fails to replicate the results before the deadline, the submission is rejected and the authors are encouraged to resubmit an improved version later.
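The ReScience decision rule just described is simple enough to encode directly. The function below is a hedged sketch of that rule, not ReScience's actual tooling; the names and data shapes are invented: a submission is accepted only if every reviewer reproduces the author's results before the deadline.

```python
from datetime import date

# Illustrative encoding of the decision rule described above: accept only
# if every reviewer replicated the results on or before the deadline.

def rescience_decision(reviews, deadline):
    """reviews: list of (replicated: bool, finished_on: date) per reviewer."""
    if reviews and all(ok and finished <= deadline for ok, finished in reviews):
        return "accepted"
    return "rejected; authors encouraged to resubmit"

deadline = date(2017, 6, 30)
reviews = [(True, date(2017, 6, 10)), (True, date(2017, 6, 25))]
print(rescience_decision(reviews, deadline))  # accepted
```

Note that a single failed or late replication is enough to reject, mirroring the "if either reviewer fails" clause in the text.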
Tennant JP, Dugan JM, Graziotin D et al. A multi-disciplinary perspective on emergent and future innovations in peer review [version 3; referees: 2 approved]. F1000Research 2017, 6:1151 (doi: 10.12688/f1000research.12037.3)