Testing Open Peer Review for Conferences

Authors

Oliver Zendel, Matthias Schörghuber, Michela Vignoli

DOI:

https://doi.org/10.7557/5.4535

Keywords:

Open Peer Review, Conference Management Software, OpenUP

Abstract

Peer review is a crucial quality assurance step in the scientific method. Traditional single- or double-blind reviewing can result in bias, does not scale well, and offers little incentive for reviewers to do this important work.

Open peer review (OPR) challenges multiple aspects of this traditional method: (i) open identity (author and reviewer identities are known), (ii) open participation (larger communities are involved in the process), (iii) open pre-review (early versions are published before reviewing), (iv) open report (the review itself is published), and (v) open final-version comments (a public forum for discussion after publication).
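
The following is a minimal, hypothetical sketch (in Python) of how these five traits could be encoded as per-venue configuration switches in a conference management system; the names and structure are illustrative only and are not taken from the pilot software.

from dataclasses import dataclass

@dataclass(frozen=True)
class OPRConfig:
    # Hypothetical per-venue switches for the five OPR traits described above.
    open_identity: bool        # (i) author/reviewer identities are visible
    open_participation: bool   # (ii) a larger community may take part in reviewing
    open_pre_review: bool      # (iii) early versions are published before reviewing
    open_report: bool          # (iv) the review text is published
    open_final_comments: bool  # (v) a public discussion forum stays open after publication

# Example: a venue opening identities, participation, reports, and final comments,
# but not publishing submissions before review (similar in spirit to the EMVA pilot).
emva_like = OPRConfig(
    open_identity=True,
    open_participation=True,
    open_pre_review=False,
    open_report=True,
    open_final_comments=True,
)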

During the H2020 project OpenUP, we conducted a pilot study applying OPR to conference settings. Two separate venues were used to test various OPR ideas, their acceptance within the community, and their practicability: the 2nd European Machine Vision Forum (EMVA) 2017 and the eHealth2018 Best Master Student Paper Contest.

For EMVA, we tested novel approaches covering open identity, open participation, open report, and open final-version comments. For eHealth2018, different variants of open identity, open participation, and open report were evaluated.

The submission, review, and decision process is handled by dedicated conference management software (CMS). The proposed OPR workflows could not be handled by existing CMSs out of the box. We therefore created a fork of the popular CMS HotCRP and added support for all experimental OPR workflows needed during the pilots.
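
To illustrate the kind of workflow logic such a fork needs to add, the sketch below decides which parts of a review a given viewer may see under the open identity and open report settings; it is a simplified assumption for illustration and does not reproduce HotCRP's actual code, data model, or configuration options.

from dataclasses import dataclass

@dataclass
class Review:
    reviewer_name: str
    report_text: str
    published: bool  # True once the reviewed paper (and its reviews) are public

def visible_review_fields(review: Review, viewer_role: str,
                          open_identity: bool, open_report: bool) -> dict:
    """Hypothetical visibility rule deciding what a viewer sees of a review.

    viewer_role is "chair", "author", or "public": chairs see everything,
    authors see the reviewer's name only under open identity, and the public
    sees the report only after publication and only under open report.
    """
    fields = {"report_text": None, "reviewer_name": None}
    if viewer_role == "chair":
        fields["report_text"] = review.report_text
        fields["reviewer_name"] = review.reviewer_name
    elif viewer_role == "author":
        fields["report_text"] = review.report_text
        if open_identity:
            fields["reviewer_name"] = review.reviewer_name
    elif viewer_role == "public" and open_report and review.published:
        fields["report_text"] = review.report_text
        if open_identity:
            fields["reviewer_name"] = review.reviewer_name
    return fields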

All pilots were conducted successfully. Feedback was collected through interviews held directly at the conference venues and through survey questionnaires. Overall, the feedback on all tested OPR concepts was positive. The open identity aspects resulted in more transparency, but some participants feared that they also skewed reviews towards being artificially positive (to avoid repercussions). The open participation aspects were very well received; these concepts may allow peer review to scale better, as attendance at many conferences is growing strongly while the number of expert reviewers is not increasing at the same rate. Open report and open final-version comments were seen as positive, but their actual usage was very limited.

Our modified CMS software has been released as open source and is publicly available for future use to foster OPR in conference settings.

Author Biographies

Oliver Zendel, AIT Austrian Institute of Technology GmbH

Oliver Zendel is working on his PhD thesis in the field of machine learning at AIT Austrian Institute of Technology. In the H2020 project OpenUP, he leads a pilot study that investigates novel methods for conducting peer review at conferences.

Matthias Schörghuber, AIT Austrian Institute of Technology GmbH

Matthias Schörghuber is working on his PhD thesis in the field of machine learning at AIT Austrian Institute of Technology. In the H2020 project OpenUP, he contributes to the improvement of open peer review for conference settings.

Michela Vignoli, AIT Austrian Institute of Technology GmbH

Michela Vignoli is a scientist and Open Science expert at AIT Austrian Institute of Technology. Her focus of interest lies in knowledge management in the digital era and in how to foster the transition of the current science system towards more open science. In the H2020 project OpenUP, she contributes to researching novel ways of scholarly communication beyond traditional research channels and alternative peer review methodologies.

Published

2018-11-20

How to Cite

Zendel, O., Schörghuber, M., & Vignoli, M. (2018). Testing Open Peer Review for Conferences. Septentrio Conference Series, (1). https://doi.org/10.7557/5.4535