Draft, V1: Transparency in Standards and Practices of Peer Review
Published on Mar 27, 2018
Executive Summary
“Publishing” is a broad—indeed, a broadening—category of activities in the communication of ideas. The vast expansion of the means of communication powered by digital technologies has rendered nearly unrecognizable long-established understandings of what publishing meant and the categories within which it took place.
In the midst of this disruption, it has become more important—not to say urgent—to describe clearly and plainly the distinctive qualities of scholarly publishing. Scholarly publishers claim for their work a unique kind of authority in the contested landscape of knowledge. That claim is a matter of increasing importance—and challenge—at a moment when the expertise and importance of scholarship, or indeed any claim to particular authority in knowledge, are under attack.
Scholarly publishers take a variety of forms; some are university presses focused on publishing books; some are scholarly societies supporting and overseeing the publishing of journals. They perform their work through a variety of economic models, serve a complex variety of scholarly audiences, and focus on a broad spectrum of fields of scholarly inquiry. But one distinctive and identifying characteristic they hold in common is the practice of some form of review of a proposed publication by qualified expert referees as part of the decision process in committing to publishing—the practice of peer review.
Just as there are a variety of fields served by scholarly publishers, the purpose and process of peer review take a variety of forms and have a variety of objectives. Of course, it is impossible and inappropriate to apply the expectations and objectives of a process suited for the needs of one field or discipline to all other fields—or all other publishers. But the fact remains that the concept of peer review is a recognizable characteristic shared by all scholarly publishers—and differentiating those publishers from all other sources of information.
Two contemporaneous but unrelated trends have emerged to make both necessary and desirable the creation of a shared means of denoting this practice of scholarly publishers. The first is the emergence of operations claiming to offer scholars an opportunity to publish—for a fee—in outlets claiming to be both open-access and peer reviewed. The second, more pernicious, is the rise of efforts to discredit practically all claims to authority in knowledge—the emergence of the “fact-free” discourse of “fake news” and choose-your-own-reality information bubbles.1
This report summarizes a year of work undertaken in partnership by a small group of scholarly publishers—the Amherst College Press, the MIT Press, and Lever Press, a publishing consortium of 54 liberal arts institutions—to study the practice of peer review across a broad range of scholarly publishers, and to understand how peer review is perceived, described, and recognized by the users of scholarly information—scholars, readers, librarians, and technologists. Our work was undertaken with a view to exploring a question: Might it be desirable, and possible, to create a way of signaling the form and substance of peer review across scholarly publishers in ways that could be immediately understood by the users of that information?
In undertaking this work, we had in view the example of the system developed and implemented by Creative Commons, which makes it possible for authors, artists, and content creators to communicate the rights they are willing to share with users of their works. This system links together a visual marker (a “button”), a web page of simple explanatory language, the longer and more formal language of a legal license, and a machine-readable element—essentially, a form of metadata—that allows digital systems storing or cataloging the work to record and communicate to potential users the rights associated with that work.
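To make the parallel concrete, the sketch below (in Python) illustrates how a peer-review analog of Creative Commons’ machine-readable layer might work: a small registry pairing each review signal with a visual badge and a definition page, rendered both as embeddable page markup and as a metadata record a cataloging system could ingest. All names, URLs, and attributes here are hypothetical; the sketch is offered only to show how the linked components described above could fit together, not as an existing standard.

```python
import json

# Hypothetical registry of peer review "signals," loosely modeled on the way
# Creative Commons pairs a badge, a plain-language page, a formal license, and
# machine-readable markup. All names, URLs, and attributes are illustrative.
REVIEW_SIGNALS = {
    "closed-double": {
        "label": "Peer reviewed: fully closed (double-blind)",
        "definition_url": "https://example.org/review-signals/closed-double",
        "badge_url": "https://example.org/badges/closed-double.svg",
    },
    "open-report": {
        "label": "Peer reviewed: open report",
        "definition_url": "https://example.org/review-signals/open-report",
        "badge_url": "https://example.org/badges/open-report.svg",
    },
}


def badge_markup(signal_id: str) -> str:
    """Embeddable markup linking the visual badge to its definition page
    (the rel attribute is hypothetical, by analogy with rel="license")."""
    s = REVIEW_SIGNALS[signal_id]
    return (
        f'<a rel="review-process" href="{s["definition_url"]}">'
        f'<img src="{s["badge_url"]}" alt="{s["label"]}"/></a>'
    )


def badge_metadata(signal_id: str, reviewed_object: str) -> str:
    """A JSON record a repository or catalog could store alongside the work."""
    s = REVIEW_SIGNALS[signal_id]
    return json.dumps(
        {
            "reviewedObject": reviewed_object,
            "reviewProcess": signal_id,
            "definition": s["definition_url"],
        },
        indent=2,
    )


print(badge_markup("open-report"))
print(badge_metadata("open-report", reviewed_object="manuscript"))
```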
In this report we review our work, focusing on the conversations of a capstone conference held in January of 2018 with a number of thought leaders from publishing, scholarly societies, research libraries, and technology innovation centers, along with scholarly communications experts. We propose the outline of a system that would make more transparent both the substance and process of peer review implemented by scholarly publishers on the works they publish, making these practices both immediately evident to users and part of the digital record of all works they publish. We set out a proposed taxonomy of types of peer review, offering the idea of a first-order division between “closed” (or historically, but somewhat archaically, “blind”) and “open” forms of review, acknowledging that new forms of the latter sort are emerging as more responsive to the needs of an increasing number of fields.
We suggest the language of definitions describing the meaning of these forms of review. And we suggest a rudimentary system of signals that would link a visual marker with a description of the process undertaken for each work and a definition of each of the principal elements of the process—the object reviewed, and the review performed.
Finally, acknowledging the contingent nature of our proposals, we call on large associations in which scholarly publishers share best practices and adhere to common standards—such agencies as the Association of University Presses, the American Council of Learned Societies, and the Open Access Scholarly Publishers Association—to take up the task of collaboratively engaging and refining a sharable and scalable system for denoting the peer review status of an object. We propose that these conversations include the participation of key providers of platforms supporting and enabling the sharing of metadata on digital scholarly objects—specifically, Crossref—so that digital records associated with each scholarly object can quickly and accurately provide information about its peer review status.
We acknowledge that there are many conversations taking place about the practice and process of peer review across all of scholarly communication that we cannot account for here. In our conversations we included those who have taken a leadership role in shaping new means of providing scholars with ways of receiving credit for the scholarly labor they contribute to the work of peer review, and who have developed new processes and standards for open review in their fields. We included as well leaders in the open science movement, who have made major contributions in the development of pre-print repositories.
In the end, we limit our focus and our proposals to the work of scholarly publishers. By this phrase we mean scholarly presses and the learned societies that take intellectual responsibility for the content of the journals issued in their name or as part of their mission; we do not mean those who perform publishing services on behalf of these actors (who are often referred to as “scholarly publishers”). So, too, while we acknowledge the increasing significance of the role played by pre-print repositories, we limit the focus of our proposals here to the aspects of the publishing process over which scholarly publishers exercise control. While a system making transparent all forms and practices of peer review might be extended to include ways of indicating (say) comments on pre-prints or formal post-publication reviews, we do not feel that scholarly publishers—our main concern here—can take responsibility for defining or signaling aspects of the scholarly communication system over which they have no control.
Peer review and the authority of scholarly publishing
For centuries, the published argument has conveyed a message of distinct authority. Indeed, from nearly the first moment that written argument was linked to the technology of early modern forms of printing, publishing has been used both to establish and to challenge authority—the authority of religious doctrine, of state power, of scientific claims.
The notion of what constitutes “publishing” has been stretched in unprecedented ways by the explosion of digital technologies and forms of communication. With this development, the claims of authority bound up in publishing have become more contentious—and more difficult for readers to parse. Knowledge claims exerting power over not just the importance but the direction of scientific research, as well as over civic discourse and public policy, are based on questionable premises and set forward by actors claiming authority for their ideas. In this increasingly bewildering flood of information, the unique authority of scholarly publishing as a source of rigorous thinking and sound research is challenged—or undermined. Scholarly publishing, as a critical part of the broader mission of institutions of higher education to create and communicate knowledge, may no longer simply argue that the quality of its work is self-evident. It must make clear, to a population overwhelmed with information and uncertain how to sift wheat from chaff, the warrants to its claims.
The foundation on which the authority of scholarly publishing rests is the rigorous evaluation and assessment works must go through before they are published—known as the peer review process. All publishers claiming to be producers of scholarship—whether they are learned societies or scholarly presses—understand that only by the consistent and rigorous implementation of a peer review process can they make such a claim.2 The particular way in which peer review is conducted varies from publisher to publisher, reflecting the standards of a given discipline, the publisher’s mission and purpose, and the audience in view; but the practice itself is a hallmark of the scholarly enterprise.
Of course, peer review is a practice with application in the world of research and scholarship far beyond the relatively focused work of scholarly publishing. Applications for grant funding, systems for evaluating the contributions of individual scholars with a view to professional advancement and the conferral of tenure, are but two examples of other endeavors that bring to bear something recognizable as peer review. Our particular concern, however, is with the role of scholarly publishing as a uniquely trusted source of knowledge, and with the increasingly evident need to support the claim upon which that authority rests.
In light of this, the Amherst College Press and the MIT Press collaborated to examine how a clearer set of agreed definitions for the specific objects and processes of peer review—what gets reviewed, and how it gets reviewed—might be designed by, and shared among, scholarly publishers. This work has encompassed:
Extensive consultations between the organizing partners.
Conference presentations at the Association of University Presses, the Library Publishing Coalition, the Association of College and Research Libraries, and the 2018 University Press Redux conference.
The publication of an article in the Journal of Scholarly Publishing arguing the need for more fully disclosing the nature of the peer review that scholarly publishers bring to bear on their works.
As a capstone to these initial efforts, with the generous support of the Open Society Foundations we gathered key stakeholders in the system of scholarly communication—publishers, academic librarians, scholars, technology innovators, and thought leaders—to share our work, to hear views on both the scope of the problem and the necessary elements of any proposed solution, and to explore the complexity of a number of considerations that shape what peer review is and how it is conducted.
This report summarizes the conversations of that gathering. It concludes with a section proposing for broader consideration an agreed set of definitions articulating what is meant by the various forms of peer review (double-blind, single-blind, peer-to-peer, open report, open identity, etc.) and the scholarly objects that are reviewed (for example, a proposal, a manuscript, or a dataset). We issue this report with the intent of sparking a broader conversation among scholarly publishers, and encouraging these publishers to adhere voluntarily and publicly to a system of transparency in peer review standards and practices proposed here.
Background to the gathering
Nearly two decades ago, as it became clear that digital technology would wreak tremendous change in the systems by which scholars communicate ideas with each other and a broader audience of readers, a gathering convened by (among others) the Association of American Universities and the Association of Research Libraries concluded with a statement on “Principles for Emerging Systems of Scholarly Publishing.” Acknowledging the impossibility of predicting, much less determining, how these technologies would reshape and reconfigure long-established systems of scholarly communication, the statement confined its ambitions to articulating a number of principles serving as basic expectations for any emergent systems. Among them was:
The system of scholarly publication must continue to include processes for evaluating the quality of scholarly work and every publication should provide the reader with information about evaluation the work has undergone.3
While inherent to the meaning of “scholarly publication” is a process for evaluation—the peer review process—it is by no means clear that scholarly publishers have established a simple, meaningful, and uniform means of providing readers with information about how that process was brought to bear on the work in their hands. Yet ask publishers, scholars, or librarians what peer review means, and you are likely to get a range of statements about quality, validation, reputation, trust, and more; the ideas associated with peer review vary according to our roles and our intellectual disciplines.
In the main, the protocols and processes publishers use to peer-review scholarship nevertheless remain opaque to the readers of the content they produce. Historically, the reputation of the publishers themselves was sufficient to give assurance of the quality and rigor of the review processes employed; but that is no longer the case. With the entrance of more and more actors into the work of scholarly publishing—some acting unscrupulously to take advantage of the intense pressures on scholars and researchers to publish—the reputation of all scholarly publishers has become subject to increasing scrutiny. Open access publishers in particular face stiff challenges to overcome a reputation for poor peer review processes—a reputation that, it should be said, is in no way inherent to an open access outcome, and which serves the interest of powerful actors in the system.
To publish scholarship is to vouch for reliable, state-of-the-art research as well as to foster further exploration and informed discussion. Scholarly communication, in the end, is the accredited review and promulgation of scholarship; the value a peer review process confers on a work and its author(s) is increasingly important at a time when truth claims are being challenged and corroded, and facts are open to contestation. As gatekeepers in the process of assuring independent evaluation of the work of scholars, publishers bear a primary responsibility to uphold their rigorous practices of evaluation and assessment. Making these evaluative practices transparent serves the linked interests of giving warrants to the publisher’s claim of authority; affirming the value of the author’s contribution; helping research librarians clearly distinguish peer-reviewed work in the materials they collect; and enriching the discernment of readers assessing contending ideas.
First steps: A stakeholder survey
We began our work by sending out an advance survey to participants in the meeting to assess areas in which consensus might exist and to identify those in which a disparity of views is still evident. We asked participants to indicate what forms of peer review they were familiar with or had participated in; what scholarly objects were appropriately the subject of evaluation by peers; what materials were not, and should not typically be, reviewed; and (because it was a question of particular interest to us) whether respondents thought there would be value in somehow reflecting or signaling the number of reviewers that had been engaged on a specific work.
The responses we received helped us to create an initial sketch of the components of the peer review process that a set of definitions, and a system of signals, would need to encompass. That model (Figure 1) has as a central focus identifying what has been reviewed (a proposal? A complete manuscript? A dataset?); who has done the reviewing (a scholarly peer? An editor? A reader?); and how the work has been reviewed, on a spectrum of fully closed to completely open.4 Other components of peer review that had more salience for some participants than for others were the questions of when elements of the review process had been conducted, how many reviewers had been engaged, and what purpose the review had in view.
With the input of these survey responses, we constructed an agenda for a day-long conversation. Following a reintroduction to the emergence and practice of peer review across the long history of scholarly communication, the conversation focused on the following topics, each facilitated by an invited participant:
Varieties of open review
Peer review of data and experimental protocols
Peer review of preprints
Peer review in time: Sequenced and simultaneous review
Capturing who does peer review
Capturing the intentionality or function of review
The bad actor problem
Metadata and discoverability
A brief history of peer review
The workshop opened with a presentation offering an account of how peer review emerged as the hallmark of scholarly publishing. While on the surface it might seem a practice of long historic standing, peer review has not always functioned as it does today, nor did it have the same primary purposes in view. By some accounts, peer review has its earliest roots in processes created in response to the Reformation to assure theological orthodoxy.5 For our purposes, the long arc of peer review in scholarly publication begins with the Royal Society in the mid-seventeenth century. The Philosophical Transactions of the Royal Society, established in 1665, came about as the founding editor’s effort to share news with an audience generally composed of the Society’s membership. In its earliest years, Philosophical Transactions—as would later be the case with the Philosophical Magazine and Nature—was guided by a strong editor (Henry Oldenburg) who made most decisions on content autocratically; the authority of its publishing was based on the Royal Society’s licensing privilege, a privilege that conferred upon the Society the right to publish materials and to make an independent determination that they were neither treasonous nor blasphemous.
Only in 1752, nearly ninety years after Philosophical Transactions first appeared, did the Royal Society assume editorial control of the journal. Its reason for doing so was one that would be familiar to most scholars today: it was, primarily, to uphold the reputation of the Royal Society. But in engaging referees to pass judgment on contributions before admitting them into the pages of the Transactions, the Society used a process that would not be familiar to peer reviewers today; after an author submitted a paper to a designated representative of the Society, the paper was read at one of the Society’s weekly meetings, and then referred to a “Committee of Papers”—referees whose task was limited to reading, in silence, abstracts of the contributions considered for publication which had been prepared by the Society’s secretary. By the 1760s, France’s Académie royale des sciences accepted its members’ submissions for publication without further ado; the work of non-members was reviewed by two or three members of the Académie, whose task was to draft a consensus report to the authors. The sheer volume of submissions rendered this system unworkable by the 1830s. Contemporaneously in England, written referee reports, in which subject experts began to make suggestions, became more standard practice at the Royal Society around 1832.
The point to be grasped in this recounting of the early years of scholarly communication is that the purpose of “peer review” had more to do with defending and maintaining the boundaries between members and non-members of the (selective, private) scholarly societies—and, by extension, with protecting the reputation, and the exclusivity, of those organizations—than with evaluating the quality of the scholarship itself. Not until the late nineteenth century does the process of refereeing begin to look like peer review; even so, it remained largely a conversation among a closed circle of members, and it was not deemed necessary for scholarship. At Philosophical Magazine, a team of editors made publishing decisions; they read the full text of manuscripts, keeping in mind the interests of their paying subscribers. Nature, also at first the work of a strong editorial hand, used non-academic editors throughout much of the twentieth century, relying instead both on the reputation of the institutions with which authors were affiliated and on personal endorsements attesting to the author’s research (an example of which was Sir W. L. Bragg’s letter of support for Crick and Watson’s paper on the double-helix structure of the DNA molecule) to select work worthy of publication. Not until 1953 did Nature work with professional editors to read submitted papers, and not until the 1960s and 1970s did the refereeing of scholarship become widespread. Indeed, a search of the term “peer review” in the Google Books database (Figure 2) shows that the phrase only emerges in the late 1950s, and only becomes a widespread term of art in the last twenty years of the twentieth century.
In contrast to the work of scholarly journals, the practice of peer review in the world of scholarly monographs has not been deeply examined. In the early nineteenth century, most publishing firms were small, and editors typically made their own decisions. Often a reader on retainer, generally with a specialization in some field, was available to review proposals or book chapters, but was not called in to review a full manuscript. Indeed, university presses founded in the United States in the late nineteenth and early twentieth centuries often had as a primary focus providing a publishing outlet for the institution’s own faculty.
In conversation, participants picked up on and further developed a number of themes from this presentation. The fact that peer review’s initial raison d’être was more focused on protecting the boundaries and distinctions of membership in a club (albeit a club called an “academy”) was important to bear in mind, some argued, not least because vestiges of that original purpose may still be glimpsed in the way the process unfolds—at least in some places. A broader conversation ensued in which it was noted that at specific moments in recent public awareness of science—particularly the AIDS crisis, public debates over genetically modified organisms, and more recently controversies over climate change—“peer review” has attained higher salience in the broader conversation about what constitutes authoritative knowledge and trustworthy scientific claims. Exactly because of this—because those in the academy have often sought to advocate for policy prescriptions or to advance certain ideas in the public square by basing arguments on “peer-reviewed findings”—there was a danger, some argued, that the limitations inherent to peer review were sometimes overlooked.
Varieties of open review
Many participants in the conversation had identified as a key priority for conversation the emergence and significance of open forms of peer review. Some had engaged in such processes in their own or others’ work or had designed open review processes for sources in which they have an editorial role.
A first distinction of semantics proved necessary. “Open peer review” is a term in increasing use, yet one with multivalent and confusing meaning. In some instances, it simply means that the fact a given object was peer reviewed is acknowledged in some way and shared with the reader. But of course, that is precisely what is supposed to be the critical distinguishing feature of all scholarly publishing; describing and communicating the fact it has happened as “open” seems an unhelpful distinction. Rather, what we mean by “open” here is a process in which (a) the identities of author(s) and reviewers are disclosed—at least to each other, and sometimes to the broader readership of a work; and (b) perspectives from interested scholars and readers may be sought on the work—under varying circumstances of access and under the condition of openly identified comments—in advance of its attaining “version of record” status.
In a presentation to open the conversation, a taxonomy comprising five types of open peer review along a continuum was proposed, with an example of a publication using each flavor (Table 1, above).
With open forms of review gaining interest and acceptance across many fields—in a sense, pre-print repositories are basically open review processes at large scale—one important distinction for any system of transparency to disclose is the difference between “open” and “closed” (as opposed to “blind”) systems of review. (Of course, not all pre-print repositories are intended to support open commenting or critique; but the fact that scholars can have access to, and make their own judgments on, materials in advance of publication is in some way a form of review.) Hence a publisher could use a “fully closed” (double-blind) or “partly closed” (single-blind) form of review; or it could use one of a variety of open forms of review. At the very least, the distinction between open and closed should be communicated.
In open forms of review, the fact that reviews are themselves intellectual labor can be acknowledged; indeed, reviews that are both open and published can even be cited. For a publisher to meet the test of providing effective and appropriate reviews often means—especially in the case of multimodal scholarship (for example, video essays)—identifying reviewers able to assess the work through distinct lenses (for example, one with video expertise, another with content expertise). Certainly the multimodal character of these sites means that any part (the video, the text, the review, etc.) can be cited, even if the publisher encourages citing the entire site or a page of the site. In light of this, a suggestion was raised to attach digital object identifiers (DOIs) to videos and, by extension, to other assets in multimodal work, to assure they can be easily and independently cited and discovered. The attribution of a DOI can be made to mark the end of a certain stage in the development of a scholarly object, thus fixing the moment at which its development was paused for the peer review process to happen. Questions were raised about the practicability of this—web sites are not yet seen as “publishers” permitted to assign DOIs, for example. And the increasing granularity of scholarly objects, creating as it would a significant increase in the number of persistent identifiers, could have the paradoxical result of making discoverability harder, not easier.
Peer review of data and experimental protocols
It is not only the text of scholarly arguments—whether an article or a monograph-length treatment of a subject—that is subject to the scrutiny of scholarly peers. In the sciences, the social sciences (particularly after the “replication crisis”), and even in some humanities disciplines, the accumulation of data on which an argument is based is itself made subject to evaluative review. Datasets are increasingly viewed not merely as instrumentalities, but as scholarly objects in themselves; and questions touching on the experimental and/or research methodologies by which they were compiled, the validity of statistical significance argued on the basis of the data, and the availability to other scholars of data upon which an argument is based are taken in view by review processes.
But of course, what constitutes “data” means different things to different scholars—and therefore what it means to publish data means different things as well. In the sciences, data are typically collected by means of experimental processes or carefully structured research processes. The resulting body of collected data—the “dataset”—is a separate object, often one that can be assigned a specific and distinct digital identity. In the humanities, however, “data” are primary sources or secondary sources, typically in archives and libraries. While it is the case that in recent years humanists have worked with increasing facility with digital resources—for example, using GIS data to create a map of a certain social or cultural phenomenon, like the spread of memorial inscriptions through the Greek and Roman Empires—it is typically the case that such research creates layers of new interpretive data upon an existing dataset (in this case, a GIS database).
Similarly, there are varying ways of publishing data: (1) as an accompaniment to scientific articles, in support of research method(s) and results; (2) as data qua data, in publications in data journals; (3) as data submitted to a repository (e.g., genomic data), separately from any publication. In the first two cases the journal’s editors are responsible for ensuring the peer review process; many journals are taking steps to impose more precise requirements on authors, with a view to facilitating reproducibility. The American Economic Review, for example, will only consider submissions for publication “if the data used in the analysis are clearly and precisely documented and are readily available to any researcher for purposes of replication.”6 Similarly, the policy of PLOS journals stipulates that authors are required “to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception”—further noting that “refusal to share data and related metadata and methods in accordance with this policy will be grounds for rejection.”7
An additional consideration raised in discussion focused on the distinction between types of data—raw, processed, and metadata. Current data management plans (DMPs), a matter of concern across all disciplines and fields—but a matter of institutional or funder policy, not publishing practice—are nearly always ambiguous on just what form of collected data must be preserved and made available. Data submitted to journals or deposited into repositories need to be usable, replicable, or discoverable. But unstructured data are difficult at best to reuse, while data published in journals often appear in the “Supporting Information” section of papers—which, not being indexed, makes the data difficult to discover.
Even so, such policy statements do not clarify whether, and how, the peer review of submitted manuscripts extends to peer review of the data upon which the argument of a manuscript is based—and, if it does, in what way and against what standards. Some participants from the world of scientific journals offered the perspective that datasets submitted to repositories are often of more value to other researchers than a dataset developed specifically to submit in conjunction with an article, and hence forwarded the notion that peer review in the world of datasets should therefore focus on the data in repositories (though who the “publisher” taking responsibility for establishing and curating such review processes would be in the case of repository data is less clear).
Two other salient issues were raised during discussion. First, particularly in research involving the use of human subjects—whether behavioral experimentation, oral history interviews, or ethnographic research—Institutional Review Boards (IRBs) typically perform a type of peer review by assuring experimental protocols satisfy ethical standards. (Notably, a typical requirement of IRB review is a clear plan from researchers to protect identifying or other confidential information collected from human subjects—which means, in some cases, that some aspects of data collected must remain inaccessible.) Some held the view that the reviews performed by these bodies could easily be added to the metadata associated with a larger signal about the review process of a resulting publication. Second, some participants asked whether the extension of peer review to software code should be considered. The software that gathers and processes data is increasingly the “instrumentation” of the research environment; it extends to software that goes into cleaning, analyzing, and manipulating data before preparing data visualizations for publication. Proprietary code and instruments often make using, reusing, or reproducing data challenging— and review impossible.
What seems evident is that the significance of datasets as both the fruit of research and the basis of scholarly argument will only increase over time, and in fields far beyond publishing—for example, in the realms of grant funding and translational research. It can be anticipated that data management plans will demand more specificity, with funders increasing the rigor of requirements—and possibly establishing common frameworks guiding their creation. Peer review practices can be a part of this unfolding set of requirements, and a system for transparently signaling what process of review was undertaken on a given object could, through such tools as Crossref (explored more fully below), become linked to, and track with, that object through the course of its use by other scholars. More requirements to pass through review, however, mean more demand on scholars to provide the service of review—a not insignificant consideration.
Peer review of preprints
Preprint repositories have emerged in recent years as an increasingly significant means of communication between scholars. Established largely out of frustration with both the pace of publishing—particularly in the sciences—and the subsequent restrictions on access to published outcomes, preprint repositories have become a pathway for researchers to communicate their work with others in ways that largely disintermediate publishers.
An opening presentation on this subject forwarded the notion that for an article to be a “preprint” it must satisfy three criteria:
it must have a permanent location and a persistent digital identity;
it must be citable; and
it must be in a preprint repository.
This last criterion sparked considerable debate among participants, as it effectively ruled out institutional repositories—even though they satisfy the requirement of providing persistent identity, and often hold citable objects.
Preprints, like manuscripts submitted for publication, can be (and often are) reviewed, with comments provided through the hosting platform; some platforms incentivize reviews by crediting reviewers with “points.” Perhaps because of this, it becomes important to differentiate between peer review and unsolicited (or “crowdsourced”) review; while there can surely be value in review and commentary offered by individuals interested enough to read a work, it is not the case that all reviews are created equal—and a system seeking to provide transparency through signaling needs to be able to capture this distinction.
An increasing number of scholars in the humanities, it was observed, are seeking ways of receiving credit for blog posts and similar texts that have qualities more like preprints and less like formal published objects. One scholar present noted that among his most frequently cited works was a blog post he had written. As discussion unfolded, it became evident that the increasing variety of forms of scholarly expression imposes new challenges for systematizing peer review—and simultaneously increases the necessity of finding clear and consistent ways of communicating by what processes, and on what object, review has been conducted. With more and more scholars simply regarding placement of their works in preprint repositories as sufficient—so long as their work is being cited—and with one possible future for journal publishing being simply offering a curated overlay of existing preprint repositories, clear and transparent practices of peer review can help clarify the scholarly status of a given object, whether or not it is part of a formal publication process. The question, however, becomes—who would have the authority to certify that the peer review signaled on objects outside the publishing process has in fact been done?
Peer review in time: Sequenced and simultaneous review
Does a signaling system for peer review need in some way to capture the sequence of events that “peer review” comprises? It was noted that in the case of journal articles, peer review takes place once the research is done and the article is written. A published book, by contrast, may be subjected to review at a number of stages—the author’s full proposal, the submitted manuscript, even a revised manuscript. If more than one form of review is implemented—say a crowd review process is implemented on a complete manuscript, as well as a partly closed review—those reviews, though independent of each other, may be conducted simultaneously, or in sequence—in which case the findings of one could inform the structure of the other.
Disclosing at which stages of the lifecycle peer review has taken place would showcase the rigor of the process. Concern was expressed, however, regarding the stigmatization of authors whose peer review process had been particularly untidy and complex; exposing the timing and number of review(s) could end up being harmful, or at least confusing, information. (A large number of reviewers, for example, might seem to signal enhanced rigor and higher quality, when in fact it might have been the result of a work that needed a great deal of remedial attention.) The claim was made that a temporal element is “spurious precision” and that proposals—even stringently reviewed ones—are really “promissory notes.”
In the realm of journal articles and data, however, a sense emerged that the timing of the data review, which could happen well after the publication of an article the data supports, might matter. Would a late review of data supporting a published article, it was asked, affect the status of the peer-review badge awarded to that article at publication? One possible way of addressing this concern would be to utilize a metadata management system (for example, Crossref) to assure that the unfolding of the review process on related objects (an article and its dataset) could be tracked in a clear and functional way.
Capturing who does peer review
Many of the responses offered by participants to the pre-meeting survey suggested a need to provide some degree of information about the “peers” who conduct the reviews undertaken as part of a publishing process. Conversation among participants began with a presentation speculating on how it might be possible, through the use of well-constructed metadata, to account in some way for the qualifications brought to bear by specific reviewers—their seniority, their expertise, and their perspectives.
Of course, in open review processes this question is somewhat obviated by the simple fact that the names of reviewers are revealed in some way—whether privately to the author or, in even more open processes, to the reading audience. But such systems are very much in their infancy. Because the standard practice—at least to this point—has been for publishers to preserve the anonymity of the reviewers from the author (and, in some cases, the anonymity of the author from the reviewers), there is a degree to which the received paradigm emerges from, and works to uphold, that original function of maintaining and policing the boundaries of the “club” of a discipline or field. That said, in both article and (to a greater extent) monograph publishing, there can be a variety of reviewers whose insights shape the final published result—scholarly peers, to be sure, but in addition professional staff editors, a series editor, or members of an editorial board.
A community norm among scholars in all fields is the expectation that they will devote some small but not insubstantial part of their time and effort to the intellectual labor of reviewing other scholars’ work as part of the evaluative process of peer review. Yet with a steady rise in the percentage of contingent faculty across institutions—scholars who are rarely if ever encouraged (or rewarded) by their institutions to participate in the community labor of scholarship—there is a concomitant rise in the demands placed on regularly appointed faculty to serve as peer reviewers. Further, as intensifying competitive pressures for access both to professional advancement and to research funding have led to the emergence of various metrics-based systems proposing to measure the impact of publishing outlets and even specific articles, a logical unfolding of the “quest for ranking” might be a desire to rank the reviewers themselves. How this could be accomplished in practical terms, however, is difficult to imagine; and it might bring as well unintended consequences, binding in some way the quality of an argument to the perceived prestige of the reviewers.
So, too, the interest of publishers in identifying and enlisting reviewers is not necessarily perfectly aligned with the ways in which other scholars in the field might wish to evaluate a work. Publishers, for their part, often want to obtain the perspectives of a diverse group of scholars in a given field and invite the author to consider and respond to them before bringing the work forward for publication. Especially with work that challenges established scholarly narratives or disciplinary expectations, publishers can find themselves curating a process of review intended to help strengthen what may be an unconventional yet worthy argument against anticipated lines of critique. In such circumstances reviewers may not wish to be identified—especially if their participation is seen as an implied endorsement of the work.
A final dimension of this conversation centered on whether the number of reviewers involved in the process of reviewing stages or elements of a particular work should in some way be accounted for in a signaling system. Again, while in open and collaborative reviewing systems this becomes somewhat self-evident, in closed review systems a range of reviews can be sought on a given work, especially in monograph publishing. The concern was raised, however, that such a signal might be misread; the fact that a given work had been subjected to a relatively larger number of individual reviews did not necessarily mean that the resulting work was of higher quality, or even that it had been subjected to a more rigorous process. It might mean, by contrast, that it was in greater need of revision; that the reviewers engaged by the publisher came to very different views of the work; or that the variety of arguments and objects comprising the work required a multifaceted review process.
Capturing the intentionality or function of review
Is it possible that a system for signaling the sort of peer review to which a work was subjected might also in some way communicate the reasons why it was reviewed—or the standards against which it was evaluated? Is it essential to do so?
This is by no means an easy question to answer. In the first instance, it is not immediately clear how a system used by a large number of publishers could effectively communicate qualities or characteristics distinctive to each publisher. That is to say, the single most important consideration shaping the evaluative criteria brought to bear by any given publisher has to do with the scholarly mission that publisher has, and the audience it serves; and these things rightly vary from journal to journal, and from press to press.
Of course, the evaluative criteria brought to bear in each field or discipline are distinct for reasons inherent to those fields of scholarship. So, too, different journals or presses publishing work in a given field may have different approaches to the field, or value distinct scholarly perspectives. How this might be contained within a system designed for use by all publishers is difficult to imagine.
Even more complicating are the different parameters bounding how publishers undertake their work. Scholarly societies acting as publishers generally must keep in view the needs and expectations of their membership, typically as understood and stewarded by a publications committee. University presses depend to greater or lesser degrees on institutional subventions—in which case their areas of editorial interest may need to reflect institutional interests and strengths; alternatively, presses receiving little or no institutional funding depend chiefly on sales revenue to be self-sustaining, and as such ineluctably make publishing choices with an eye to marketability—and not only scholarly merit.
The bad actor problem
Whatever else it may mean, peer review is implemented as a way of assuring that certain qualities are present in a work presented for publication. Those qualities may differ from journal to journal and from press to press, but the end in view is the same; the fact that peer review has been conducted is set before the reader as an assurance that a certain set of scholarly standards has been met.
If a system is implemented by which scholarly publishers signal to their audiences the fact that peer review has been conducted, it will inescapably be seen as an assurance of quality; and that fact alone will mean unscrupulous actors seeking to profit from the intense pressures on scholars to publish their research will seek to use such a system to give the appearance of quality. This class of actors—collected under the label “predatory publishers”—must be regarded as prepared to take any advantage possible of a system designed to communicate an assurance of quality, for much the same reason counterfeiters focus their efforts on luxury goods.
How then should a system of peer review transparency, linked to a signaling system disclosing the processes utilized, be governed? How should the ability of scholarly publishers to have access to and implement such a system in their own work be determined?
In a presentation beginning the conversation, two possible alternatives were sketched—describing two different approaches to the problem. The first was an approach under which publishers would, in essence, be self-policing in using the system. The definitions giving meaning to each aspect of the system—the descriptions of both processes and objects—would be made public in some way, as would the expectation that the use of a set of signaling tools would mean the voluntary agreement on the part of publishers to ensure that the review processes they implemented fulfilled the intention and meaning of those definitions. Effort would be invested in creating resources to inform both publishers and authors of the agreed definitions.
A second approach—essentially defining the other extreme—would be a tightly controlled and rigidly policed system for granting publishers access to the tools of the signaling system. Publishers would need to indicate to some central registry their willingness to implement review practices adhering to the agreed definitions of review processes and scholarly objects. Based on this agreement they would be given access to a standard set of images signaling the use of these processes and the objects on which they were used. A “white list” approach (as opposed to a “black list” approach, epitomized by the so-called Beall’s List) would thus be put in place, with the central registry providing the service of maintaining a public list of all publishers that, by agreeing to abide by the definitions, had qualified to have access to the signals. At the most rigorous, the functions performed by the central registry would extend to occasionally auditing the peer review processes of each participating publisher—a sort of “peer review of peer review”—to assure compliance with both the terms of the definitional standards and the intended purpose of the signaling system.
In discussion, participants came to a broad consensus that any badging system for peer reviewed works would quickly become seen as a proxy for quality, and would thus become a target of misuse by publishers whose interests do not necessarily align with the standards of the scholarly community. While no agreement emerged, it was generally felt that in order to attract the greatest possible participation from scholarly publishers while at the same time functioning effectively as a sound and meaningful system conveying a clear picture of what review had been implemented on a given object, some middle point needed to be found between the two extremes of self-policing and a highly elaborated system of accreditation. It might be the case that an existing entity might see this initiative as a logical extension of its own mission within scholarly communication, seeking to provide the offices of a central registry and the function of oversight.
Metadata and discoverability
It is possible that technology may provide critical elements of a solution to this problem—specifically, the possibilities inherent in rich metadata associated with scholarly objects. For this concluding discussion section, participants heard a presentation on the work and mission of Crossref, a technology platform focused on the specific task of creating a digital infrastructure to make all scholarly works discoverable through linked data.
As an issuer of Digital Object Identifier (DOI) records for individual scholarly objects, Crossref has helped create means of linking data about research products (articles, book chapters, even books) to data about researchers. An initiative of Crossref directed at publishers, Crossmark, provides a platform for journal publishers to link together on the webpage of an article information regarding the article’s DOI, the license associated with the work, analytics regarding .html and .pdf downloads, and the way in which the work was peer reviewed—both pre- and post-publication. At present, however, Crossmark’s information about the nature and status of peer review comes directly from the publisher—which means that each publisher may mean different things, and adhere to different standards, in applying commonly attributed labels. At present, Crossref has registered metadata for 10,000 individual, openly published peer reviews from both pre- and post-publication sources. The materials included in these records comprise a considerable variety of texts, including referee reports, editors’ decision letters, and author responses.
In conversation, participants speculated about the possibility of extending Crossmark—or a similar tool—into the publisher’s own workflow, essentially automating a means by which a peer review label could become affixed to each work of that publisher assigned a DOI. Rather than populating this particular metadata field by individual declaration by each publisher, some means could be developed—or coded—by which publishers agreeing to the application of agreed definitional standards in their own work could use a Crossref widget to register each successive step of the peer review process, with the appropriate signal simply being applied to the work in the end.
The metadata potentially collected for reviews could be extensive. Reviews could have, to begin with, their own DOIs—which would enhance the status of such documents as part of an individual researcher’s scholarly record. They could be linked to the publication for which they were commissioned (or volunteered), with an indication given of the stage in the publication process (or the revision process) in which they were written. The title, date, and author of the review could be recorded (the latter potentially associated with an ORCID identity), as well as an indication of the reviewer’s role with respect to the work (a reviewer, editor, an assistant to the reviewer, a reviewer of associated data, etc.). Declarations regarding competing interests could be shared, as well as the reviewer’s recommendation concerning the work, if relevant.
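As an illustration of what such a record might contain, the sketch below (in Python) assembles the fields discussed above into a single structured object and serializes it as JSON. The field names, identifiers, and values are hypothetical and do not reproduce Crossref’s actual peer review deposit schema; the sketch shows only how a review could be described, linked to the work it evaluates, and recorded as metadata.

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional


@dataclass
class PeerReviewRecord:
    """Illustrative metadata for one openly published peer review.
    Field names are hypothetical and do not follow Crossref's deposit schema."""
    review_doi: str                  # DOI assigned to the review itself
    reviewed_work_doi: str           # DOI of the article, chapter, or dataset reviewed
    title: str
    date: str                        # ISO 8601 date the review was completed
    reviewer_name: Optional[str]     # None if the reviewer remains anonymous
    reviewer_orcid: Optional[str]
    reviewer_role: str               # e.g. "reviewer", "editor", "data reviewer"
    review_stage: str                # e.g. "pre-publication", "revision", "post-publication"
    competing_interests: Optional[str]
    recommendation: Optional[str]    # e.g. "accept", "minor revision", if disclosed


example = PeerReviewRecord(
    review_doi="10.9999/review.12345",          # hypothetical identifier
    reviewed_work_doi="10.9999/article.67890",  # hypothetical identifier
    title="Review of 'An Example Article'",
    date="2018-01-15",
    reviewer_name="A. Scholar",
    reviewer_orcid="0000-0000-0000-0000",
    reviewer_role="reviewer",
    review_stage="pre-publication",
    competing_interests="None declared",
    recommendation="minor revision",
)

print(json.dumps(asdict(example), indent=2))
```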
As the collection of metadata makes more discoverable the content of peer reviews, a related issue arises that publishers will need to contend with—the question of the license under which the text of a review is made available. Publishers will need to have clear agreements clarifying the rights in these reviews, and (in cases of open peer review processes) assuring that they have a license from the author of the review to make the text of it available.
Call to action
Conversations among participants led to a broad consensus at the close of the meeting that the time has come to call on all scholarly publishers to move toward a new norm of disclosing in some consistent fashion the way in which the works they publish have been peer reviewed. Optimally, the basis of this norm would be an agreed set of definitions shared by the publishers of scholarly materials, no matter what kind of institution they may be—a learned society, an independent journal, or a scholarly press.
The wide-ranging discussions of our gathering explored many of the challenges that will be encountered shifting from present practice to a new set of norms making the disclosure of peer review standards and practices routine. Not all of these challenges can be solved at once; but that need not stop steps toward the promulgation of a set of definitions and a proposed set of signals that could be set forward as an invitation to all scholarly publishers to join in a shared system of peer review transparency.
Participants discussed a notional system that would make a simple, binary disclosure: a statement or insignia asserting that a scholarly object had been peer reviewed (or, by its absence, the implication that it had not been). Some voices questioned the advisability, practicality, or need to disclose any further information. From the publishers’ perspective, however, the complexity of the parts and pieces that comprise what becomes labeled “peer review” demands a system that strikes a balance between too little and too much disclosure.
In order to attract participants and to be meaningful to readers, such a system will need to satisfy five necessary conditions:
It must be modular, with elements that capture both the scholarly objects reviewed and the processes of review utilized.
It must be extensible, able to accommodate the emergence of new broadly shared definitions and widely implemented processes.
It must be flexible, able to be implemented by a broad spectrum of scholarly publishers in ways appropriate to the works they produce and the audiences they serve.
It must be the shared responsibility of the community of stakeholders to update, refine, and oversee.
Finally, it must reflect the collegial essence of peer review, which is, in the end, a work of the scholarly commons.
Additionally, such a system could both benefit from and support the work of emerging efforts, examined and discussed by participants, to both assure credit to peer reviewers for the work they do and capture in metadata the full scope of what goes into the development of a scholarly publication. These parallel areas of development could be linked to a system of definitional standards and signals, yielding a far more substantive system for reflecting the scholarly labor of peer review and the resulting authority of scholarly publications.
Drawing on our preparatory work and the contributions of participants in the Workshop on Transparency in Standards and Practices of Peer Review, we present in two following appendices what we believe are two essential elements of a system of transparency: A proposed set of shared definitions for peer review processes, and a proposed system for signaling visually the ways in which these processes have been implemented in the case of individual scholarly objects. We welcome comment on these proposals and look to develop a means by which this system can be offered as a resource to all scholarly publishers for voluntary adherence and consistent use.
Proposals for consideration
We offer in the following appendices two results of our efforts of research and collaboration. First is a taxonomy of types of review, divided into two broad classes, “closed” and “open” forms of review, separated essentially by the question of whether the identities of reviewers or authors are in some way kept hidden (or “closed”) from each other and from readers of the work, or disclosed in some way (“open”). So, for example, if an author is at some stage informed of the identities of reviewers—which sometimes occurs in monograph publishing—but those identities are not formally disclosed to the reader, then the review process is still categorized as a type of “closed” review. This appendix also includes simple and straightforward definitions of three types of scholarly objects that can be subjected to peer review: proposals, manuscripts, and datasets. Other objects may emerge, of course.
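By way of illustration only (this is not part of the proposed taxonomy itself), the sketch below shows one way these categories might be encoded in machine-readable form; the class and field names are hypothetical and are ours, not drawn from the appendices.

# Illustrative sketch only: one hypothetical way to encode the review taxonomy.
# Names and fields are invented for illustration; they are not the report's formal definitions.
from dataclasses import dataclass
from enum import Enum


class ReviewClass(Enum):
    """Broad classes of review, separated by whether identities are disclosed."""
    CLOSED = "closed"  # reviewer/author identities withheld from each other or from readers
    OPEN = "open"      # identities fully disclosed in some way


class ScholarlyObject(Enum):
    """Types of scholarly objects that may be subjected to peer review."""
    PROPOSAL = "proposal"
    MANUSCRIPT = "manuscript"
    DATASET = "dataset"


@dataclass
class ReviewDescriptor:
    """A minimal record pairing a reviewed object with the class of review it received."""
    object_type: ScholarlyObject
    review_class: ReviewClass
    note: str = ""


# The monograph example above: reviewer identities shared with the author but
# not formally disclosed to readers still count as a "closed" review.
monograph_review = ReviewDescriptor(
    object_type=ScholarlyObject.MANUSCRIPT,
    review_class=ReviewClass.CLOSED,
    note="Reviewer identities disclosed to the author but not to readers.",
)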
In a second appendix we offer a suggested system of signals that publishers could use to communicate simply and clearly to readers how a specific published work has been reviewed.
A final index includes a list of those who participated in the Workshop we convened in January of 2018. The inclusion of their names does not imply their endorsement of the contents of this report or its recommendations for a course of action.
Next steps
We are aware that our work needs further development and engagement by the broad community of scholarly publishers. We do not claim any authority to insist that publishers adopt a practice of making transparent the kind of peer review that the works they publish have undergone.
At the same time, we believe that the time is ripe for a broader conversation among those who take intellectual responsibility for the content of scholarly publications—the scholars and editors who shape, implement, oversee, and make decisions based on review processes—about how to make clear to readers the rigor of the evaluation a work has undergone. We believe that this step can significantly help to underscore the authority of that work to speak on a matter of scholarly knowledge. And we are aware that individual publishers are beginning to develop and implement systems of their own to do something like what we propose here.
While we applaud such efforts, we believe there is a danger that they may become counterproductive unless they are based on a common effort shared by stakeholders engaged in the system of scholarly communication. And so we set forward our own proposals with a call to organizations that serve as convening points for scholarly publishers to take up these questions and to explore whether consensus can be achieved on the basic building blocks of a system of greater transparency in peer review. This would encompass, at a minimum, a conversation about:
identifying specific scholarly objects that are the subject of review;
establishing broadly shared understandings of the various review processes;
developing a shared system for signaling to readers the review process implemented, and the object on which it was implemented; and
agreeing on a common means of recording in the metadata of each published object the peer review status of that object, as well as creating, where appropriate, metadata registrations for the content of openly published peer reviews (a sketch of what such a record might look like follows this list).
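To make the last point concrete, the following is a minimal, hypothetical sketch of what a machine-readable peer review record might contain. The field names are invented for illustration and do not correspond to the actual Crossref schema or to any existing metadata standard.

# Illustrative sketch only: a simplified, hypothetical metadata record describing
# the peer review status of a published object. Field names are invented and do
# not reflect the Crossref schema.
import json

peer_review_metadata = {
    "object": {
        "identifier": "doi:10.xxxx/example",  # placeholder identifier
        "type": "manuscript",                 # proposal | manuscript | dataset
    },
    "review": {
        "class": "closed",                    # "closed" or "open"
        "identities_disclosed_to_author": True,
        "identities_disclosed_to_readers": False,
        "stage": "pre-publication",
    },
    # Where a review is openly published, a pointer to its registered text could be recorded here.
    "open_review_content": None,
}

print(json.dumps(peer_review_metadata, indent=2))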
We now call on such gatherings of stakeholders as the Association of University Presses, the American Council of Learned Societies, the Open Access Scholarly Publishers Association, the Association of College and Research Libraries, and the Library Publishing Forum to take up this conversation both within their own constituencies and in collaboration with each other.
Aware that the metadata created by publishers and by librarians on the same scholarly objects can have distinct purposes and be, in some ways, separate and not easily reconciled, we further suggest that, as part of these conversations, the providers of metadata platforms—specifically, Crossref—be included at an early stage. As it now stands, library cataloging systems that signal to users the peer review status of scholarly materials are often designed without input from publishers, and can misrepresent to users the actual peer review process of a given object. More rigorous and systematic implementation of metadata standards can make significant progress toward improving these systems.
Questions of intentionality seem critical to any proposal to substantially reform review mechanisms. This section serves to complicate the idea (usefully), but avoids addressing questions of intentionality that are important to the call to action and proposals for consideration below.
Specifically, what uses (intentions) of peer review are implicit in this proposal?
If peer review were reformed consistent with the call to action and proposals, how would actors then be able to use the information that work X was reviewed under conditions Y?
Which decisions/judgements, by whom, would be supported or improved?
What rationale (theory, evidence, principles) suggests that each element of the proposal/call to action is necessary or sufficient for improving these uses, or an integral (causal/mechanistic) part of the use/decision process?
Micah Altman:
This model has similarities to, and is likely influenced by, other works in the field. For example, https://f1000research.com/articles/6-588/v2 includes analogues of the ‘what’, ‘who’, and ‘open’ traits, and refers to purpose in framing and discussion. More generally, here and below, it would be helpful to identify the external work most antecedent to or most related to the core ideas, to provide references for the core factual/historical claims, and to indicate where new ideas are being introduced.
Micah Altman:
This seems to assert that the ecosystem would be best off with ‘traditional’ heavy-weight practices.
In contrast, however, we have alternate approaches to peer review, such as PLOS’s practice of assessment for validity but NOT impact, that may have considerable merit.
I think we can agree that publishers should provide sufficient information for readers to understand and evaluate the review received; I do not think we agree that publishers must uphold their current practices of review.
Mark Edington:
There is no logical connection between “traditional” and “rigorous.” Scholarly publishers may indeed change practices of peer review, but I seriously doubt that any of them would describe them in terms other than rigorous, or at least having the purpose of rigor. An assessment for validity but not impact (pace PLOS) can still be “rigorous” (and one hopes it is).
Micah Altman:
Although peer review has never been generically sufficient to establish this authority — reproducibility/replicability/validation has played a complementary role. Moreover, there are increasing debates in the open science community over whether peer review is a necessary component — or whether, at least in some circumstances, transparency and reproducibility are sufficient to establish authority.
Mark Edington:
This may be one (among many) of the distinctions between science and the humanities. “Reproducibility” is not really a meaningful standard in humanities scholarship; peer review, in these fields, has significant (really, primary) weight in establishing authority…
Micah Altman:
for example?
Micah Altman:
Reference?
Micah Altman:
Might refer to Blaise Cronin’s pioneering work categorizing this area.
Micah Altman:
Agreed. And there is some extant work that one might consider recognizing.
E.g.:
https://content.iospress.com/articles/information-services-and-use/isu775 contains a review of the affordances of scholarly publications
https://scholarlykitchen.sspnet.org/2018/02/06/focusing-value-102-things-journal-publishers-2018-update/ provides an inventory of claimed value-add activities/roles in the publishing process itself.
Micah Altman:
On attribution across the document generally…
Please add an explicit early clarification as to whom the “we” refers. This is especially important with respect to “we propose”. My assumption is that “we” = the project PIs (Mark & Amy).
To acknowledge workshop participants, commentators, etc., outside of the “we”, please consider adding a contributor statement based on http://docs.casrai.org/CRediT
Aileen Fyfe:
cut term ‘referees’ here.
Aileen Fyfe:
Move this sentence up before the French example. And rewrite: ‘In the following decades, the Committee sometimes ‘referred’ papers to one of its members for closer scrutiny; the ‘referee’ usually reported orally at the next meeting. By the 1830s, this oral system was transformed into a system of written, confidential referee reports. These were always written by fellows of the Society.’
Aileen Fyfe:
‘this system of collaborative review’
Aileen Fyfe:
Need paragraph break here, for switching national contexts
Aileen Fyfe:
Open new para: ‘In contrast, in late 18thC France, the Academie royale…’
Aileen Fyfe:
Royal Society, not Academy
Aileen Fyfe:
It’s a good question whether to call these people ‘referees’. I usually do not call them ‘referees’ until we get to the 1830s (though we do have some oral refereeing in 1780s+). I tend to say something like: ‘But the Royal Society’s mode of involving members of the scholarly community to pass judgement on contributions was somewhat different to peer review today’
Aileen Fyfe:
I personally only agree with that if it refers to the situation ‘now’ or ‘since the late 20thC’ - I do not agree that it has been (or will be) universally true. ‘on which the authority of scholarly publishing NOW rests…’ would be fine. (but I know you don’t really want to go into an historical argument here!)
Mark Edington:
excellent point.
Aileen Fyfe:
Surely also ALPSP?
Angela Gibson:
In the figure above and throughout, open and closed review are discussed. Should more prominence be given to the notion of “invited” peer review?
Mark Edington:
Important point. “Invited” actually bridges the two categories. All reviewers in traditional forms of closed review (“double blind” or “single blind”) are, technically, “invited”; and publishers engaging in (or supporting) various forms of open review may invite specific reviewers to take roles in the review process….
Angela Gibson:
To avoid potential negative connotation, emend to “stewards of the process that assures”?
Aileen Fyfe:
I’m OK with gatekeepers, myself.
Jennifer Lin:
All of this pertains to the metadata about a published peer review. And it is already in effect with Crossref’s peer review metadata schema (cf. https://support.crossref.org/hc/en-us/articles/115005255706-Peer-Reviews).
If we want to keep this paragraph, we need to make clear this is already available for publishers. And it would need to be moved up above the previous paragraph (suggested replacement for the previous paragraph).
Jennifer Lin:
Not sure about this paragraph. It’s getting the Crossmark and metadata for peer reviews mixed up, and I don’t recall (based on memory) how the conversation treated both of them on this point. What we have here wouldn’t work, so how about another stab at it (to replace this paragraph):
To fully achieve the group’s various aims, we would need a view into the peer review process across all scholarly literature, not just works with openly published reviews. In conversation, participants speculated about creating a standard suite of metadata to describe the peer review process a publication has undergone. These metadata can become a standard part of the Crossref schema and thus a standard part of each article or book’s metadata. As publishers register their content and include information on the peer review process, the research community would begin to have a view into the type and level of validation performed on these works.
Jennifer Lin:
This is a separate point from Crossmark, so I recommend starting a new paragraph on this. Suggested paragraph:
In Nov 2017, Crossref began supporting publishers who post peer reviews. Publishers can register this content and provide metadata that characterizes the peer review asset (for example: recommendation, type, license, contributor info, competing interests). They can also provide metadata, which offers a view into the review process (e.g. pre/post-publication, revision round, review date). At this point, Crossref has registered metadata for 10,000 individual, openly published peer reviews from both pre- and post-publication sources. The materials included in these records comprise a considerable variety of texts, including referee reports, editors’ decision letters, and author responses.
Jennifer Lin:
Also, they are often not archived either, which makes it even more tenuous.
Jennifer Lin:
DMPs are Data Management Plans
Christie Henry:
Is it as much a danger, or is it that the potential strength of a common effort brings us a more compelling and resonant way of establishing the authority and integrity of knowledge?
Christie Henry:
The rigorous evaluation and assessment of works prior to publication
Christie Henry:
rigorous, critical thinking
Christie Henry:
Is this too limiting? Hasn’t it also engaged more generally new processes and standards of all forms of peer review?
Christie Henry:
and investment? I wonder if there is a stronger word for practice. Peer review is such an elemental component of our DNA
Christie Henry:
differentiates
Christie Henry:
The taxonomy of scholarly publishers is complex;
Christie Henry:
see above, might we want to use “we”?
Christie Henry:
Should we speak in a collective “we”?
Christie Henry:
in which the importance of scholarship, and expertise more broadly, is under attack
Christie Henry:
we find a necessary opportunity to articulate the distinctive qualities of
Christie Henry:
Convert to present? What publishing is and the categories within which it transpires
Sabina Alam:
Where the publisher makes it possible, reviewers who include their ORCID iDs in their reports give readers a quick way to assess the relevance of the reviewers’ own published works. This would of course only work for signed reports.
Sabina Alam:
Regarding open review, a distinction should also be made between formal/invited peer review and ‘informal’ review, i.e. voluntary comments from readers.
Sabina Alam:
Content published on F1000 platforms (e.g. F1000Research, Wellcome Open Research, Gates Open Research, African Academy of Sciences, etc.), which all undergo post-publication invited open peer review, is also subject to a mandatory data sharing policy. The referees are also asked to specifically answer the question “Are all the source data underlying the results available to ensure full reproducibility?”
Nick Michal:
What components of the review should be transparent?
Elements of a review: actual content (# of words, amount of track changes, quality of ideas), synchronicity with editor decision, time to review, review score, # of reviews agreed vs. completed, specific journal, type of review done.
Which of the above should the reviewer be able to approve/deny in terms of sharing (analogous to the visibility of Publons records being limited by journal)?
Alexia Hudson-Ward:
There’s a great deal of good information here, but it is a great deal to take in. If the goal is to draw attention to the importance of transparency in scholarly publishing, then I recommend placing the background and call to action sections somewhere else. They may be free standing or make more sense in another area of the document.
Andy Collings:
What about being able to scale effectively, and to be adopted easily even at large publishing volumes?
Andy Collings:
In February 2018, there was the “Transparency, Recognition, and Innovation in Peer Review in the Life Sciences” meeting (http://asapbio.org/peer-review). In his meeting report, eLife’s Mark Patterson noted: “At the 2018 meeting there seemed to be several points of agreement and potential action. The biggest point of agreement was the support for posting peer review reports.”
Could a signaling system be a step towards this even more transparent future? Do the authors have a view on this, and whether they support moves towards publishing the reviews themselves?
Aileen Fyfe:
Maybe the bit in parentheses should say: ‘(even though they are often referred to as “scholarly publishers”)’
Andy Collings:
I’m not sure I understand why this distinction is being made?