Summary of the Initiative
The fundamental value and distinctive characteristic of scholarly publishing is the fact of peer review. What sets scholarly publishing apart from all other forms of communication is the expectation of a rigorous pre-publication assessment of an author’s ideas and claims by reviewers qualified to make informed judgments as to a work’s quality, accuracy, and originality. In an era of contending ideas about the nature and purpose of the state and the authority of scholarly institutions—a moment in which even the possibility of objective truth has been called into doubt—the clear and consistent implementation of a peer-review process by scholarly and scientific publishers assures the credibility of what Johanna Drucker has called the “gold standard of scholarship.”
Yet while the peer review process has for decades set scholarly publishing apart in quality (and often cost) from other forms of publishing, the key stakeholders in scholarly communication—publishers, libraries, scholars, and students—have never had a clear and consistent way of identifying whether a work has been peer reviewed. Beyond the title of the journal or the name of the press on the spine of the book, no assurance is provided, and no clear standards are implemented, to give credence to the claim that a given work has been subjected to the scrutiny of academic peers.
Others have suggested the need for greater transparency in peer review practices, but with limited success. In early 2000, the Association of American Universities (AAU) and the Association of Research Libraries (ARL) convened a meeting in Tempe, Arizona, to identify “Principles for Emerging Systems of Scholarly Publishing.” The resulting guidelines, known as the “Tempe Principles,” make clear that
The system of scholarly communication must continue to include processes for evaluating the quality of scholarly work[,] and every publication should provide the reader with information about evaluation the work has undergone.
The first of these objectives—the fact of peer review—continues to be observed by all credible scholarly publishers, be they presses or scholarly societies. But the second—the communication of the kind of review undertaken, in a way intelligible to the reader—has rarely been implemented, and never in a systematic way that reflects an agreed set of definitions shared across publishers.
The Amherst College Press and the MIT Press jointly proposed to the Open Society Foundations a grant to support convening a key group of stakeholders and thought leaders in the field of scholarly communication for a meeting in Cambridge, Massachusetts. Now, with the award of a grant from OSF, we are turning toward convening this meeting on January 24, 2018. By the close of this meeting, we intend to:
(a) confirm the urgency of developing a system for communicating to readers the type of review implemented on any given work;
(b) arrive at an agreed set of definitions articulating what is meant by the various forms of peer review (double-blind, single-blind, peer-to-peer, open) and the scholarly objects that are reviewed (for example, a proposal, a manuscript, or a dataset), as illustrated in the sketch following this list;
(c) describe a potential system for signaling clearly and consistently to readers both the object that was reviewed and the process by which it was reviewed prior to publication; and
(d) invite scholarly publishers to assent voluntarily to these definitions and utilize a system for signaling peer review processes.
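Purely as an illustration of what an agreed, machine-readable taxonomy might look like (and not as a proposed standard), the short Python sketch below encodes the review processes and reviewed objects named in (b) as a small controlled vocabulary. All names and values in it are placeholders of our own, pending the definitions the meeting would actually agree on.

```python
from dataclasses import dataclass
from enum import Enum


class ReviewProcess(Enum):
    """Forms of peer review named in objective (b); an agreed taxonomy may differ."""
    DOUBLE_BLIND = "double-blind"
    SINGLE_BLIND = "single-blind"
    PEER_TO_PEER = "peer-to-peer"
    OPEN = "open"


class ReviewedObject(Enum):
    """Scholarly objects that may be the subject of review."""
    PROPOSAL = "proposal"
    MANUSCRIPT = "manuscript"
    DATASET = "dataset"


@dataclass(frozen=True)
class PeerReviewSignal:
    """A machine-readable statement of how a given work was reviewed."""
    process: ReviewProcess
    reviewed_object: ReviewedObject


# Example: a monograph whose complete manuscript received double-blind review.
signal = PeerReviewSignal(ReviewProcess.DOUBLE_BLIND, ReviewedObject.MANUSCRIPT)
print(f"{signal.reviewed_object.value} reviewed via {signal.process.value} review")
```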
The explosion in the number of media outlets and sources of published information in the digital age has led to a conundrum: the trustworthiness of published information seems to be inversely proportional to the amount of information published. The power of false, spurious, and incendiary claims to gain credibility and persuasive power has been amply demonstrated in recent years, proving the truth of Jonathan Swift’s observation that “falsehood flies, and truth comes limping after it.” In such circumstances, it is necessary—and increasingly urgent—that scholarly and scientific publishing develop clear, credible ways to communicate its distinctive value and authority as a source of information and ideas subjected, without exception, to informed, painstaking evaluation prior to public release.
The distinguishing feature of scholarly publishing has always been this high standard of assuring the quality of the work published—whether a scientific article, a book-length historical study, or a work of philosophical argument. Moreover, all participants with a stake in the scholarly communications ecosystem have a shared interest both in assuring that these high standards are consistently observed and in having ready access to information about what scholarly object was reviewed and what process was employed to review it. A core rationale for such a step is to make assessments of publication quality and associated prestige more transparent and less tied to assumptions about a publisher or journal brand.
Authors would be supported by such a system because it would help undergird the authority of their arguments. Among other benefits, such an approach would make it less fraught for scholars to choose open access models and publishers in the context of promotion and tenure, given independent signals of quality and rigor that can inform assessment processes.
Research libraries—which, as a principal function, teach students to distinguish peer-reviewed materials from other forms of published expression—would be supported in this critical mission by the existence of a clearer means of identifying peer-reviewed materials, potentially by machine-readable methods that integrate with information discovery systems.
Publishers who voluntarily accept the discipline of adhering to agreed definitions of peer review, and who use a common set of signaling devices to share with readers the scrutiny to which every title has been subjected, would set themselves apart from the increasing number of predatory publishers who seek to take monetary advantage of scholars desperate to publish—and who brazenly misrepresent their peer review processes to attract authors.
Finally, all readers would have a way of finding what the Tempe Principles call for, but which has not in fact yet been offered—“information about evaluation the work has undergone.”
Change—even constructive change—happens slowly within scholarly publishing. It is a system characterized by strong traditions, a rigid hierarchy of prestige, and increasingly parlous economics. To maximize the chance that a new system of ensuring and communicating peer review finds widespread adoption among scholarly publishers and societies, it will be necessary to gather a critical mass of leaders and respected voices in the community to articulate a consensus around the need for, and the path to, a new regime.
Our theory of change begins from the position that scholarly communication is a global system of significant influence but constrained resources, in which the most influential actors are often best positioned to propose and model change. As such, we believe the path to change depends first on building a strong consensus among a key group of leaders and opinion-shapers in the field. Accordingly, our goals are:
To build consensus among a key group of leaders and influencers in scholarly communication by gathering them together for consultation focused on developing a taxonomy of standard peer review practices, and a system of communicating the application of these practices.
To communicate the resulting consensus emerging from this meeting to the community of scholarly publishers and scholarly societies, inviting their voluntary collaboration in a system of abiding by agreed definitions of peer review and a common system for communicating how and when they are utilized.
To invite scholarly and scientific publishers and societies to adopt the resulting peer review taxonomy and agree voluntarily to the use of a common set of signaling systems (similar to those developed by Creative Commons in the case of rights) applied to their published works. For example, values in this peer review taxonomy could be coded as metadata associated with Digital Object Identifiers (DOIs) for journal articles and books that publishers register with CrossRef, and thereby become machine readable, mineable, and integrated downstream into library and other discovery systems.
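As a hedged sketch of what “coded as metadata associated with DOIs” could mean for a downstream system, the Python fragment below queries the public Crossref REST API (api.crossref.org) for a work’s record and looks for a peer-review field. The field name “peer-review” and its contents are hypothetical placeholders of our own; the actual element names, and whether Crossref would expose them this way, would be settled by the standards process and by Crossref’s deposit schema.

```python
from typing import Optional

import requests


def peer_review_signal(doi: str) -> Optional[dict]:
    """Fetch a work's Crossref record and return its (hypothetical) peer-review metadata.

    The "peer-review" key below is an assumption for illustration only; the real
    element names would depend on the agreed taxonomy and Crossref's schema.
    """
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 404:  # DOI not registered with Crossref
        return None
    resp.raise_for_status()
    record = resp.json()["message"]
    # Expected shape (hypothetical): {"process": "double-blind", "reviewed-object": "manuscript"}
    return record.get("peer-review")


# A library discovery system could surface such a value as a facet or catalog note.
if __name__ == "__main__":
    print(peer_review_signal("10.1000/xyz123"))  # placeholder DOI for illustration
```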
We anticipate the following concrete outcomes from this effort:
A report on the meeting asserting the need for more systematic means of communicating peer review processes, articulating the agreed definitions of peer review, and proposing a system of tracking and signaling.
The distribution of this report to all members of the Association of American University Presses, the Society for Scholarly Publishing, and the American Council of Learned Societies, and at other gatherings of scholarly and scientific publishers.
The creation of a website (e.g., prstandards.org) providing information about standard peer review practices and resources to help publishers apply these practices, and offering a pathway for publishers to confirm their support for, and participation in, peer review transparency.
The development of a list of scientific publishers, scholarly presses, and scholarly societies using these standards in their published work.
The routine use of a tracking and signaling system by publishers (see one proposed example in the Appendix) that makes clear the peer review process employed for a given work, and the inclusion of that information by librarians in cataloging these works.
We will communicate and promote this work through several channels:
The creation of a website (noted above) detailing, explaining, and advocating for adherence to and use of peer review transparency standards.
Blog posts authored by key participants in the effort describing the work and emphasizing its significance as a means of setting scholarly publishing apart and making its value clear. We will target such outlets as The Chronicle of Higher Education, the Scholarly Kitchen (a publication of the Society for Scholarly Publishing), Inside Higher Ed, Educause Review, and others.
The organization of a speakers’ panel, drawn from the planners of this effort, to speak about peer review transparency at key gatherings of stakeholder communities in scholarly and scientific communications. Examples include the annual conferences of the Association of American University Presses, the Society for Scholarly Publishing, the Association of College and Research Libraries, and the Charleston Library Conference.
The impact of this initiative can be measured and evaluated by tracking:
The number of publishers voluntarily adopting these standards and joining the signaling regime.
The number of research and library discovery systems integrating the signaling regime.
The number of works published utilizing a system for signaling the use of these standards.
The proposal to host this meeting is the culmination of a set of ideas first discussed and developed by the editorial board of the Lever Press, a scholarly publishing initiative launched by a consortium of leading liberal arts colleges for which the Amherst College Press serves as the editorial lead. The first result of this conversation was Lever’s development and publication of a “Statement of Peer Review Commitments and Guidelines,” undertaken principally to help bolster the press’s reputation as a new publisher of scholarly work.
As the leaders of Lever Press began speaking with colleagues about this document, it quickly became clear that there was considerable interest in sharing these ideas, and that creating a community of practice would offer an answer to the critical need for scholarly publishing to give warrants for its unique quality and value. What followed were formal presentations at both the March 2017 meeting of the Library Publishing Coalition (in Baltimore) and the June 2017 meeting of the Association of American University Presses (in Austin). Further presentations are scheduled for the October 2017 meeting of the Oberlin Group (at Reed College) and the November 2017 Institute for Chief Academic Officers of the Council of Independent Colleges.
Our plan for outreach takes as a baseline an expectation of hosting this meeting on January 24, 2018.
30 days prior to the meeting:
Establish a website at prstandards.org and a Twitter feed at @prstandard.
30 days after the meeting:
Draft and circulate among participants a summary report of the meeting, together with an agreed set of definitions for types of peer review, a set of proposed signaling icons for indicating the types of peer review employed on a given publication, and a pathway for presses and scholarly societies to indicate their adherence to the transparency regime.
Contact member presses of the AAUP and scholarly societies (particularly those part of the American Council of Learned Societies) to inform them of the meeting and its outcome, and invite their participation in advance of a public announcement of the website.
60 days after the meeting:
Public launch of the website; press release announcing the standards and list of participating presses and scholarly societies.
3-12 months after the meeting:
Organize panels and presentations at key stakeholder and scholarly gatherings to explain and discuss the standards and signaling system.
Draft and place blog posts by influencers involved in the initial meeting for publication in key outlets in scholarly communication.
Monthly press releases to announce new signatories to the standards.