from the this-is-the-future dept
The key idea behind open access is that everyone with an Internet connection should be able to read academic papers without needing to pay for them. Or rather, without needing to pay again, since most research is funded using taxpayers' money. It's hard to argue against that proposition, or against the idea that making information available in this way is likely to increase the rate at which medical and scientific discoveries are made for the benefit of all. And yet, as Techdirt has reported, academic publishers that often enjoy profit margins of 30-40% have adopted a range of approaches to undermine open access and its aims -- and with considerable success. A recent opinion column in the Canadian journal University Affairs explains how traditional publishers have managed to subvert open access for their own benefit:
An ironic twist to the open-access movement is that it has actually made the publishers richer. They've jumped on the bandwagon by offering authors the option of paying article processing charges (APCs) in order to make their articles open access, while continuing to increase subscription charges to libraries at the institutions where those authors work. So, in many cases, the publishers are being paid twice for the same content -- often charging APCs higher than purely open access journals.
Another serious problem is the rise of so-called "predatory" open access publishers that have distorted the original ideas behind the movement even more. The Guardian reported recently:
More than 175,000 scientific articles have been produced by five of the largest "predatory open-access publishers", including India-based Omics publishing group and the Turkish World Academy of Science, Engineering and Technology, or Waset.
But the vast majority of those articles skip almost all of the traditional checks and balances of scientific publishing, from peer review to an editorial board. Instead, most journals run by those companies will publish anything submitted to them -- provided the required fee is paid.
These issues will be hard, if not impossible, to solve. As a result, many are now looking for a different solution to the problem of providing easy and cost-free access to academic knowledge, this time in the form of preprints. Techdirt reported earlier this year that there is evidence the published version of a paper adds very little to the early preprint version that is placed online directly by the authors. The negligible barriers to entry, the speed at which work can be published, and the extremely low costs involved have led many to see preprints as the best way to provide open access to academic papers without needing to go through publishers at all.
Inevitably, perhaps, criticisms of the idea are starting to appear. Recently, Tom Sheldon, a senior press manager at the Science Media Centre in London, published a commentary in one of the leading academic journals, Nature, under the headline: "Preprints could promote confusion and distortion". As he noted, this grew out of an earlier discussion paper that he published on the Science Media Centre's blog. The Science Media Centre describes itself as "an independent press office helping to ensure that the public have access to the best scientific evidence and expertise through the news media when science hits the headlines." Its funding comes from "scientific institutions, science-based companies, charities, media organisations and government". Sheldon's concerns are not so much about preprints themselves as about their impact on how science is reported:
I am a big fan of bold and disruptive changes which can lead to fundamental culture change. My reading around work on reproducibility, open access and preprints makes me proud to be part of a scientific community intent on finding ways to make science better. But I am concerned about how this change might affect the bit of science publication that we are involved with at the Science Media Centre. The bit which is all about the way scientific findings find their way to the wider public and policymakers via the mass media.
One of his concerns is the lack of embargoes for preprints. At the moment, when researchers have a paper containing what they think is an important result or discovery, they typically offer trusted journalists a chance to read it in advance on the understanding that they won't write about it until the paper is officially released. This has a number of advantages. It creates a level playing field for those journalists, who all get to see the paper at the same time. Crucially, it allows journalists to contact other experts to ask their opinion of the results, which helps to catch rogue papers and also provides much-needed context. Sheldon writes:
Contrast this with preprints. As soon as research is in the public domain, there is nothing to stop a journalist writing about it, and rushing to be the first to do so. Imagine early findings that seem to show that climate change is natural or that a common vaccine is unsafe. Preprints on subjects such as those could, if they become a story that goes viral, end up misleading millions, whether or not that was the intention of the authors.
That's certainly true, but it is easy to remedy. Academics who plan to publish a preprint could offer a copy of the paper to a group of trusted journalists under embargo, just as they would with traditional papers. One sentence describing why it is worth reading is all that is required by way of introduction. To the extent that the system works for today's published papers, it will also work for preprints. Some authors may publish without giving journalists time to check with other experts, but that is also true for current papers. Similarly, some journalists may hanker after full press releases that spoon-feed them the results, but if they can't be bothered to work things out for themselves, or to contact the researchers and ask for an explanation, they probably wouldn't write a very good article anyway.
The other concern relates to the quality of preprints. One of the key differences between a preprint and a paper published in a journal is that the latter usually goes through the process of "peer review", whereby fellow academics read and critique it. But it is widely agreed that the peer review process has serious flaws, as many have pointed out for years -- and as Sheldon himself admits.
Indeed, as defenders note, preprints allow far more scrutiny than traditional peer review, because anyone can read them and spot mistakes. There are some new and interesting projects to formalize this kind of open review. Sheldon rightly has particular concerns about papers on public health matters, where lives might be put at risk by erroneous or misleading results. But major preprint sites like bioRxiv (for biology) and the upcoming medRxiv (for medicine and health sciences) are already trying to reduce that problem by actively screening preprints before they are posted.
Sheldon certainly raises some valid questions about the impact of preprints on the communication of science to a general audience. None of the issues is insurmountable, but addressing them may require journalists as well as scientists to adapt to the changed landscape. Changing how things are done is, however, precisely the point of preprints. The present academic publishing system does not promote general access to knowledge that is largely funded by the taxpayer. The attempt by the open access movement to make that happen has arguably been neutered by shrewd moves on the part of traditional publishers, helped by complaisant politicians. Preprints are probably the best hope we have now for achieving a more equitable and efficient way of sharing knowledge and building on it.
Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+
Filed Under: academic publishing, culture, knowledge, open access, preprints, sharing knowledge