Scientists! What are you supporting?

Much has been said about how expensive academic journals are. Large companies like Elsevier, Sage, Springer Nature, Taylor & Francis, and Wiley publish most of the major journals, and their shareholders pocket much of the “rent” they receive thanks to academics’ labor.


CC BY-SA 4.0 Fluffybuns.

There are alternatives. One of them is based on Wikipedia, whose process for vetting information is more transparent than that of most journals. The back-and-forth between authors and other Wikipedia volunteers that results in changes to Wikipedia is right there on the talk pages, available as it happens, and anyone can chime in. Contrast this with academic journals, which are largely a closed shop.

To be fair, while the “shops” may be closed, they do have more windows than they used to. Many journals have come out from behind the paywalls, and now practice more accountability, such as by indicating which editor handled each article, and by having a policy on editors publishing in their own journal. To their credit, the Association for Psychological Science journals, for example, have long had a policy that when an editor or associate editor submits to their own journal, the review process for their article is managed by an external guest editor to avoid conflicts of interest. When I was an associate editor several years ago at one APS journal (AMPPS), this is what we did. I recently realized that not all APS editors are aware of their own policy, however, and that sort of forgetting is another example of why keeping the windows open, so that we can see what is happening inside, is important.

Photo: Public domain.

As part of the open windows principle, we should also expect journals to produce evidence that they effectively evaluate submissions for whether they are scientifically sound. Now, if asked how we can be confident that they are publishing quality scholarship, most journal editors would point to peer review. When asked to produce examples, however, they’d have to say something like “Can’t do that! Peer review reports are confidential.”

This “you’ll just have to trust us” type of situation is ironic for a class of people who have long held skepticism to be critical to what they do. And for me as an acculturated academic, I confess it almost feels like a betrayal to state this as plainly as I have. I imagine colleagues trying to push the point aside, with responses like “Alex, you know we try hard to get good peer reviewers; besides, in the end, science yields things that work, so your point is misleading.”

Photo: Public domain.

I actually agree that science works on average, but readers often need to know whether there is much reason to have confidence in particular papers. Fortunately, not all editors are so defensive that they cannot acknowledge this point. It took time, but by about a decade ago, a bunch of journal editors had freed the peer review reports from the confines of their password-protected journal management systems, allowing anyone to read them. Finally, readers had direct evidence of how well a journal was actually vetting its articles. Just as importantly, readers no longer had to rely on overall journal reputation to guess at the process undergone by an individual article – they could actually see the peer review reports for an article they were interested in.

While the processes happening inside journals had to be dragged into the open, Wikipedia and its associated projects have always had openness baked in.

One project associated with Wikipedia is the WikiJournal of Science. This is a proper scholarly journal, one indexed by mainstream publication databases such as the Scopus database maintained by Elsevier. But unlike a conventional journal, most of the peer review process at the WikiJournal of Science happens in the open from the beginning. It’s all in the “Discuss” page that sits alongside each article.

Mainstream journals have been converging on this model too. Four years ago, the prestigious journal eLife announced that it would only review manuscripts that had already been published elsewhere as a preprint, as part of its “long-term plan to create a system of curation around preprints that replaces journal titles as the primary indicator of a paper’s perceived quality and impact”. This has always been the preferred route for the WikiJournal of Science – manuscripts ideally are submitted by linking to a publicly available preprint.

I’ve been an associate editor for the WikiJournal of Science for a year or so. One manuscript I handled reported a study suggesting that geckos spontaneously “play” by using running wheels. As the editor, I was pleased to have the opportunity to usher in new knowledge about these gravity-defying reptiles.

The Australian house gecko. CC-BY me.

One of my first jobs was to email several experts on animal play to ask them to review the manuscript, which the author had posted as a preprint on WikiJournal Preprints. Two agreed, and after receiving the peer reviews, I posted them on the preprint’s Discuss page where, if anyone else were moved to do so, they could also comment. The author responded to the reviewers’ comments, and those responses can also be seen on the Discuss page. Much of the reviewing process, then, works like that of a conventional journal, just more transparently and in real time.

When I edited that manuscript, I had no scientific knowledge of animal play (moreover, I had consistently resisted our dog’s offers to give me real-world experience).

Hugo. More attractive dogs abound, but yeah, you can use this photo if you want.

It would have been nice if we had had a more knowledgeable editor for the gecko manuscript, but we’re currently spread pretty thin in the editorial department. That’s one reason for this post (apply to be an editor! You don’t need to know anything about geckos!).

As the 🦎 example illustrates, like a conventional academic journal, we publish original research at the WikiJournal of Science. But the most common use of the journal is for academic peer review of articles that are intended for Wikipedia itself, and these typically don’t include original research. Before I joined the journal as an editor, for example, I saw that the Wikipedia article for “multiple object tracking” was a bit spotty in its coverage. Unsurprising, of course, as it’s quite an obscure topic. But because I had just written a short book on object tracking, I considered myself well-placed to write a more comprehensive Wikipedia entry. The eventual article I wrote was based on my book, together with others’ publications, so it didn’t count as the type of original research that is prohibited by Wikipedia.

I submitted my draft Wikipedia article to the WikiJournal, and it eventually passed peer review. As a result, the editor replaced the existing Wikipedia entry with my article. This was quite satisfying – given how widely Wikipedia is used, my contributions to this obscure topic are probably now much more influential than if they had remained confined to academic journals and my book.

A nice aspect of the WikiJournal of Science is that part of the revision process occurs almost instantaneously, thanks to its wiki infrastructure. As I read through a submission, I typically make small edits on the preprint itself to improve the language, just as many Wikipedians do when they come across a Wikipedia entry they are interested in. The reviewers of the manuscript are able to do the same thing. The author is not obliged to keep these edits, of course; they can revert them and explain why in their response letter.

This really should be seen as basic functionality, as it is similar to the nearly universally used Track Changes in Word or Google Docs. But despite most of us collaborating on documents in that way for decades now, most academic journals still don’t offer it.

Reviewers and editors at traditional journals typically can’t enter the journal’s system and directly make suggestions on the manuscript. Instead, they write their comments in a separate, standalone document or form. This gap in scholarly communication tooling is one illustration of how little the scientific community has gotten for the billions of dollars it pays publishers each year (the previous link is for APCs alone; it doesn’t even include subscription payments, or the free peer reviewing that academics do).

The unwieldiness of journals’ systems is not because corporations generally fail to deliver good products or to continually improve their services; many do. But academic journals are not part of a functioning economic market. In the dreamworld of a functioning market for scholarly communication, the journal that provides the best service and features would win the most market share. In the world we actually live in, the owners of the journals (sometimes the publisher, sometimes a scientific society) simply wait for submissions from researchers. They know that researchers will stick with the journals that have the highest impact factors in their field, which in turn helps those journals maintain a high impact factor, with the fees charged and the quality of the services provided having little effect.

I think that all of this means that you should support diamond open access journals in general, not just the WikiJournal of Science.

Rob Lavinsky, iRocks.com. CC-BY-SA-3.0.

Diamond open access journals are those that are free to read and free to publish in. They typically use open source software (the wiki infrastructure in the case of the WikiJournal of Science, and Open Journal Systems for thousands of other diamond OA journals) hosted by a nonprofit institution, such as a university. The open source software does tend to be clunkier than the big publishers’ systems, which makes it more annoying for the academic editors involved. But the alternative – letting corporate publishers handle things in return for billions of dollars and a corruption of academic values – is an even worse deal.

But why should you, an individual scholar, have to do something about this? The primary way that scientists in a field come together to get things done (aside from doing science, reviewing, and editing itself) is through scholarly societies. Scientific societies were designed to serve scientists’ interests. They should be leading the way to reducing dependence on corporate publishers and creating diamond OA journals.

But many scientific societies have been captured by their publishers. Here’s how it happened. As part of a contract giving a publisher the right to publish the society’s journal, the publisher provides the society with a payment. Over the years, this payment rose, reflecting the steady increase in subscription and/or APC fees. While the payment is only a small fraction of what the journal makes (otherwise the publisher wouldn’t have the high profits that it does), it’s a substantial amount of money for a scientific society, and quite a high percentage in the current era of declining in-person conference revenue. Societies pay much of their staff salaries out of this money, and many hired more staff with it. At many societies, these staff end up making most of the decisions, or advising the academics who ostensibly make the decisions but offer little resistance. As the staff’s jobs depend on maintaining the society’s revenue, giving up the publishing income is a non-starter. This dynamic has played out even at some of the most respected and active scientific societies, as we recently learned in the case of the American Association for the Advancement of Science.

Within psychology, the Association for Psychological Science (APS) is another example. Six months ago, APS suddenly announced it was starting a new journal, with no evidence of consultation with academics. Indeed, the announcement was strangely light on details about why it was starting a journal and what the vision for it was. So I wasn’t the only one who suspected this was concocted simply to create a new revenue stream.

Yesterday, I did some digging. The publisher used by APS, Sage, maintains a spreadsheet listing the publication fees (APCs) for the open access journals it publishes. Advances in Psychological Science Open is now in that list, just below Advances in Methods and Practices in Psychological Science, formerly APS’s only fully open access journal. The price to publish in the new journal? Two grand and five bills!

That APC (Article Processing Charge) of $2500 is $1500 more than the APC for APS’s better-established open access journal (AMPPS).

In short, APS is starting an expensive journal that has little to no buy-in from the community (judging from social media), hoping that demand for the prestige of the APS brand, combined with the reject-and-refer system developed by PLOS and perfected by Nature Publishing, will bring the money rolling in.

I’m here all week, folks, re-inventing old jokes.

If you’re a tenured academic, you shouldn’t be editing for journals like that!

I’d better rephrase that. Because admittedly, I myself took an editorial stipend from APS, first at Perspectives on Psychological Science over ten years ago when some of us started the Registered Replication Report format there, and subsequently when we co-founded the journal Advances in Methods and Practices in Psychological Science.

Here’s my rewrite: if you are a tenured academic, you should be devoting a bunch of your time to cultivating alternatives to the usual money-sucking journal racket.

Over at freejournals.org, we highlight quality diamond OA journals, and we diamond OA editors try to support each other. So here I am, trying to promote this. While not many people read this blog, a lot of people are occasionally forced to read emails from me (simply because I am a more-or-less tenured academic). Therefore, I have changed my email signature: I now advertise the diamond OA initiatives that I am most involved in.

My email signature, some of the time.

And now it is time for me to turn to other activities for avoiding the news.

Bridgeman Art Library. Supplied by The Public Catalogue Foundation.

Postscript. Perhaps the biggest challenge facing the WikiJournal of Science is our high liability insurance bill (for things like defamation suits); my colleagues have contacted dozens of insurers, but none would offer a better rate. And that was before Elon Musk started threatening Wikipedia! If you think you can help us, please get in touch.

Let your blog run free!

Don’t allow your writing to be tied to one platform – register your science-related blog with Rogue Scholar, the free blog indexing service helping bring science blogs into scholarly database infrastructure.

I checked with Martin Fenner, who created and runs Rogue Scholar, and he said it works fine with non-paywalled Substack blog posts. I think that’s because the full text of free posts is provided by Substack in the RSS feed; I believe Rogue Scholar needs the full text partly to extract some of the metadata needed to populate scholarly databases.
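If you want to check your own blog before registering, here is a minimal sketch (the feed URL is hypothetical, and the full-text heuristic is my assumption about how RSS feeds typically work, not a description of Rogue Scholar’s actual code):

```python
# Minimal sketch: check whether an RSS feed carries full post text.
# Requires the feedparser package (pip install feedparser).
import feedparser

# Hypothetical feed URL; substitute your own blog's feed.
feed = feedparser.parse("https://example.substack.com/feed")

for entry in feed.entries[:5]:
    # Full-text feeds typically populate the `content` field;
    # summary-only feeds provide just a short `summary`.
    label = "full text" if entry.get("content") else "summary only"
    print(f"{entry.title}: {label}")
```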

I should say, however, that I found the registration page difficult to navigate and needed Martin’s help to register my blog. It turns out that this is because the site was recently re-worked and some parts still rely on the legacy codebase, making some of the site’s internal links confusing. Growing pains are to be expected, however, especially for a free project – one that I believe is very much worth your support!

Check out what Rogue Scholar has accomplished over the last year: https://blog.front-matter.io/posts/report-rogue-scholar-advisory-board-meeting-october-16-2024/

An executive summary of science’s replication crisis

To evaluate and build on previous findings, a researcher sometimes needs to know exactly what was done before.

Computational reproducibility is the ability to take the raw data from a study and re-analyze it to reproduce the final results, including the statistics.

Empirical reproducibility is demonstrated when another team redoes the study and finds the critical results reported by the original.

Poor computational reproducibility

Economics. Reinhart and Rogoff, two respected Harvard economists, reported in a 2010 paper that growth slows when a country’s debt rises to more than 90% of GDP. Austerity backers in the UK and elsewhere invoked this many times. A postgrad failed to replicate the result, and Reinhart and Rogoff sent him their Excel file. They had unwittingly failed to select the entire list of countries as input to one of their formulas. Fixing this diminished the reported effect, and using a variant of the original method yielded the opposite of the result that had been used to justify billions of dollars’ worth of national budget decisions.
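To make the mechanism concrete, here is a minimal sketch of that kind of range error, using made-up numbers rather than Reinhart and Rogoff’s actual data:

```python
# Minimal sketch of an Excel-style range error: a summary statistic
# computed over a selection that silently omits some rows.
import pandas as pd

# Hypothetical growth rates for high-debt countries (made-up numbers).
growth = pd.Series({
    "Australia": 2.5, "Belgium": 1.9, "Canada": 2.2,
    "Denmark": 1.6, "Estonia": 3.1,
})

# Intended calculation: average over all countries in the group.
print(growth.mean())           # 2.26

# The error: the selected range stops short, silently dropping
# the last rows, and no warning is raised.
print(growth.iloc[:3].mean())  # 2.2
```

Spreadsheets make this failure mode especially easy, because a formula’s cell range is fixed by hand and nothing flags rows that fall outside it.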

A systematic study of economics articles found that only about 55% could be reproduced, and that’s counting only studies for which the raw data were available (Vilhuber, 2018).

Cancer biology. The Reproducibility Project: Cancer Biology found that for 0% of 51 papers could a full replication protocol be designed without input from the original authors (Errington, 2019).

Not sharing data or analysis code is common. Ioannidis and colleagues (2009) could reproduce only 2 of 18 microarray-based gene-expression studies, mostly due to incomplete data sharing.

Artificial intelligence (machine learning). A survey of reinforcement learning papers found that only about 50% included code, and in a study of publications on neural-net recommender systems, only 40% were found to be reproducible (Barber, 2019; Ferrari Dacrema et al., 2019).

Poor empirical reproducibility

Wet-lab biology. Amgen researchers were shocked when they were able to replicate only 11% of 53 landmark studies in oncology and hematology (Begley & Ellis, 2012).

“I explained that we re-did their experiment 50 times and never got their result. He said they’d done it six times and got this result once, but put it in the paper because it made the best story.” – Begley

A Bayer team reported that only ~25% of published preclinical studies could be validated to the point at which projects could continue (Prinz et al., 2011). Due to poor computational reproducibility and methods sharing, the most careful replication effort so far (Errington, 2013) found that, of 50 high-impact cancer biology studies, only 18 could be fully attempted; it has finished only 14, of which 9 were partial or full successes.

From Maki Naro’s 2016 cartoon.

Social sciences

62% of 21 social science experiments published in Science and Nature between 2010 and 2015 replicated, using samples on average five times bigger than the original studies to increase statistical power (Camerer et al., 2018).

61% of 18 laboratory economics experiments successfully replicated (Camerer et al., 2016).

39% of 100 experimental and correlational psychology studies replicated (Nosek et al., 2015).

53% of 51 other psychology studies replicated (Klein et al., 2018; Ebersole et al., 2016; Klein et al., 2014), as did ~50% of 176 further psychology studies (Boyce et al., 2023).

Medicine

Trials: data for >50% of trials were never made available, ~50% of outcomes were not reported, and authors’ data are lost at ~7% per year (DeVito et al., 2020).

I list six of the causes of this sad state of affairs in another post.

References

Barber, G. (2019). Artificial Intelligence Confronts a “Reproducibility” Crisis. Wired. https://www.wired.com/story/artificial-intelligence-confronts-reproducibility-crisis/

Begley, C. G., & Ellis, L. M. (2012). Raise standards for preclinical cancer research. Nature, 483(7391), 531–533.

Boyce, V., Mathur, M., & Frank, M. C. (2023). Eleven years of student replication projects provide evidence on the correlates of replicability in psychology. PsyArXiv. https://doi.org/10.31234/osf.io/dpyn6

Bush, M., Holcombe, A. O., Wintle, B. C., Fidler, F., & Vazire, S. (2019). Real problem, wrong solution: Why the Nationals shouldn’t politicise the science replication crisis. The Conversation. https://theconversation.com/real-problem-wrong-solution-why-the-nationals-shouldnt-politicise-the-science-replication-crisis-124076

Camerer, C. F., et al. (2018). Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nature Human Behaviour, 2(9), 637–644. https://doi.org/10.1038/s41562-018-0399-z

Camerer, C. F., et al. (2016). Evaluating replicability of laboratory experiments in economics. Science, 351, 1433–1436. https://doi.org/10.1126/science.aaf0918

DeVito, N. J., Bacon, S., & Goldacre, B. (2020). Compliance with legal requirement to report clinical trial results on ClinicalTrials.gov: A cohort study. The Lancet. https://doi.org/10.1016/S0140-6736(19)33220-9

Ebersole, C. R., et al. (2016). Many Labs 3: Evaluating participant pool quality across the academic semester via replication. Journal of Experimental Social Psychology, 67, 68–82. https://doi.org/10.1016/j.jesp.2015.10.012

Ferrari Dacrema, M., Cremonesi, P., & Jannach, D. (2019). Are we really making much progress? A worrying analysis of recent neural recommendation approaches. Proceedings of the 13th ACM Conference on Recommender Systems, 101–109. https://doi.org/10.1145/3298689.3347058

Errington, T. (2019). https://twitter.com/fidlerfm/status/1169723956665806848

Errington, T. M., Iorns, E., Gunn, W., Tan, F. E., Lomax, J., & Nosek, B. A. (2014). An open investigation of the reproducibility of cancer biology research. ELife, 3, e04333. https://doi.org/10.7554/eLife.04333

Errington, T. (2013). Reproducibility Project: Cancer Biology [OSF wiki]. https://osf.io/e81xl/wiki/home/

Glasziou, P., et al. (2014). Reducing waste from incomplete or unusable reports of biomedical research. The Lancet, 383(9913), 267–276. https://doi.org/10.1016/S0140-6736(13)62228-X

Ioannidis, J. P. A., Allison, D. B., et al. (2009). Repeatability of published microarray gene expression analyses. Nature Genetics, 41(2), 149–155. https://doi.org/10.1038/ng.295

Klein, R. A., et al. (2018). Many Labs 2: Investigating Variation in Replicability Across Samples and Settings. Advances in Methods and Practices in Psychological Science, 1(4), 443–490. https://doi.org/10.1177/2515245918810225

Klein, R. A., et al. (2014). Investigating Variation in Replicability. Social Psychology, 45(3), 142–152. https://doi.org/10.1027/1864-9335/a000178

Nosek, B. A., Aarts, A. A., Anderson, C. J., Anderson, J. E., Kappes, H. B., & the Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716

Prinz, F., Schlange, T., & Asadullah, K. (2011). Believe it or not: How much can we rely on published data on potential drug targets? Nature Reviews Drug Discovery, 10, 712.

Vilhuber, L. (2018). Reproducibility and replicability in economics. https://www.nap.edu/resource/25303/Reproducibility%20in%20Economics.pdf

A legacy of skepticism and universalism

Many of the practices associated with modern science emerged in the early days of the Royal Society of London for Improving Natural Knowledge, which was founded in 1660. Today, it is usually referred to simply as “the Royal Society”. When the Royal Society chose a coat of arms, they included the motto Nullius in verba.

Nullius in verba is usually taken to mean “Take nobody’s word for it”, which was a big departure from tradition. People previously had mostly been told to take certain things completely on faith, such as the proclamations of the clergy and even the writings of Aristotle. 

In the early 1600s, René Descartes had written a book urging people to be skeptical of what others claim, no matter who they are.

Rene Descartes. Image: public domain.

This caught on in France, even among the public — many people started referring to themselves as Cartesians. Meanwhile in Britain, the ideas of Francis Bacon were becoming influential. His skepticism was less radical than Descartes’, and his writings included many practical suggestions for how knowledge could be advanced.

Francis Bacon in 1616. Image: public domain.

Bacon’s mix of skepticism with optimism about advancing knowledge using observation led, in 1660, to the founding in London of “a Colledge for the Promoting of Physico-Mathematicall Experimentall Learning”. This became the Royal Society.

The combination of skepticism and the opening up of knowledge advancement to contemporary people, not just traditional authorities, set the stage for the success of modern science. When multiple skeptical researchers take a close look at the evidence behind a new claim and are unable to find major problems with the evidence, everyone can then be more confident in the claim. As the historian David Wootton has put it, “What marks out modern science is not the conduct of experiments”, but rather “the formation of a critical community capable of assessing discoveries and replicating results.”

Taking the disregard of traditional authority further, in the 20th century the sociologist Robert Merton suggested that scientists value universalism. By universalism, Merton meant that in science, claims are evaluated without regard to the sort of person providing the evidence. Evidence is evaluated by scientists, Merton wrote, based on “pre-established impersonal criteria”. 

Universalism provides a vision of science that is egalitarian, and universalism is endorsed by large majorities of today’s scientists. However, those who endorse it don’t always follow it in practice. Scientific organizations such as the Royal Society can be elitist. For example, sometimes the scholarly journals that societies publish treat reports by famous researchers with greater deference than those by other researchers.

Placing some trust in authorities (such as famous researchers) is almost unavoidable, because in life we have to make decisions about what to do even when we can’t be completely certain of the facts. In such situations, it can be appropriate to “trust” authorities, believing their proclamations. We don’t have the resources to assess all kinds of scientific evidence ourselves, so we have to look to those who seem to have a track record of making well-justified claims in a particular area. But when it comes to the development of new, cutting-edge knowledge, science thrives on the skepticism that drives the behavior of some researchers.

Together, the values of communalism (Merton’s term for scientists’ norm of openly sharing their findings), skepticism, and a mixture of universalism and elitism shaped the growth of scientific institutions, including the main way in which researchers officially communicated their findings: through academic journals.