The philosopher Felix Hausdorff, famous for his foundational philosophical work Das Chaos in kosmischer Auslese (Chaos in Cosmic Selection) in which he rejects metaphysics, also did mathematics as a “hobby”. In mathematics he is known for Hausdorff spaces, the Hausdorff paradox, the Hausdorff gap, etc. etc.
Hausdorff became a professor of mathematics in Leipzig, Germany, in 1901. This was shortly after he married Charlotte (Lotta) Goldschmidt in 1899. Charlotte was the granddaughter of Henriette Goldschmidt, who founded the first school for women in Germany and was a major figure in the feminist movement.
The Henriette Goldschmidt school in Leipzig (photo taken by me April 25, 2023).
The dean at the University of Leipzig who appointed Hausdorff wrote at the time “The faculty considers itself, however, duty bound to inform the Royal Ministry that the present proposal was not approved by all members in the meeting on the 2nd of November this year, but rather by a vote of 22 to 7. The minority who voted against Dr. Hausdorff did so because he is of the Jewish faith.” Just a year later, in 1902, the rabid antisemite Theodor Fritsch founded the antisemitic publishing house Hammer-Verlag in Leipzig, and published a German translation of the Protocols of the Elders of Zion, a fake document claiming to contain details of a Jewish plot to take over the world. Thus, in 1910, Hausdorff moved to Bonn. At the time he wrote “In Bonn, one has the feeling, even as a junior faculty member (non-Ordinarius) of being formally accepted, a sense I could never bring myself to feel in Leipzig”.
Those feelings were short-lived.
On the 25th of January, 1942, anticipating deportation to the Endenich camp in Bonn, which Hausdorff knew was a stop on the way to death camps, Hausdorff penned the following letter to his friend, the lawyer Hans Wollstein:
In translation it begins:
Dear friend Wollstein!
If you receive these lines, we (three) have solved the problem in a different manner — in the manner of which you have constantly tried to dissuade us. The feeling of security that you have predicted for us once we would overcome the difficulties of the move, is still eluding us; on the contrary, Endenich may not even be the end!
What has happened in recent months against the Jews evokes justified fear that they will not let us live to see a more bearable situation.
and ends with:
I am sorry that we cause you yet more effort beyond death, and I am convinced that you are doing what you can do (which perhaps is not very much). Forgive us our desertion! We wish you and all our friends to experience better times.
Your truly devoted
Felix Hausdorff
Felix Hausdorff z”l, his wife Charlotte (Lotta) Goldschmidt z”l, and Charlotte’s sister Edith Pappenheim z”l, committed suicide the next day on January 26, 1942. A Stolpersteine memorial commemorates them, in front of their house in Bonn (photo taken by me, April 20th, 2023). Hans Wollstein z”l was murdered at Auschwitz on September 28, 1944.
Among the millions of pages in the Epstein files, there is an inconspicuous, seemingly mundane short email, EFTA00896810, dated 14th September, 2010. It is a message from Jeffrey Epstein to Martin Nowak (at the time Professor of Mathematics and Biology at Harvard University) and Corina Tarnita (at the time a Junior Fellow at Harvard, now Professor of Ecology and Evolutionary Biology at Princeton University), suggesting strategy for how to defend a paper they had recently published with E. O. Wilson (at the time Professor of Entomology at Harvard University). The paper in question was “The evolution of eusociality” published in Nature, on the 26th of August, 2010:
I first became aware of the NTW (Nowak, Tarnita, and Wilson) paper a few years ago when working on the Price equation, a project recently published as “Analogies between the virial theorem and the Price equation”, Felce, Liorsdóttir and Pachter, Physical Review E, 2025. The Price equation relates the change in the average of a phenotype across individuals (or subpopulations) to selection and transmission:
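In standard notation, with $w_i$ the fitness of individual (or subpopulation) $i$, $z_i$ its phenotype value, $\bar{w}$ the mean fitness, and $\Delta$ denoting change across a generation:

```latex
\bar{w}\,\Delta\bar{z} \;=\; \operatorname{Cov}(w_i, z_i) \;+\; \operatorname{E}(w_i\,\Delta z_i)
```

The covariance term is the selection term, and the expectation term is the transmission term.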
One of the main points of the Felce et al. paper is that the Price equation is useful in much the same way as the virial theorem is useful (the latter was famously used by Fritz Zwicky to infer a discrepancy between mass and luminosity, namely “dark matter”). Yet I learned that Nowak insists that the Price equation is a tautology without explanatory power. In his book SuperCooperators, Nowak invokes a remark attributed to the late Dutch footballer Johan Cruyff that to win a soccer match, one must score one more goal than the opponent, suggesting that the Price equation is similarly true but devoid of content, in that it tells you nothing about how to win.
But the analogy misfires for two reasons. First, Cruyff’s statement is not a tautology but a definitional truth, holding by virtue of the rules of the game rather than logical form. Second, while the Price equation does not prescribe a winning strategy, it does describe the instantaneous direction in which play is unfolding, i.e., who is advancing, who is retreating, and where pressure is building on the field. Extending the football analogy, inclusive fitness, which the NTW paper trashes in much the same way that Nowak dismisses the Price equation, is a particular way of expressing the local direction of play in chosen coordinates, e.g. along teammate-aligned axes. Dismissing such information as vacuous confuses useful live commentary on the flow of the match with a playbook for scoring goals, and mistakes a description of motion for an instruction on how to engineer it.
The flawed Nowak analogies are not a minor matter. They are central in defending the legitimacy of his famous paper “Five rules for the evolution of cooperation“, published in Science, 2006. The five rules paper is Nowak’s most cited work (> 7,500 citations), and he has described it as characterizing all the cooperation scenarios in nature (“Why we help“, Nowak, 2012). Tellingly, Nowak mentions the Price equation only once in a parenthetical remark in the five rules paper, and it is conspicuously absent from his other major works on the topic. The NTW paper can be understood to double down on this: Nowak et al. insist that cooperation can only be understood in terms of mechanisms that may generate it, mechanisms that he described and studied in his evolution of cooperation papers.
Nowak’s five rules are all corollaries of the Price equation (specifically, they are obtained by linearizing fitness under weak selection, deriving an assortment coefficient, and then applying the Price equation). In other words, not only is the Price equation not vacuous and not a tautology, Nowak’s five rules are directly based on it. This isn’t a new insight; Steven Frank (and others) have understood this for a long time. My contribution here is perhaps just to write down the cooperation rules as corollaries of the Price equation in one place.
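To give the flavor of such a derivation, here is how Hamilton’s rule falls out of the selection term; this is a standard sketch (assuming weak selection and fitness linear in own and partner phenotype), not a quote from any of the papers above:

```latex
w_i = w_0 - c\,z_i + b\,z_i' \quad\Longrightarrow\quad
\operatorname{Cov}(w_i, z_i) = -c\operatorname{Var}(z) + b\operatorname{Cov}(z_i', z_i)
= \operatorname{Var}(z)\,(rb - c)
```

where $z_i'$ is the average phenotype of $i$’s interaction partners and $r = \operatorname{Cov}(z_i', z_i)/\operatorname{Var}(z)$ is the assortment (relatedness) coefficient. The selection term is positive, i.e. cooperation increases, exactly when $rb > c$.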
Nowak’s rules are all based on the selection term (the covariance) in the Price equation. There is also a transmission term, and it leads to other inequalities (this too was known; I have simply collated that observation together with the exposition of Nowak’s rules, presenting both as corollaries of the Price equation).
The Price equation, as usually described, is discrete, but it has a continuous analog (published by Price in 1972). In Felce et al., we show how the continuous version is the limit of the discrete version (subject to a change of coordinates from Wrightian fitness to Malthusian fitness). This leads to the following question: are there analogs of the five rules for continuous evolutionary games? This line of inquiry led me to a spectral existence theorem for cooperation, which can be seen as generalizing Hamilton’s rule from rb > c to a spectral condition.
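For reference, the continuous-time Price equation, with Malthusian fitness $m_i$ replacing Wrightian fitness $w_i$, reads:

```latex
\frac{d\bar{z}}{dt} \;=\; \operatorname{Cov}(m_i, z_i) \;+\; \operatorname{E}\!\left(\frac{dz_i}{dt}\right)
```

with the same selection/transmission decomposition as in the discrete case.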
Passing to the limit turns out to clarify what is going on, in much the same way that calculus is the right language for Newton’s laws, which is why he developed the subject. The mathematics is rather beautiful: discrete evolutionary games are replaced by an interaction matrix driving a replicator equation. The spectral rule emerges from a Rayleigh quotient, which I happened to be thinking about at the same time while working on a pair of papers on contrastive PCA with Maria Carilli and Kayla Jackson (see “The Rayleigh Quotient and Contrastive PCA Analysis I”, Carilli et al., 2025 and “The Rayleigh Quotient and Contrastive PCA Analysis II”, Jackson et al., 2026). I won’t explain the details here (they are in the paper), but the main (and only) figure in the paper provides some geometric intuition:
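To make the continuous setting concrete, here is a toy numerical sketch (my own illustration, not the construction in the paper): an interaction matrix B drives the replicator equation, and the Rayleigh quotient of the symmetrized matrix is the kind of quantity from which a spectral rule can emerge.

```python
import numpy as np

def replicator_step(x, B, dt=0.01):
    # Euler step of the replicator equation: dx_i/dt = x_i * ((Bx)_i - x·Bx)
    f = B @ x                   # fitness of each strategy
    x = x + dt * x * (f - x @ f)
    return x / x.sum()          # keep x on the probability simplex

def rayleigh_quotient(B, x):
    # Rayleigh quotient of the symmetrized interaction matrix at state x
    S = (B + B.T) / 2
    return (x @ S @ x) / (x @ x)

# Toy prisoner's-dilemma payoffs: strategy 0 cooperates, strategy 1 defects
B = np.array([[3.0, 0.0],
              [5.0, 1.0]])
x = np.array([0.5, 0.5])
for _ in range(1000):
    x = replicator_step(x, B)   # defection takes over: x approaches (0, 1)
```

In this toy game defection dominates and the dynamics drain cooperation; a spectral condition asks, roughly, along which directions in the simplex the quadratic form associated with B can grow.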
Returning to the NTW paper, it is notable that it was resoundingly criticized (there are many rebuttals; the Abbott et al. Matters Arising is a good start). My paper adds to this chorus of dissent, perhaps contributing an additional point: by denying the relevance of the Price equation in understanding the evolution of cooperation, Nowak et al. missed out on thinking about the continuous Price equation and its consequences. They therefore failed to identify spectral conditions for cooperation. In other words, they didn’t just miss the forest for the trees. They missed the trees.
It’s important to note that the mathematical content of these trees and forests is trivial by the standards of modern mathematics research. There is some introductory calculus, elements of linear algebra, and basic differential equations; not much more mathematics than is covered in Math 21B at Harvard University (a class taken mainly by freshmen and sophomores). In fact, working through Nowak’s five rules and deriving the spectral condition for the continuous Price equation took me a few evenings of work. That is not to say mathematical biology is trivial; there are plenty of interesting and deep ideas and results. However, the fact that Nowak’s five rules for cooperation, a topic he has worked on for more than thirty years and that was clearly a major factor in his tenure at Harvard, are direct consequences of the Price equation raises the question of how and why the mathematics department at Harvard hired Martin Nowak in the first place. Did the $6.5 million gift from Jeffrey Epstein that he brought with him play a role? Mathematicians take pride that the math is all that counts, especially at Harvard, yet it appears that in 2013, even after he was informed that Epstein was “a former donor with reputational issues” and that the gifts “might engender press or controversy”, the chair of the Harvard mathematics department still sought Epstein’s support.
E.O. Wilson didn’t even believe math was needed at all to do the work of NTW. In “Letters to a Young Scientist“, published in 2013, he argued that biologists don’t need to be literate in mathematics to be successful. Whether that is true for any biologist is debatable. What is certain is that Wilson’s mathematical illiteracy was a problem for him with regard to NTW, although the mathematics department at Harvard was apparently happy to overlook the mathematical shortcomings in Nowak’s work. Perhaps not unrelatedly, while Harvard’s mathematics department was eager to chase Epstein, it was averse to hiring women (spurred on by the opinions of the president of Harvard University, who was a friend of Jeffrey Epstein). Harvard first tenured a woman in mathematics in 2009. She left shortly thereafter, leaving Harvard without a single woman in the mathematics department. A woman was hired with tenure again only in 2018.
I believe that these issues are not tangential to the diminishing of the Price equation by Nowak et al. In his many (highly cited) papers and books on the evolution of cooperation, Nowak presented an illusion of profound insight. He has referred to his cooperation rules as a “third fundamental principle of evolution beside mutation and natural selection”. Since his rules are in fact corollaries of the Price equation, which describes change in phenotype in terms of selection and transmission, they are of course not an additional principle of evolution. Moreover, his rules of cooperation are not even canonical. Nevertheless, Nowak kept promising Epstein the keys to evolution (“let’s build [a new theory of biology] so grand and ambitious that its supremacy will never be challenges“). One shudders to think how this may have also played into Epstein’s fantasies of eugenics.
Martin Nowak, center, leans back in his chair during a meeting with Jeffrey Epstein seated next to him. Image via U.S. House Oversight Committee (source: Concord Bridge).
Professor Martin Nowak used to refer to the Program for Evolutionary Dynamics as Nowakia, which he described as “an academic paradise” (Nowak & Highfield, 2012: 119–120). More like Paradise Lost.
There are writings on the wall that, now that the silicon savior has arrived, a new testament is going to be written. Although there will always be a small group of “rigorous” old-style mathematicians (e.g. [JQ]) who will insist that the true religion is theirs, and that the computer is a false Messiah, they may be viewed by future mainstream mathematicians as a fringe sect of harmless eccentrics, like mathematical physicists are viewed by regular physicists today.
Zeilberger’s prophecy seems to be upon us. But instead of a silicon savior, what seems to be happening is more like a searing Highlander Quickening, whereby we are receiving the knowledge and power that others have obtained throughout their lives.
A Quickening took place sometime between the end of 2025, when Claude Opus 4.5 dropped, and February 5, 2026, when Codex 5.3 was released. On Tuesday February 3rd, I downloaded Claude Opus 4.5 for the first time. My first experiment was to try a port of Sleuth, which my former PhD student Harold Pimentel published in 2017. Sleuth was written in R, and I figured it would be useful to have it in Python, since it could then be adapted to single-cell RNA-seq and work seamlessly with the popular Python-based single-cell ecosystem. Within an hour I was able to port Sleuth to Python and replicate exactly the results in R. I was stunned by the ability of Claude, so I figured I should try something even more ambitious. During a discussion with my former student Sina Booeshaghi, he suggested I tackle edgeR, which consists of 14,808 lines of code across 136 files (117 in R and 19 in C).
On Thursday February 5th I started working on edgePython. This was not just a port for porting’s sake; I had three specific goals in mind:
1. I wanted an edgePython that could interact seamlessly with AnnData objects, so that differential analysis could proceed without exporting the data and reading it back in from R. Seamless is a key word. Along with an AnnData interface, I resolved that the port should stick to Python (possibly with Numba), and not incorporate C.
2. Python has become the de facto standard for single-cell genomics, with a huge ecosystem of methods and tools (to the extent that R users are converting Seurat objects to AnnData), so I wanted to use the edgeR port to develop a new single-cell differential analysis method. The approach for single-cell differential analysis in edgePython seemed obvious: port the single-cell NEBULA method, and then incorporate Empirical Bayes, which has been a hallmark feature of edgeR.
3. To be useful, the port would have to be comprehensive, covering most of edgeR’s functionality, and it would have to be similar in speed. The latter did not seem to be a trivial goal; I checked, and edgeR is 39% C code (so it is really edgeRC), and I wasn’t sure Python could match it.
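To give a sense of the kind of numerical routine such a port has to reproduce in pure Python, here is a simplified numpy sketch of TMM normalization (the trimmed mean of M-values of Robinson & Oshlack, 2010), one of edgeR’s core algorithms. This is an illustrative simplification of the published method, not the edgePython code (edgeR’s implementation includes additional refinements):

```python
import numpy as np

def tmm_factor(y, y_ref, trim_m=0.30, trim_a=0.05):
    """Simplified TMM scaling factor of sample y relative to reference y_ref."""
    n, n_ref = y.sum(), y_ref.sum()
    keep = (y > 0) & (y_ref > 0)            # genes observed in both samples
    p, p_ref = y[keep] / n, y_ref[keep] / n_ref
    m = np.log2(p / p_ref)                  # per-gene log fold changes (M)
    a = 0.5 * np.log2(p * p_ref)            # per-gene log abundances (A)
    # precision weights from inverse asymptotic (delta-method) variances
    w = 1.0 / ((n - y[keep]) / (n * y[keep])
               + (n_ref - y_ref[keep]) / (n_ref * y_ref[keep]))
    # doubly trim: drop extreme M values and extreme A values
    lo_m, hi_m = np.quantile(m, [trim_m, 1 - trim_m])
    lo_a, hi_a = np.quantile(a, [trim_a, 1 - trim_a])
    sel = (m >= lo_m) & (m <= hi_m) & (a >= lo_a) & (a <= hi_a)
    return 2 ** (np.sum(w[sel] * m[sel]) / np.sum(w[sel]))
```

Because M values are computed from library-size-normalized proportions, the factor is invariant to overall sequencing depth, which is exactly the point of the method.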
The sport of predicting dystopian vs. utopian futures of human civilization in light of recent advances in AI is a strange one, where games inevitably result in a loss for both sides. Thus, I won’t play, but I will offer some observations on the impact of AI on my corner of science right now:
A Python port of edgeR has been attempted twice before, to my knowledge. One effort did not succeed. The other, part of the inMoose project, did result in a partial port after years of work by multiple individuals. The difficulty in porting edgeR stems from a complex codebase: Of the 14,808 lines of code, 39% are written in C, and a lot of the code relates to complex numerical statistical algorithms. A Python port of DESeq2 was published in 2023. This port, while functional, was also incomplete. I performed my initial port with Claude Opus 4.5 and 4.6, and more recently worked on finishing off several pieces with Codex GPT-5.3. I find it remarkable that these tools were able to produce an essentially complete Python port in a week. Remarkably, the running time of the Python port is comparable to that of edgeR (in some cases the port is faster). The port does use Numba, but it does not rely on any C code. While I had to work closely with the Claude and Codex tools to make the port work, and some of my contribution was non-trivial (thanks to Sina Booeshaghi for some major assists along the way), it seems to me that much of my involvement could be automated in the future. The acceleration of computational biology is increasing. I would say that even the snap is positive.
A Python implementation of edgeR is valuable because of the single-cell genomics ecosystem that has been developed in Python, largely around the AnnData object, which is part of scverse. Now, with edgePython, differential analysis can be performed directly on AnnData objects without the need for expensive export / import, not to mention the hassle and burden of maintaining a full R/Bioconductor stack and its dependencies. For several of my own projects, edgePython has lowered the barriers to adoption of some of the edgeR algorithms.
I was able to simultaneously extend two different programs, NEBULA and edgeR, to enable single-cell differential analysis with Empirical Bayes. Of course, this extension could have been implemented in R, but the Python single-cell ecosystem makes this method much more likely to be used now that it’s part of edgePython. I find it remarkable that I was able to build on two programs in this way; while they are both open source, the “open” aspect of the codebases has, in many cases, until now been literally true but practically insufficient. To build on a tool such as edgeR, which is 16+ years in the making and consists of a complex codebase with thousands of lines of code, would previously have required years of study and effort to be able to confidently alter the codebase. That is why many bioinformatics tools have effectively “remained lab-bound”. Automated programming democratizes all such projects.
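The Empirical Bayes idea at the heart of the extension can be caricatured in a few lines: noisy per-gene dispersion estimates are shrunk toward a shared prior, with weights set by degrees of freedom. This is a sketch of the principle only, not edgeR’s actual weighted-likelihood algorithm, and the function and parameter names are mine:

```python
import numpy as np

def moderate_dispersions(disp, prior_disp, prior_df, resid_df):
    # Shrink noisy per-gene dispersions toward a shared prior value.
    # The weight on the per-gene estimate grows with its residual degrees
    # of freedom; the weight on the prior grows with prior_df.
    w = resid_df / (resid_df + prior_df)
    return w * disp + (1 - w) * prior_disp

disp = np.array([0.05, 0.40, 0.10])   # raw per-gene dispersion estimates
mod = moderate_dispersions(disp, prior_disp=0.15, prior_df=10.0, resid_df=2.0)
```

With few residual degrees of freedom the moderated values sit close to the prior; as replication increases, the per-gene estimates dominate.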
I have many ideas for further extensions of edgePython. Isoform-level quantification is essential for accurate differential analysis, even for gene-level differential analysis (see Pimentel et al., 2017), and while edgeR recently implemented an approach for differential transcript usage based on inferential uncertainty of isoform quantifications, an idea developed in sleuth (Baldoni et al., 2024), there is much work to be done on further methods development. I was also able to fix minutiae in the edgeR implementation, albeit details with serious implications. For example, in reviewing the edgeR code I discovered that the authors handicapped kallisto by not implementing import of kallisto HDF5 files, which is essential for utilizing kallisto bootstraps for isoform quantification. edgePython fixes this.
Thinking about isoforms and their role in differential analysis led me to the following hypothesis: a major challenge that will have to be addressed (by humans) in the near term is how to avoid a proliferation of suboptimal tools (and therefore suboptimal results derived from them). Frequently in computational biology the majority opinion is wrong (for an example see the use of UMAPs). Pushing back against the majority can be difficult for scientists, but it is possible. The empowerment of the majority with accelerants may serve to massively amplify flawed approaches, making it much harder for the minority opinion to prevail, even if it is correct.
Working with Claude and Codex has been exhilarating. Being able to code up ideas quickly and effectively opens up wide vistas for everyone. This is already resulting in a deluge of publications (see, e.g. NeurIPS 2025, which received more than 21,000 submissions). I don’t believe that current publication systems, whether for preprints, conference proceedings, or journals, can withstand the onslaught of work that is already in progress, let alone the amount of material that is to come. This is an opportunity to think of novel ways to distribute and communicate science. I think that machine readability, at least for part of it, will be one aspect of a potential solution (see Booeshaghi, Luebbert, and Pachter, 2026).
There is one barrier to exploring ideas with our newfound machine tools: cost. As Doron Zeilberger predicted, it will not be possible to try every idea, because even if costs are lowered, as long as they are nonzero we will face the problem that the total cost c + c + c + ⋯ diverges whenever c > 0. In other words, we will face not just the reality of what Zeilberger calls semi-rigorous mathematics, but more generally semi-rigorous science. This may require an adjustment in habits and expectations.
I said I wouldn’t play the prediction game but I will step onto the field just for a moment… while working on this project I thought of recent comments I’ve heard such as “Subject X will be the first to go”. This is folly. Nothing is going anywhere. Science is about to get better and it will proceed faster than at any time in history. But if someone is going to insist on looking up their integrals by hand in Abramowitz and Stegun, they may find that the scientific enterprise leaves them behind (although artisanal science may be a satisfying hobby for sure).
The Highlander Quickenings are not free. The knowledge and power gained by one immortal comes at a loss for another. Scientists should think long and hard about where the coding superpower they have just received comes from (tl;dr: the enormous corpus of computer code written by software engineers over the past 50+ years, much of it informed by hard work in the sciences). We should all strive to ensure that fruits from the labor of others are turned into meaningful profit for society and not for empty vanity.
Robinson MD, Smyth GK (2007). Moderated statistical tests for assessing differences in tag abundance. Bioinformatics, 23(21), 2881-2887. doi:10.1093/bioinformatics/btm453
Robinson MD, Smyth GK (2007). Small-sample estimation of negative binomial dispersion, with applications to SAGE data. Biostatistics, 9(2), 321-332. doi:10.1093/biostatistics/kxm030
Robinson MD, McCarthy DJ, Smyth GK (2010). edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics, 26(1), 139-140. doi:10.1093/bioinformatics/btp616
Robinson MD, Oshlack A (2010). A scaling normalization method for differential expression analysis of RNA-seq data. Genome Biology, 11(3), R25. doi:10.1186/gb-2010-11-3-r25
McCarthy DJ, Chen Y, Smyth GK (2012). Differential expression analysis of multifactor RNA-Seq experiments with respect to biological variation. Nucleic Acids Research, 40(10), 4288-4297. doi:10.1093/nar/gks042
Chen Y, Lun ATL, Smyth GK (2014). Differential expression analysis of complex RNA-seq experiments using edgeR. In Statistical Analysis of Next Generation Sequencing Data, Springer, 51-74. doi:10.1007/978-3-319-07212-8_3
Zhou X, Lindsay H, Robinson MD (2014). Robustly detecting differential expression in RNA sequencing data using observation weights. Nucleic Acids Research, 42(11), e91. doi:10.1093/nar/gku310
Dai Z, Sheridan JM, Gearing LJ, Moore DL, Su S, Wormald S, Wilcox S, O’Connor L, Dickins RA, Blewitt ME, Ritchie ME (2014). edgeR: a versatile tool for the analysis of shRNA-seq and CRISPR-Cas9 genetic screens. F1000Research, 3, 95. doi:10.12688/f1000research.3928.2
Lun ATL, Chen Y, Smyth GK (2016). It’s DE-licious: A recipe for differential expression analyses of RNA-seq experiments using quasi-likelihood methods in edgeR. In Statistical Genomics, Springer, 391-416. doi:10.1007/978-1-4939-3578-9_19
Chen Y, Lun ATL, Smyth GK (2016). From reads to genes to pathways: differential expression analysis of RNA-Seq experiments using Rsubread and the edgeR quasi-likelihood pipeline. F1000Research, 5, 1438. doi:10.12688/f1000research.8987.2
Chen Y, Pal B, Visvader JE, Smyth GK (2018). Differential methylation analysis of reduced representation bisulfite sequencing experiments using edgeR. F1000Research, 6, 2055. doi:10.12688/f1000research.13196.2
Baldoni PL, Chen Y, Hediyeh-zadeh S, Liao Y, Dong X, Ritchie ME, Shi W, Smyth GK (2024). Dividing out quantification uncertainty allows efficient assessment of differential transcript expression with edgeR. Nucleic Acids Research, 52(3), e13. doi:10.1093/nar/gkad1167
Chen Y, Chen L, Lun ATL, Baldoni PL, Smyth GK (2025). edgeR v4: powerful differential analysis of sequencing data with expanded functionality and improved support for small counts and larger datasets. Nucleic Acids Research, 53(2), gkaf018. doi:10.1093/nar/gkaf018
NEBULA Reference
He L, Davila-Velderrain J, Sumida TS, Hafler DA, Kellis M, Kulminski AM (2021). NEBULA is a fast negative binomial mixed model for differential or co-expression analysis of large-scale multi-subject single-cell data. Communications Biology, 4, 629. doi:10.1038/s42003-021-02146-6
Jim Watson made many contributions to science, education, public service, and especially Cold Spring Harbor Laboratory (CSHL).
In the realm of science, several of the contributions James Watson took credit for were not his (see below). In terms of education, his focus was on bringing a single Eton boy every year to Cold Spring Harbor Laboratory for a research experience during the boy’s gap year after high school (the boys would frequently stay at his house); Eton, an all-boys boarding school, is Britain’s most elite school (CSHL oral history, 1997). As for public service, Watson’s public record was not one of civic engagement or humanitarian contribution: his tenure as the first head of the Human Genome Project ended quickly in conflict (see below), and his public statements regarding genetics and race, gender, and intelligence were widely condemned (source: Amy Harmon, 2019).
As a scientist, his and Francis Crick’s determination of the structure of DNA, based on data from Rosalind Franklin, Maurice Wilkins and their colleagues at King’s College London, was a pivotal moment in the life sciences.
Franklin did not just provide data that enabled Crick and Watson to determine the structure of DNA. Yes, with her student Raymond Gosling she generated high-quality X-ray images of DNA, most famously Photo 51, which provided clear evidence that DNA forms a helical structure. But Franklin did much more and was an equal scientific contributor to the elucidation of DNA’s structure, whose experimental rigor and insights were central to solving the double helix. Franklin’s X-ray diffraction work distinguished the A and B forms of DNA, resolving confusion. Her measurements revealed that DNA’s unit cell was huge and had a C2 symmetry, implying two antiparallel sugar-phosphate strands. She confirmed the 34 Å helical repeat in the B form and identified the phosphate backbone’s exterior location. Though she did not derive complementary base pairing, her late-stage notes show that she recognized DNA could encode biological specificity through any sequence of bases, anticipating the idea of informational coding (Cobb and Comfort, Nature, 2023).
Watson, along with Crick and Wilkins were awarded the 1962 Nobel Prize in Physiology or Medicine. Watson also received the Presidential Medal of Freedom from President Gerald Ford and the National Medal of Science from President Bill Clinton, among many other awards and prizes.
It is true that Watson received these awards. Coincidentally, William Shockley was also awarded the Nobel Prize around the same time as Watson (physics, 1956), and he promoted racist eugenics, arguing that people of African ancestry posed a “dysgenic risk”, as well as advocating for sterilization. He used his Nobel prestige to advance his malicious and scientifically bankrupt ideas (source: Scott Rosenberg, 2017).
While at Cambridge, Watson also carried out pioneering research on the structure of small viruses. At Harvard, Watson’s laboratory demonstrated the existence of mRNA, in parallel with a group at Cambridge, UK, led by Sydney Brenner.
One of his colleagues at Harvard, E. O. Wilson, once called James Watson “the most unpleasant human being I have ever met” (source: Amanda Gefter, 2009).
His laboratory also discovered important bacterial proteins that control gene expression and contributed to understanding how mRNA is translated into proteins.
The discovery of important proteins that control gene expression in bacteria, notably the lac repressor, was made by François Jacob and Jacques Monod.
As an author, Watson wrote two books at Harvard that were and remain best sellers. The textbook Molecular Biology of the Gene, published in 1965 (7th edition, 2020), changed the nature of science textbooks, and its style was widely emulated.
In this textbook Watson got the central dogma wrong, presenting it in a profoundly misleading way. (source: Matthew Cobb, 2024).
The Double Helix (1968) was a sensation at the time of publication. Watson’s account of the events that led to the elucidation of the structure of DNA remains controversial, but it is still widely read.
Prior to the publication of The Double Helix, Francis Crick wrote that “If you publish your book now, in the teeth of my opposition, history will condemn you”. Watson published the book anyway (source: letter by Francis Crick, 1967).
As a public servant, Watson successfully guided the first years of the Human Genome Project, persuading scientists to take part and politicians to provide funding.
Watson resigned from the Human Genome Project due to conflicts of interest related to his holdings in biotechnology companies, and due to his insistence that cDNA should not be sequenced, which led to conflicts with NIH director Bernadine Healy, with whom he also clashed on the patenting of expressed sequence tags (source: Christopher Anderson, 1992). Fortunately, thanks to the vision of Bernadine Healy, who was the first female director of the NIH, cDNA technology was pursued and led to RNA-seq, which, along with DNA-seq, is today the most widely used genomics assay.
He created the Ethical, Legal and Social Issues (ELSI) program because of his concerns about misuse of the fruits of the project.
Watson reportedly configured ELSI so as to undermine its ability to interfere with the human genome project: “I wanted a group that would talk and talk and never get anything done and if they did do something, I wanted them to get it wrong. I wanted as its head Shirley Temple Black” (source: Lori Andrews, 1999; Dolan et al., 2022).
Watson’s association with Cold Spring Harbor Laboratory began in 1947 when he came as a graduate student with his supervisor, Salvador Luria. Luria, with Max Delbruck, was teaching the legendary Phage Course. Watson returned repeatedly to CSHL, most notably in 1953 when he gave the first public presentation of the DNA double helix at that year’s annual Symposium. He became a CSHL trustee in 1965.
James Watson did not credit Rosalind Franklin in his presentation of the DNA double helix; he did not even mention her in his Nobel lecture, although he did admit that people found him unbearable (source: Nobel Banquet speech, 1962).
CSHL was created in 1964 by the merger of two institutes that had existed in Cold Spring Harbor since 1890 and 1904, respectively. In 1968, Watson became the second director when he was 40 years old. John Cairns, the first director, had begun to revive the institute, but it was still little short of destitute when Watson took charge. He immediately showed his great skills in choosing important topics for research, selecting scientists, and raising funds.
On the matter of selecting scientists, Watson once remarked “Whenever you interview fat people, you feel bad, because you know you’re not going to hire them” (source: Tom Abate, 2000). On the matter of raising funds, it seems that James Watson’s network included Jeffrey Epstein, with whom he reportedly met in the two years before Epstein’s arrest in 2019 (source: Business Insider).
Also in 1968, Watson married Elizabeth (Liz) Lewis, and they have lived on the CSHL campus their entire lives together. Jim and Liz have two sons, Rufus and Duncan. As with the former Directors, they fostered close relationships with the local Cold Spring Harbor community.
In 1969, Watson focused research at CSHL on cancer, specifically on DNA viruses that cause cancer. The study of these viruses resulted in many fundamental discoveries of important biological processes, including the Nobel prize-winning discovery of RNA splicing. Watson was the first Director of CSHL’s National Cancer Institute-designated Cancer Center, a designation the laboratory retains today.
On the matter of cancer, James Watson delivered a lecture at UC Berkeley in 2000 in which he talked about an experiment to protect against skin cancer. He claimed that scientists at the University of Arizona, who injected male patients with an extract of melanin to test whether they could chemically darken the men’s skin as a protection against skin cancer, observed an unusual side effect: the men developed sustained and unprovoked erections (source: Tom Abate, 2000).
Watson was passionate about science education and promoting research through meetings and courses. Meetings began at CSHL in 1933 with the Symposium series, and the modern advanced courses started with the Phage course in 1945. Watson greatly expanded both programs, making CSHL the leading venue for learning the latest research in the life sciences. Publishing also increased, notably of laboratory manuals, epitomized by Molecular Cloning, and several journals began, led by Genes & Development and later Genome Research. He encouraged the creation of the DNA Learning Center, unique in providing hands-on genetic education for high-school students. There are now DNA Learning Centers throughout the world.
The DNA Learning Center page on Rosalind Franklin states “The X-ray crystallographic expert, hired for her skills, and known to be methodical. Don’t call her Rosy!” (source: DNA Learning Center)
Through a substantial gift to CSHL in 1973 by Charles Robertson, Watson started the Banbury Center on the Robertsons’ 54-acre estate in nearby Lloyd Harbor. Today, this center functions as an important “think tank” for advancing research and policies on many issues related to life and medical sciences.
The Banbury Center was founded as an old boys club. Meetings are invitation only, with invites by the old boys for other old boys. I attended a meeting in 2004 on functional genomics which consisted of 33 invitees of which 33 were men, despite the fact that many of the leading genomics scientists at the time were women. At the meeting I had dinner with James Watson, during which he took the opportunity to denigrate Rosalind Franklin and the Irish (source: Banbury Center, 2004).
Watson remained in leadership roles at CSHL until 2000, and then continued as a member of the faculty. However, his remarks on race and IQ in 2008 led the CSHL Board of Trustees to remove him from all administrative roles and to rescind his appointment as a CSHL Trustee. When he made similar statements in 2020, the board revoked his Emeritus status and severed all connections with him.
Watson made racist and sexist remarks not only in 2008 and 2020 but throughout his life (source: James Watson in his own words).
Watson’s extraordinary contributions to Cold Spring Harbor Laboratory during his long tenure transformed a small, but important laboratory on the North Shore of Long Island into one of the world’s leading research institutes.
Watson was the director of CSHL from 1968 to 1994, but there have been many other individuals who were key to establishing CSHL as a leading research institute: Barbara McClintock (who discovered transposons) was at Cold Spring Harbor from 1941 until her death in 1992 (source: Wikipedia). She was recruited by Milislav Demerec (director 1941–1960). Bruce Stillman (director since 1994) has been instrumental in establishing the CSHL graduate program and the Genome Research Center, and under his watch CSHL became a top-10 biomedical research center (source: Wikipedia).
The mathematician Carl Friedrich Gauss invented the heliotrope for long-distance surveying in 1821. Just a year later, he proved that least squares regression provides the best linear unbiased estimator (BLUE) for data with uncorrelated errors of mean zero and equal variance (no assumption of Gaussianity is needed). The heliotrope and the least squares method allowed Gauss to perform a geodetic survey of the Kingdom of Hanover by triangulation between 1821 and 1825, an unprecedented accomplishment at the time.
Gauss’ approach was scaled up massively by Friedrich Wilhelm Bessel, who in 1838 published an epic triangulation-based survey of East Prussia and parts of Russia in Gradmessung in Ostpreußen und ihre Verbindung mit Preußischen und Russischen Dreiecksketten (Degree Measurement in East Prussia and its Connection with Prussian and Russian Triangulation Chains). His work included understanding for the first time how to calculate the standard error of the mean (which led him to introduce Bessel’s correction), providing an important assessment of how errors in individual measurements during triangulation contributed to overall errors from the least squares procedure. The scale of his survey (~3x Hanover) was arguably also the first use of statistics for big data. Bessel may have been the first ̶m̶a̶c̶h̶i̶n̶e̶ ̶l̶e̶a̶r̶n̶i̶n̶g̶ AI disruptor in history. And he did it not from Silicon Valley but from Königsberg (now Kaliningrad).
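Bessel’s correction is the replacement of n by n − 1 in the denominator of the sample variance, which removes the downward bias that comes from measuring deviations around the sample mean rather than the true mean. A minimal simulation (Python/NumPy; the numbers are synthetic and chosen only for illustration) makes the effect visible:

```python
import numpy as np

rng = np.random.default_rng(1)
true_var, n, trials = 4.0, 5, 200_000

# many small samples from a distribution with known variance
samples = rng.normal(0.0, np.sqrt(true_var), size=(trials, n))

naive = samples.var(axis=1, ddof=0).mean()   # divide by n: biased low
bessel = samples.var(axis=1, ddof=1).mean()  # divide by n - 1: unbiased

# the standard error of the mean uses the Bessel-corrected standard deviation
sem = samples.std(axis=1, ddof=1) / np.sqrt(n)

print(naive, bessel)  # naive ≈ true_var * (n-1)/n = 3.2, bessel ≈ 4.0
```

The bias is worst for small samples like the n = 5 used here, which is exactly the regime of repeated physical measurements that Bessel was working in.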
Bessel’s work enabled the completion of the first cross-country rail line linking a Western European capital to an Eastern European capital. The rail line, between Berlin and Warsaw, was completed in 1848 and was a quantum leap beyond the first national long-distance railway, between Leipzig and Dresden in the Kingdom of Saxony, completed in 1839. At ~500km, Berlin–Warsaw was much longer than any previous international crossing. The distance was long enough that Bessel was able to use his triangulation to show that the earth is an oblate spheroid (the method dates to 1825), and in 1841 he established the ellipsoid semi-major and semi-minor axes a = 6377397.155 m, b = 6356078.962822 m. His accuracy was amazing; the modern WGS84 world geodetic system from 1984 is almost the same, with a = 6378137.0 m, b = 6356752.30 m. As of 2010 the Bessel ellipsoid was still the geodetic system of choice in Germany, Austria, and the Czech Republic.
The Berlin-Warsaw line of 1848 crossed the Prussian-Russian border near the Prosna river, west of the town of Łowicz, which is just under 100km west of Warsaw. Warsaw at the time was part of Congress Poland formed at the Congress of Vienna in 1815, although by the mid 19th century the territory was fully under Russian administration ruled directly by the Tsar. The border moved in 1919 as a result of the Treaty of Versailles after World War I, when a new Polish Republic was established. At that time the border crossing on the Berlin-Warsaw rail line shifted to the town of Zbąszyń (a medieval Polish name for the town, which the Germans had renamed to Bentschen when the town came under Prussian control in 1793).
Zbąszyń monument to Polish nationalism in 1918-1919 and World War II (source: photograph by me, taken September 13, 2025)
In the 19th century Zbąszyń was what we might today call a “multicultural town”. Out of a population of ~1,300 in 1833, there were 336 Jews (~25%), about 35% Germans, and approximately 40% Poles. In other words, there were more Poles than Germans, but Poles were a minority with respect to Jews and Germans combined. The Jewish population of Zbąszyń, whose presence dates back to at least 1437, when a Jew named Palto is known to have sued a nobleman for failure to repay a loan, peaked as a share of the town in the early 19th century. The reasons for the decline in Jewish population over the following century were manifold, but a major factor was antisemitism, including regular blood libels and pogroms in neighboring cities, of which a partial list is the “forgotten” Warsaw pogrom in 1805, pogroms in Gdańsk in 1819 and 1821, the Kalisz pogrom of 1878, the Warsaw pogrom of 1881, the Łódź pogrom of 1892, the Częstochowa pogrom in 1902, and the Białystok and Siedlce pogroms in 1906. Many Jews, including in Zbąszyń, fled west, or even abroad. By 1919, when Zbąszyń became part of the Polish Republic (it was deemed Polish in the Versailles treaty on the basis of a majority Polish population), the Jewish population had declined to under 10%, part of a mass migration from Eastern Europe totaling around 3 million Jews.
One couple that emigrated was Zindel Grynszpan and Rivka Grynszpan, who lived in Dmenin near Radomsko, just south of Łódź, and moved in 1911 to the city of Hanover in the Prussian Province of Hanover. The Grynszpans started a family in Hanover and eventually had six children, although only three survived into adulthood. The youngest was a boy named Herschel Feibel, born in 1921. The Grynszpans were never German citizens; after the Polish Republic was established in 1919 they were recognized as Polish citizens, since they had been born in what was deemed in 1919 to be Polish territory (although in the 19th century Radomsko was part of the Russian empire). Herschel was born in Hanover, but due to the jus sanguinis principle of the German Citizenship Law which came into effect in 1914 (citizenship by descent, not birthplace), he was a Polish, not German, citizen.
On the 28th of October, 1938, the members of the Grynszpan family, along with thousands of other Polish Jews living in Germany, were deported via train to Zbąszyń, the town that happened to be the border town on the Berlin-Warsaw line.
The conditions of this deportation were horrific. Deportation notices technically gave 24 hours to leave but many individuals were deported minutes after receiving a knock on the door, and were not allowed to bring much more than the clothes on their back. How this came about is a sad and sordid story. On March 31, 1938, a Polish law came into effect that revoked the citizenship of Poles who had lived abroad for more than five years. This law was enacted specifically to prevent Jewish Poles from returning to Poland after the Anschluss, i.e. the German annexation of Austria on March 12, 1938. On the 9th of October Polish authorities added a regulation that required passports issued outside of Poland to receive a special consular stamp in order to be valid. In other words, Polish Jews were being stripped of their citizenship. This served as a pretext for German authorities to kick Polish Ostjuden (eastern Jews) out of Germany, and in what has come to be termed “Polenaktion”, they proceeded to dump Polish Jews, many of whom had just been rendered stateless, at Zbąszyń. Below are two first-hand accounts of what the deportation and arrival in Zbąszyń was like:
Your parents were Ost Juden [East European Jews]. Was your family involved in the expulsion to Zbąszyń?
I am so surprised that nobody mentions this, which happened on the 28th of October, 1938, ten days before the pogrom. In our Jewish school the boys were praying in the morning. The girls didn’t have to, but they had to prepare breakfast for the boys to eat after praying. It was the turn of my friend and I to make the tea and we had to be at school at 7:00 instead of 8:00. I was already dressed and ready to leave the house when I heard knocking at our door. When I opened the door, two tall policemen were standing at the door. They asked, “Where are your parents?” I told them they were asleep. “Wake them up, you are going to Poland.” I answered, “What? What do I have to do with Poland? I was born in Germany.” He said, “Take me to the bedroom of your parents.” They both went into their bedroom and put on the light and said, “Get up. You are going to Poland.” My mother thought it was a bad dream. They said, “Don’t ask questions, you are going to Poland.”
My father thought there was a problem with his income tax returns. My mother told me to wake up my 16-year-old brother. My parents asked why they were going to Poland – could they take something with them? The policemen said, “You are going to such a cultural land.” (When Germans talked about Poland they said “dirty Poles”). “No,” he said, “don’t take anything with you.” My father phoned his brother and asked him to take the keys to our apartment. He asked my father, “What have you done?” My father answered, “Nothing.” My aunt came and took the keys but when she got home, the police were in her house, and they really were sent immediately to Poland.
I will never forget my neighbor. She was a widow. She was wearing her nightgown, had hastily put on her overcoat and was carrying a little handbag. When I asked her why she wasn’t dressed, she told me the police didn’t give her time.
We were taken and crammed in to the gymnastics hall of the Jewish school. We saw all the people we knew from the neighborhood who were of Polish origin. My father had very high blood pressure and he couldn’t breathe. He called a doctor who said, “This man has to go hospital, he can’t be transported.” My brother said, “We will never see you again. We will be in Poland and you will be in Germany.” Every half hour a bus came to take the people to the railway station. My mother didn’t want to push, so we took the next bus and as we got in people were pushed in with us. We were all standing and that way we arrived at the Leipzig railway station. On the way, one woman became crazy. They took us to a siding where they take animals, horses and cows. There were soldiers with bayonets standing every ten meters to make sure we didn’t run away. So you see, at 7:00 in the morning I was a student, and at 5:00, I was a criminal. It was terrible (source: Yad Vashem interview with Miriam Ron)
My dear ones!
You have probably already heard of my fate from Cilli. On October 27 of this year, on a Thursday evening at 9 o’clock, two men came from the Crime Police, demanded my passport, and then placed a deportation document before me to sign and ordered me to accompany them immediately. Cilli and Bernd were already in bed. I had just finished my work and was sitting down to eat, but had to get dressed immediately and go with them. I was so upset I could scarcely speak a word. In all my life I will never forget this moment. I was then immediately locked up in the Castle prison like a criminal. It was a bad night for me. On Friday at 4 o’clock in the afternoon we were taken to the main station under strict guard by Police and SS. Everybody was given two loaves of bread and margarine and was then loaded on the freight cars. It was a cruel picture. Weeping women and children, heart-breaking scenes. We were then taken to the border in sealed cars and under the strictest police guard. When we reached the border at 5 o’clock on Saturday afternoon we were put across. A new terrible scene was revealed here. We spent three days on the platform and in the waiting rooms, 8,000 people. Women and children fainted, went mad, people died, faces as yellow as wax. It was like a cemetery full of dead people. I was also among those who fainted. There was nothing to eat except the dry prison bread, without anything to drink. I never slept at all, for two nights on the platform and one in the waiting room, where I collapsed. There was no room even to stand. The air was pestilential. Women and children were half dead. On the fourth day help at last arrived. Doctors, nurses with medicine, butter and bread from the Jewish Committee in Warsaw. Then we were taken to barracks (military stables) where there was straw on the floor on which we could lie down….
H.J. Fliedner, Die Judenverfolgung in Mannheim 1933-1945 (“The Persecution of the Jews in Mannheim 1933-1945”), II, Stuttgart, 1971, pp. 72-73 (source: Yad Vashem).
In total, around 17,000 Polish Jews living in Germany were deported and left stateless on the Polish border; about 8,000 of them arrived in Zbąszyń. Herschel Grynszpan was not with the rest of his family when they arrived in Zbąszyń, as he had left for Paris in 1936. The story of how Herschel arrived in Paris is briefly as follows: his family had intended for him to emigrate to British Mandate Palestine, but he was refused entry by the British for being too young. His parents therefore decided that he ought to emigrate to Paris instead, where they expected he could find refuge with an uncle and aunt (he was 14 in 1935). He eventually left for Paris, though his entry into France was illegal, since he had no means of financial support and by the mid-1930s it was illegal for Jews to take money out of Germany.
On the 3rd of November, 1938, just five days after the arrival of his family in Zbąszyń, Herschel received a postcard from his sister Berta detailing their plight. While the Germans had discarded thousands of Jews in Zbąszyń, Poland was unwilling to accept them into the country. The Jews at the border were therefore effectively stateless and unwanted, squeezed into a border town where they were dumped in horse stables, a flour mill, and military barracks, or left to sleep outside in fields. Emanuel Ringelblum, a social worker who went to Zbąszyń to help, wrote the following on December 6, 1938 (full letter here):
“Jews were humiliated to the level of lepers, to citizens of the third class, and as a result we are all visited by terrible tragedy. Zbąszyń was a heavy moral blow against the Jewish population of Poland. And it is for this reason that all the threads lead from the Jewish masses to Zbąszyń and to the Jews who suffer there.”
You will surely have heard of our great misfortune. Let me describe what happened. On Thursday evening, rumours were circulating that all Polish Jews were to be expelled from the city. Even so, we found them difficult to believe. On Thursday evening at 9, a policeman came to us and told us that we should go to the police station with our passports. All together, as we were, we went to the police station accompanied by the policeman. We found almost our entire district gathered there. A police car immediately took us to the city hall. Everyone was taken there. No-one told us what was going on. However, we could see what they had in mind.
Each one of us was handed an expulsion order. We were told we had to leave Germany before the 29th. We were no longer allowed to return home. I begged them to allow me to go home to at least collect a few things. I then left for home, accompanied by a policeman, and packed the most important items of clothing in a suitcase. That’s all that I was able to save.
We don’t have a single penny on us. […] I’ll tell you more next time
Love and kisses from us all
Berta
Zbąszyń, 2nd barracks, Grynszpan
Jews being housed in horse stables in Zbąszyń (photo by Roman Vishniac, source: ICP)
The letter left Herschel distraught and desperate, and his first instinct was to send all his savings to his family. He ended up arguing with his uncle over this plan, and he was persuaded not to do it, due to the low likelihood that the money would make it to his family. Just two days later, on the 6th of November, Herschel departed his uncle’s house announcing he would not return, slept in a hotel, and the next morning, at 9:30am on November 7th, 1938, managed to enter the German embassy in Paris by pretending to have to deliver an important message. Upon entering the office of diplomat Ernst vom Rath, he shot him. He had bought the gun that morning on his way to the embassy.
Photo of Herschel Grynszpan, age 17, taken in Paris, France on November 7, 1938, after his arrest for shooting Ernst vom Rath (source: United States Holocaust Memorial Museum)
Two days later, on November 9, 1938, vom Rath died of his wounds. At his funeral he was declared a “blood witness” (Blutzeuge), i.e. a martyr who shed blood for the Nazi cause. In his funeral oration, Joachim von Ribbentrop declared “We understand the challenge, and we accept it.”
The same evening that vom Rath died, the 9th of November, 1938, Joseph Goebbels gave a speech inciting violence against Jews and instructing party officials not to restrict anti-Jewish riots. Later that night, Reinhard Heydrich sent a teletype from Berlin to Gestapo and police offices instructing police not to interfere with demonstrators acting against Jews, telling them to target only Jewish businesses and synagogues, ordering the seizure of Jewish property, and instructing fire brigades to let synagogues burn.
That night and the following day, more than 1,200 synagogues and prayer halls were burned or otherwise destroyed across Germany, and two hundred or so more in Austria. Jewish cemeteries and schools were desecrated. Thousands of Jewish businesses were looted and sacked. Around 30,000 Jewish men were sent to concentration camps, and 91 Jews were murdered. The next day Goebbels wrote in his diary “This is one dead man who is costing the Jews dear. Our darling Jews will think twice in the future before simply gunning down German diplomats.”
Below is an example of what remains now of one of the destroyed synagogues (the one in Eisenach, Germany, shown smoldering after Kristallnacht above). This site and others like it are marked on Google Maps these days as “ehemalige Synagoge” (former synagogue).
The synagogue in Eisenach, Germany prior to its burning and destruction on the night of November 9, 1938 (left, source: Eisenach city archives) and the site on April 22, 2023 (right, photo taken by me).
In Zbąszyń, the 1851 synagogue (see photos above) is now gone and in its stead there is an apartment building:
The site of the destroyed synagogue in Zbąszyń where an apartment building stands today (source: photograph taken by me, September 13, 2025)
The Jewish cemetery in Zbąszyń is today a patch of grass. It was desecrated in 1939, although some of the grave remnants were still there at the end of World War II. Those remains were razed to the ground and liquidated completely in the 1970s. Now, a small memorial stone from 1992 commemorates what the lawn once was. It asks that one honor the site. I visited the place last week on September 13, 2025 and found it to be littered with broken glass bottles and beer cans. While cleaning up the trash I wondered about the futility of the act.
The grave of Louis Kwilecki in the Jewish cemetery in Zbąszyń photographed in 1911 (left, photo by C. Sikorski) and the Jewish cemetery today (right, photograph by me taken on September 13, 2025).
The Grzybowski brothers were Jews who owned a flour mill in Zbąszyń that they used to house some of the Polenaktion Jews. Two Stolpersteine mark the last place they lived in the town. The Polish inscription underneath Rafał Grzybowski’s name translates to “helped the deported during the Polenaktion, deported to the Kutno-Konstancja camp, murdered”.
Stolpersteine commemorating Jakub and Rafał Grzybowski, Jewish brothers who owned a flour mill (left) in front of their last place of residence (right). Both photographs taken by me on September 14, 2025
In 1939, several months after their internment, some of the Jews in Zbąszyń started to be allowed to leave farther east into Poland. The vast majority of them, like Rafał Grzybowski, were almost certainly murdered. Kristallnacht was a prelude to the holocaust, which was a genocide in many acts with many actors. The Nazis found eager partners among the Poles, as evident already in the Białystok and Jedwabne pogroms of 1941 (this history is being erased today). 90% of the Jews in Poland were murdered in the holocaust, amounting to 3 million souls.
The Nazis relied on the high-precision geodetic surveys they had throughout the war. When US Army Major Floyd Hough entered Aachen, the first German city to fall to the Allies, on October 21st, 1944, he found a treasure trove of German maps bundled for evacuation, evidently left behind by German soldiers as they undertook a hasty retreat. The surveys were immediately useful to the Allies, helping artillery units on the front improve their targeting.
But it was the Red Army that liberated Zbąszyń during the Vistula-Oder offensive in January 1945. By the time they arrived there was no longer a Jewish community in Zbąszyń and there has not been one since. More than five centuries of a Jewish community erased by the German-Polish antisemitic vise that squeezed its Jews to death.
This building served as a temporary house of prayer (a bóżnica) for Polenaktion Jews prior to their expulsion from Zbąszyń in 1939. The circular window at the top, a common motif in European synagogues, is still visible. The building is now a pizzeria (source: the photograph was taken by me on September 13, 2025)
In the Nature paper “Spatial transcriptomics reveal neuron–astrocyte synergy in long-term memory”, published on March 14th, 2024, authors Sun et al. claimed to identify cell-type-specific transcriptional signatures of memory in mouse basolateral amygdala. These claims were refuted in a “Matters Arising” reply published on June 4th, 2025 by Eran Mukamel and Zhaoxia Yu, titled “False positives in study of memory-related gene expression”, in which Mukamel and Yu point out numerous errors in statistical methodology by Sun et al. The way that Nature handles post-publication critiques of papers is to allow the authors of the critiqued paper to reply, and that reply is published contemporaneously with the Matters Arising. Thus, along with Mukamel and Yu’s Matters Arising, on June 4th, 2025 Nature also published “Reply to: False positives in the study of memory-related gene expression” by Wenfei Sun, Zhihui Liu, Xian Jiang, Michelle B. Chen, Hua Dong, Jonathan Liu, Thomas C. Südhof and Stephen R. Quake, in which Sun et al. defend the methodology in their paper. The Sun et al. “Reply to:” paper is rife with false claims. In only two pages containing 2,235 words, the authors state 11 untruths, which are quoted below in red (and one truth, which is quoted below in purple), each followed by an explanation and reply:
“Mukamel and Yu unfortunately misunderstood our methodology both in the details of the statistical analysis and in the purpose of the experiments.” An examination of Mukamel and Yu’s Matters Arising shows that they have understood the methodology and experimental details of Sun et al. perfectly well. More on this below.
“Moreover, they have taken a narrow and extreme position regarding the use of statistics that does not reflect established expert opinion in the field.” This is not true. It is Sun et al. who distort what constitutes “expert opinion in the field”. Furthermore, Mukamel and Yu’s position is neither narrow nor extreme; it is coherent and correct, while Sun et al.’s claims are incoherent and wrong. More on this below.
“Although corrections such as Bonferroni or Benjamini–Hochberg can increase confidence in results by reducing the likelihood of mistakenly identifying a result as significant, they also risk overlooking genuine effects by making the criteria for significance overly stringent. Thus, the choice to apply such corrections depends on the specific context.” This statement conflates two different multiple testing corrections. First, the Bonferroni correction controls the family-wise error rate (FWER). In the genomics context, when conducting, say, 20,000 tests, Bonferroni replaces the “nominal” p=0.05 threshold with 0.0000025. At this much lower threshold, the probability of even a single false positive among the 20,000 tests is 0.05 (under the null hypothesis), which provides a strong guarantee against type 1 error (false positives), but can indeed be viewed as overly stringent, i.e. type 2 error (false negatives) may be high. However, the Benjamini-Hochberg procedure is designed to control not the FWER, but the false discovery rate (FDR). With Benjamini-Hochberg, using a 0.05 significance level will result in a set of predictions of which (in expectation) only 5% are false positives. In other words, with Benjamini-Hochberg there will be a much higher probability of at least one false positive: in fact there will typically be many (~5% of the results), but this allows for many fewer false negatives. To illustrate the vast differences between the two approaches, consider a simple experiment where the goal is to detect, from 85 samples drawn from each of two Gaussians, whether their means differ. A total of 350 tests are run: for 290 of them, the Gaussians are the same (mean = 1, variance = 1), but for 60 they are not (mean = 1, variance = 1 for one set of 85 numbers called x, and mean = 1.35, variance = 1 for another set of 85 numbers called y). In other words, the data looks as follows:
The goal, using the frequentist hypothesis testing framework, is to discover the blue cases, i.e. to figure out when to reject the null hypothesis that the data was generated from Gaussian distributions with the same means. In this setting, we can compute 350 “nominal” p-values, i.e. raw p-values, and select as “significant” all cases where p < 0.05. That strategy results in the null hypothesis being rejected 53 times. Of the 53 “significant” results, 34 are true positives, i.e. the null was rejected and indeed the numbers in y were generated from the Gaussian with mean = 1.35, not mean = 1. However, there are 19 false positive results, yielding a high false discovery rate of 0.36 (= 19/53). More than a third of the predictions are wrong; moreover, many true cases are missed (26 out of the 60, or 43%). In this case, the Bonferroni correction yields 6 predictions, among which there are no false positives, for a false discovery rate of 0. This is by design, because the probability of even a single false positive should be <5%, and therefore the expected number of false positives is 0.3. Most of the time, Bonferroni, in this setting, yields 0 false positives. Of course, the false negative rate is high: with only 6 predictions (all correct), 54 true cases are missed (90%). Bonferroni is indeed, arguably, overly stringent. But the Benjamini-Hochberg correction yields 20 cases where significance is reached, and it does so while controlling the FDR at 5%, with a single false positive. This illustrates that lumping Bonferroni together with Benjamini-Hochberg as “such corrections” is a false equivalence. Moreover, this example shows that with only 350 p-values, a multiple testing correction is already essential to avoid an extremely high false positive rate. In fact, with 3,500 tests the false discovery rate using nominal p-values is 85%. Clearly, in the context of genomics, control of FDR is essential.
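The simulation above is easy to reproduce. Below is a minimal sketch in Python (NumPy/SciPy; the function and variable names are mine, and exact counts will vary with the random seed, so the numbers quoted above should be read as one realization):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, n_null, n_alt, alpha = 85, 290, 60, 0.05

# 290 null tests (both samples have mean 1) and 60 alternative tests (second mean 1.35)
is_alt = np.arange(n_null + n_alt) >= n_null
pvals = np.array([
    stats.ttest_ind(rng.normal(1.0, 1.0, n),
                    rng.normal(1.35 if alt else 1.0, 1.0, n)).pvalue
    for alt in is_alt
])

def benjamini_hochberg(p, alpha):
    # reject the k smallest p-values, where k is the largest i with p_(i) <= alpha * i / m
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = int(np.nonzero(below)[0].max()) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

def summarize(rejected):
    tp = int((rejected & is_alt).sum())
    fp = int((rejected & ~is_alt).sum())
    return tp, fp, fp / max(tp + fp, 1)  # true positives, false positives, FDR

nominal = pvals < alpha                  # no correction
bonferroni = pvals < alpha / len(pvals)  # controls FWER
bh = benjamini_hochberg(pvals, alpha)    # controls FDR

for name, rej in [("nominal", nominal), ("Bonferroni", bonferroni), ("BH", bh)]:
    print(name, summarize(rej))
```

The Benjamini-Hochberg procedure is simple enough to implement directly, as above: sort the p-values, compare the i-th smallest to α·i/m, and reject everything up to the largest i that passes.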
“In studies such as ours, that seek to identify new hypotheses, it is accepted that a more relaxed approach is appropriate3,4.” This is not true. It is standard practice to perform multiple-testing corrections in genomics. The two references Sun et al. have provided (3,4) are papers from 1990, written prior to the genomics era and five years before the Benjamini-Hochberg correction was published. In other words, Rothman, 1990 (reference 3) and Saville, 1990 (reference 4) were writing about the extremely conservative Bonferroni correction (published in 1935) that controls the FWER and not the FDR. And they were considering the need, or lack thereof, for corrections in biological experiments with a handful of measurements. Today, there is no question that multiple testing corrections must be applied when analyzing genome-wide single-cell RNA-seq data: “P values obtained with DGE tests over conditions must be corrected for multiple testing” (Heumos et al., 2023). Therefore, it is not surprising that the senior authors of Sun et al. have frequently used multiple testing correction in their papers. In fact, Südhof and Quake published a single-cell RNA-seq analysis together in 2016 where they used what they now call the “overly stringent” Bonferroni correction (Gocke et al., 2016). In their most recent paper (Liu et al., 2024), which was published right after Sun et al., 2024, Südhof and Quake use the Benjamini-Hochberg correction for their single-cell RNA-seq analysis (Figure 5, Extended Data Figure 8) and their MERFISH analysis (Figure 4, Extended Data Figures 4 & 5, albeit in one case they refer to it as “Benjamini-Hochberg Methodor [sic]”). Dozens of other Südhof and Quake genomics papers use the Benjamini-Hochberg correction when testing for differential expression in single-cell RNA-seq experiments, indicating that either Südhof and Quake do not accept that “a more relaxed approach is appropriate” or they haven’t read the Methods sections of their own papers.
“In reanalysing the statistical interpretation of our data, Mukamel and Yu1 assumed that we investigated engram genes for long-term memory by performing statistical analyses on 3,350 genes. However, we applied our statistical analyses on 56 genes that emerged from systematic unbiased filtering using multiple criteria”. The filtering criteria used by Sun et al. were not unbiased. On the contrary, Sun et al. committed the statistical error of double dipping by filtering on log-fold change prior to performing hypothesis testing. Specifically, they applied a log-fold change filter, requiring genes to have a log-fold change of at least 1.75, a filter that Sun et al. refer to as selecting for genes with “biologically meaningful regulation”. To illustrate the effects of such a selection procedure, we return to the example of the Gaussians above. When a fold-change threshold of 1.3 is first applied prior to testing, the Benjamini-Hochberg corrected results yield 44 discoveries, of which 12 are false positives, for a false discovery rate of 0.27. In other words, selection prior to testing, i.e. double dipping, leads to a more than 5-fold increase in the number of false positives. This is no surprise. Without selection, p-values are uniformly distributed under the null, with an excess of small p-values due to the alternative hypotheses:
However, after selection, the p-values are biased:
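The selection bias is easy to reproduce in simulation. Below is a minimal sketch (not the exact simulation from the accompanying Colab notebook, and with hypothetical parameter choices): generate purely null two-group Gaussian data, then condition on an observed fold-change filter and watch the null p-value distribution become non-uniform.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n = 3000, 5  # every gene is null: both groups share the same distribution

a = rng.normal(0.0, 1.0, size=(n_genes, n))
b = rng.normal(0.0, 1.0, size=(n_genes, n))

pvals = stats.ttest_ind(a, b, axis=1).pvalue
diff = b.mean(axis=1) - a.mean(axis=1)  # observed "fold change" (log scale)

# without selection, null p-values are uniform: ~5% fall below 0.05
frac_small_all = np.mean(pvals < 0.05)

# double dipping: filter on the observed effect size first, then inspect p-values
selected = np.abs(diff) > 0.8
frac_small_sel = np.mean(pvals[selected] < 0.05)

print(frac_small_all, frac_small_sel)
```

Because large observed differences and small p-values are correlated under the null, the fraction of selected genes with p &lt; 0.05 is several times larger than 5%, which is exactly the bias Kriegeskorte et al. warned about.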
This issue is well known. The abstract of the classic Kriegeskorte et al., 2009 paper on double dipping, or circular analysis, explained it well on the pages of Nature Neuroscience more than 15 years ago: “In particular, ‘double dipping’, the use of the same dataset for selection and selective analysis, will give distorted descriptive statistics and invalid statistical inference whenever the results statistics are not inherently independent of the selection criteria under the null hypothesis.”
“Third, to ensure biological significance, we further restricted the analysis to DEGs that were expressed in at least one-quarter of the cells and with a biologically meaningful regulation (for example, a fold change of at least 1.75). These procedures are not “post hoc criteria”, as fold change and memory-specific filtering are not an adjustment or reinterpretation of prior statistical analyses.” Yes, these procedures are post-hoc criteria. Applying a log-fold change filter of at least 1.75 as Sun et al. did after seeing the data prior to performing multiple-testing correction constitutes application of a post-hoc filtering criterion. The paragraph by Sun et al. should state: “Third, to ensure statistical significance, we further restricted the analysis to DEGs …”
“This approach resulted in a list of 56 putative DEGs in Gpr88 engram neurons (Fig. 2e of our Article2) that were evaluated statistically with the Mann–Whitney–Wilcoxon test, a method that is well-suited for differential gene expression analysis in single-cell RNA-sequencing data5. This non-parametric method is robust to outliers and is effective with small sample sizes. It resulted in 32 DEGs that were predicted to be statistically significant. This statistical approach has been adopted in many papers6–10.” The “statistical approach” described here, namely filtering the list of genes to be tested down to 56 prior to performing statistical testing, and then not applying multiple testing correction, is not an approach that has been adopted “in many papers”. I reviewed references 6–10 and none of them adopted this approach: Wang, B. et al., 2023 (reference 6) did not apply post-hoc filtering and also applied the Bonferroni correction. Wang, F. et al., 2023 (reference 7) used the Seurat FindAllMarkers function, which performs Bonferroni correction. In the case of Yeo et al., 2022 (reference 8) the Methods description is poor, so it is unclear if multiple testing correction was performed for selecting markers (although it was performed for their GO analysis), but it is clear that there was no selection prior to testing (effect sizes were examined after testing). Ma et al., 2020 (reference 9) presents a method called iDEA for joint DE and GSE analysis and has no relevance to the claim of Sun et al. McFarland et al., 2020 (reference 10) performed multiple testing correction using the Benjamini-Hochberg method, and did not select based on fold-change prior to testing. The only relevance of these papers to Sun et al. is that they utilized the Wilcoxon test at some point, but it is not true that “[The Sun et al.] statistical approach has been adopted in many papers.”
“Given the application of 56 statistical tests, we calculated that approximately 5% (or 2.8 genes) could be false positives.” The statement, as written, does not even make sense. The number of false positives resulting from a hypothesis test will depend on the significance thresholds applied, the test used, and on the number of instances derived from the null vs. the alternative. Sun et al. seem to have misunderstood that while it’s true that in a hypothesis testing framework, under the null hypothesis the expected fraction of p-values less than some threshold X will be X, the nature of type I (false positive) error in a particular test is not just “5%”. Moreover, Sun et al. performed post-hoc filtering (see above), so Sun et al. were not operating in the standard hypothesis testing framework.
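To see why “5% of 56 tests” is not a meaningful false-positive count: at threshold 0.05, the expected number of false positives is 0.05 times the number of *true nulls*, which is unknown, not 0.05 times the number of tests. A quick simulation sketch with hypothetical parameters (not the Sun et al. data) makes the point:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 5  # samples per group

def count_false_positives(n_null, n_alt, shift=2.0, alpha=0.05):
    """Count rejections among the TRUE NULL tests for a given null/alternative mix."""
    a = rng.normal(0, 1, size=(n_null + n_alt, n))
    b = rng.normal(0, 1, size=(n_null + n_alt, n))
    b[n_null:] += shift  # the alternative genes really are shifted
    p = stats.ttest_ind(a, b, axis=1).pvalue
    return int(np.sum(p[:n_null] < alpha))  # false positives come only from nulls

# if every gene is null, ~5% of tests are false positives;
# if no gene is null, there are zero false positives no matter how many tests are run
```

In other words, “2.8 false positives” would only be the expectation in the extreme case that all 56 genes are null, which is not something Sun et al. can assume.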
“Mukamel and Yu1 argue that the false discovery rate (FDR) is a standard practice and should be applied to all genomics data. However, this perspective has been debated3,4,…” Again, as pointed out above, references 3 and 4 were written in 1990, prior to the genomics era and the availability of genomics data. The first organism to have its genome sequenced was the bacterium H. influenzae, in 1995. Mukamel and Yu are correct that control of the FDR is standard practice in hypothesis testing in the context of genomics.
“Nonetheless, in pursuing the suggestion by Mukamel and Yu, we have now also applied the q-value test and Benjamini–Hochberg correction to these 56 genes (Fig. 1b,c). The q-value method is more appropriate for genomics data than the FDR test using the Benjamini–Hochberg correction11, which can be overly conservative and result in a substantial loss of power.” There is no such thing as an “FDR test”. The Benjamini-Hochberg correction is a multiple testing correction for controlling the FDR, not a test. There is also no such thing as “the q-value test”; it doesn’t exist. I recommend Storey and Tibshirani’s classic paper on q-values to understand how q-values relate to the Benjamini-Hochberg correction.
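For concreteness, the Benjamini-Hochberg adjustment itself is a short, standard computation. A minimal numpy sketch (an illustration, not the code used by either set of authors):

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Return BH-adjusted p-values (step-up procedure controlling the FDR)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)  # p_(k) * m / k
    # enforce monotonicity by taking running minima from the largest p-value down
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out

adj = benjamini_hochberg([0.001, 0.01, 0.02, 0.04, 0.2])
# tests whose adjusted p-value is at most 0.05 are the discoveries at FDR 0.05
```

Note that this adjusts p-values; it is a multiple-testing *correction*, not a “test”. Storey’s q-values are computed from the same sorted p-values but additionally estimate the proportion of true nulls, which is why they can be less conservative.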
“Moreover, Mukamel and Yu1 suggest that our engram single-cell data should be analysed as pseudobulk expression profiles for each individual animal—that is, that the statistical n value should refer not the cell but to the animal. This question has long been debated in the field. Although the vast majority of papers published in the field12–14 use the cell as the statistical n, it has also been argued that pseudobulk approaches are preferable15.” The question of whether it is legitimate to use the cell as the statistical n has not “long been debated in the field”. It’s incorrect to use individual cells as a proxy for biological, experimental replicates. The authors know this. Here is an excerpt from the Tabula Sapiens preprint posted on December 4, 2024, authored by Stephen R Quake and The Tabula Sapiens Consortium: “To assess differential gene expression between male and female samples at the tissue-cell type level, we employed a pseudo-bulk approach with edgeR (v-4.0.1)149,150, which has been recommended as an effective method to prevent false discoveries in datasets with covariates151” Proposed reading: Zimmerman et al., 2021.
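Mechanically, a pseudobulk analysis simply aggregates counts per animal before testing, so that the statistical n is the number of animals rather than the number of cells. A toy pandas sketch with hypothetical counts and animal labels (real analyses use edgeR or similar on full count matrices):

```python
import pandas as pd
from scipy.stats import mannwhitneyu

# toy data: one gene, ten cells, four animals (all values hypothetical)
cells = pd.DataFrame({
    "animal": ["m1", "m1", "m1", "m2", "m2", "m3", "m3", "m4", "m4", "m4"],
    "group":  ["ctrl"] * 5 + ["trained"] * 5,
    "Penk":   [3, 5, 4, 2, 6, 9, 7, 8, 10, 6],
})

# pseudobulk: sum counts within each animal; the animal, not the cell, is the unit
pseudobulk = cells.groupby(["animal", "group"], as_index=False)["Penk"].sum()

ctrl = pseudobulk.loc[pseudobulk.group == "ctrl", "Penk"]
trained = pseudobulk.loc[pseudobulk.group == "trained", "Penk"]
# with n = 2 animals per group there is very little power -- which is the point:
# cells from the same animal are correlated and are not independent replicates
stat, p = mannwhitneyu(ctrl, trained)
```

Treating each of the ten cells as an independent observation would inflate n five-fold here, which is exactly the pseudoreplication problem Zimmerman et al., 2021 describe.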
“Nevertheless, we have reanalysed our data treating the number of mice as n and found that several of the most important genes investigated in our paper also show nominal statistical significance, including Penk, Ramp1, Egr1, Hnrnph1 and Aptx (Fig. 1d,e).” In this sentence the phrase “nominal statistical significance” just means that in Fig. 1d,e, Sun et al. are reporting the uncorrected raw p-values. Amazingly, they barely pass the p = 0.05 statistical significance threshold, which means they likely do not pass the significance threshold after multiple testing correction.
“Finally, although statistical methods are powerful tools for interpreting results, independent measurements to validate findings are often the strongest evidence for ground truth.” This is true.
The @Stanford biosciences affirmation taken by all of our PhD grads is inspiring, with pledges to: 1) Do science with rigor, integrity, and uncompromising respect for truth 2) Show kindness and compassion to colleagues 3) Show honesty and respect to the public 4) Place public…
It also appears to violate elements of the Inclusion and Ethics declaration in the Sun et al. paper:
Moreover, this is not the first time problems have been identified in a paper of senior co-corresponding author Thomas Südhof. His Wikipedia page states that “Südhof retracted [the] Lin et al. 2023 research paper published in PNAS from his lab due to falsified data, and since mid-2022, PubPeer[7] commenters including Elisabeth Bik have flagged 46 of Südhof’s papers, which explore how neurons communicate across synapses.[8]”. The Südhof lab maintains a website where they claim to “discuss these [PubPeer] comments transparently“. This is what the Südhof lab has to say about the PubPeer comment linking to the Mukamel and Yu critique in its preprint form:
Accusation: None – simply cites an unreviewed preprint that alleges that our study is ‘unlikely to replicate in future’ because the authors of that preprint did not agree with some of the statistical analyses in our paper.
Resolution: It is impossible to respond to a reference to an unpublished paper, except to say that this post obviously ‘questions’ yet another of our papers to add to the success list of PubPeer claiming to have identified ‘questionable’ papers from our lab. However, we applaud the BioRxiv paper authors for initiating an open scientific discussion in which we will engage instead of the anonymous PubPeer allegations that are impossible to discuss fairly.
I have highlighted in red the statements that are false. It is possible to respond to a BioRxiv preprint; in fact Stephen Quake, one of the two senior co-corresponding authors of the Sun et al. paper disagrees with his coauthor. He has been very vocal and supportive of preprints, stating that “… people have to agree to use BiorXiv or a preprint server to share results… and the hope is that this is going to accelerate science because you’ll learn about things sooner and be able to work on them.” Furthermore, it is possible to discuss PubPeer allegations fairly; authors can respond to any comment about one of their papers via the PubPeer website. Finally, the “accusation” characterization misrepresents the allegations in the Mukamel and Yu critique, in which they stated clearly in their abstract that “This [the Mukamel and Yu analysis] suggests the [Sun et al.] data do not support the author’s claim to have identified cell type-specific transcriptional signatures of memory in the mouse basolateral amygdala.”
Code availability
Code to reproduce the results in this blogpost is available as a Google Colaboratory notebook here.
Professor of Molecular and Cellular Physiology at Stanford University. Senior co-corresponding author ↩︎
Professor of Bioengineering and Professor of Applied Physics at Stanford University. Senior co-corresponding author ↩︎
Earlier this year I came across this interesting post by mathematician Daniel Litt, where he pondered how many intrinsically true results in mathematics have been published with erroneous proofs:
How many *true* theorems have plausibly written but erroneous proofs? These are much, much harder to catch. My guess is that it is a not insubstantial portion of the literature. 15/n, n=15.
Does it even matter if a proof is erroneous when the result is true? Does someone reading a wrong proof of a true theorem truly know the result? One line of thought, going back to Plato, is that as long as the (wrong) proof (1) provides justified true belief in the theorem (i.e., there is some justification for believing it is true), (2) that the reader believes the theorem, and (3) that the theorem is indeed intrinsically true, then yes, the reader knows the result. This is the justified-true-belief (JTB) tripartite analysis of knowledge. In a landmark 1963 paper philosopher Edmund Gettier disagreed. He provided two thought experiments that he argued show that while JTB analysis may be necessary for knowledge, it is not sufficient. In particular, Gettier’s examples implied that if belief in a theorem stems from a falsehood, then the theorem is not known. This is not the only issue raised by Gettier’s counterexamples, but it is the one that Litt’s question implicitly touches on.
So how many true results in mathematics are currently believed to be true based on false proofs? In thinking about this question, I think it’s helpful to discuss an example of such an instance from my own work. In 2007, in a paper with Radu Mihaescu and Dan Levy, we settled a conjecture in phylogenetics, known as Atteson’s conjecture, in the affirmative:
Atteson’s Conjecture (1999): Let $T$ be a phylogenetic tree with tree metric $d$ and let $D$ be an estimated dissimilarity map. Let $l(e)$ be the length of an internal edge $e$. Then if $\|D-d\|_\infty < \frac{l(e)}{4}$, the edge $e$ is recovered by the Neighbor-Joining algorithm.
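As a reminder of what Neighbor-Joining actually does, here is a minimal numpy sketch of its pair-selection criterion on an exactly additive four-taxon metric (an illustration of the algorithm only, not of the proof; the tree and branch lengths are my own toy choices). With zero error ($D = d$) the criterion picks a true cherry; Atteson’s conjecture bounds how much error per edge the criterion tolerates.

```python
import numpy as np

# additive distances on the 4-taxon tree ((A,B),(C,D)), all branch lengths 1
D = np.array([[0, 2, 3, 3],
              [2, 0, 3, 3],
              [3, 3, 0, 2],
              [3, 3, 2, 0]], dtype=float)

n = D.shape[0]
r = D.sum(axis=1)
# the neighbor-joining selection criterion: join the pair (i, j) minimizing Q
Q = (n - 2) * D - r[:, None] - r[None, :]
np.fill_diagonal(Q, np.inf)
i, j = np.unravel_index(np.argmin(Q), Q.shape)
```

On this matrix the minimum of $Q$ is attained at the true cherries (A,B) and (C,D), so the first agglomeration step is correct.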
Our proof of Atteson’s conjecture follows from Theorem 25 in our paper Mihaescu, Levy, Pachter, 2005. In fact, Theorem 25 proves a stronger result. But our paper also established something else, namely that another “proof” of Atteson’s conjecture, in the paper Dai, Xu, and Zhu, 2005, was false. In our Theorem 34 we showed that the Dai et al. paper contained an argument that is fundamentally flawed. Thus, while Atteson’s conjecture is true, i.e. it is a theorem, belief in its truth prior to our work was based on an erroneous proof. This is a single, yet compelling, example of Litt’s concern.
We considered whether it is worthwhile, or important, to correct the literature, i.e. to ask the journal Theoretical Computer Science, where Dai et al. published their work, to post a note explaining that the Dai et al. proof was incorrect. In the end we decided that it is important to correct the literature, because proof techniques can in some cases turn out to be more valuable than the results they are used for. And we thought we could reduce noise in mathematics so that a future mathematician would not be led astray using the arguments of Dai et al. So we wrote a note to the journal (Theoretical Computer Science) asking to publish the following erratum (reproduced below in its entirety):
=======
ERRATUM TO “ON THE EDGE RADIUS OF SAITOU AND NEI’S METHOD FOR PHYLOGENETIC RECONSTRUCTION” [Theoret. Comput. Sci. 369(1–3) (2006) 448–455]
Radu Mihaescu, Dan Levy, and Lior Pachter
Theorem 2 of [2], which claims to settle Atteson’s edge radius conjecture [1], is invalidly proven. The argument in [2] is inductive, and is based on the assumption that if initially $\|D - d\|_\infty < \frac{l(e)}{4}$,
then as the algorithm proceeds, the intermediate distance matrices obtained by agglomeration are within distance $\frac{l(e)}{4}$ from some tree metric (this point is in fact not explicitly stated in [2]).
Such a result holds in the case where one is proving the standard radius bound (Lemma 7 in [1]), since in that case there is a guarantee of only collapsing pairs of nodes forming a cherry in the correct model tree.
However, the analysis fails for the edge radius argument because the agglomerated leaves at any step may not form a cherry. In this case, it is not at all obvious how to find a reduced model topology that is consistent in some weak sense with the initial model tree and that allows the continuation of the induction argument by satisfying $\|D' - d'\|_\infty < \frac{l(e)}{4}$
(where $D'$ is the result of the collapsing step on the distance matrix $D$). In fact, in Theorem 34 in [3] we provide an example in which such a tree does not exist. Our example shows that the intermediate distance matrices may be further than $\frac{l(e)}{4}$ from any tree metric. This presents a problem not only for the proof of Theorem 2 of [2], but also for Theorem 4 of [4].
The edge radius theorem can be proved inductively by relaxing the hypothesis, as is done in Theorem 25 of [3].
References
[1] K. Atteson, The performance of neighbor-joining methods of phylogenetic reconstruction, Algorithmica 25 (1999), 251–278.
[2] W. Dai, Y. Xu, and B. Zhu, On the edge radius of Saitou and Nei’s method for phylogenetic reconstruction, Theor. Comput. Sci. 369 (2006), no. 1–3, 448–455.
[3] R. Mihaescu, D. Levy, and L. Pachter, Why neighbor-joining works, Algorithmica, in press.
[4] Y. Xu, W. Dai, and B. Zhu, A lower bound on the edge radius of Saitou and Nei’s method for phylogenetic reconstruction, Information Processing Letters 94 (2005), 225–230.
Department of Mathematics and Computer Science, UC Berkeley E-mail address: {mihaescu,levyd,lpachter}@math.berkeley.edu
=======
The journal editor at the time sent our note to the authors of the Dai et al. manuscript, who said that they disagreed with us and refused to do anything about it. The journal didn’t publish our erratum. I sent a follow-up email a few years later asking again that the erratum be published (although frankly the Dai et al. paper should have been retracted in its entirety). The editor of Theoretical Computer Science refused again.
The Gettier problem above highlights an epistemic failure concerning epistemic failure itself: the following two points are rarely acknowledged and not well understood (including by scientists!):
Neither being right for the wrong reason nor wrong for the right reason constitutes knowledge. As discussed above, knowledge requires more than just truth, and more than justified true belief.
In science, being wrong for the right reason does frequently constitute progress, whereas being right for the wrong reason rarely does. Yet, it is common for science to be denounced as fundamentally flawed when results are shown to be wrong, and to be hailed as successful when beliefs held for wrong reasons turn out to be right.
For instance, Newton’s theory of gravity is incomplete in the sense that while it holds in many regimes of physics, it has limitations. However, Newton’s derivation of his laws was based on right reasoning, which is why they were successful in explaining many physical phenomena, such as Kepler’s laws of planetary motion. Indeed, Newton’s theory constituted progress, and played a direct role in our current more complete picture of gravity described by the theory of general relativity. Similarly, in biology, the initial characterization of the DSCAM gene (Yamakawa et al., 1998) was incomplete; it took place at a time when biology was understood through the lens of the one gene, one protein, one function theory. The characterization had its limitations (Schmucker et al., 2000). But the discovery of DSCAM was based on right reasoning. And its discovery constituted progress: we now have a more complete picture of the tremendous isoform diversity of genes, which has forced us to revisit the one gene, one protein, one function paradigm. The word “wrong” is often used to describe scientific work that is better characterized as “incomplete”.
In regards to erroneous proofs of true statements in mathematics, I doubt they are common, and disagree with Litt that they constitute a “not insubstantial portion of the literature”. The example of Atteson’s conjecture was, in my opinion, a rare case brought about partly due to mathematical phylogenetics being a small field and not as well-trodden as mainstream fields of mathematics. The typical trajectory for erroneous proofs of true theorems is perhaps better exemplified by the Fundamental Theorem of Algebra. The true theorem was initially published with incomplete and non-rigorous proofs, but was revisited frequently until it was eventually formalized properly (first by Jean Robert Argand).
In other words, most published research results are true and replete with correct proofs and/or evidence of their truth. Sometimes published results are incomplete, or even wrong, and it is certainly a worthwhile endeavor to work on correcting the scientific record. I’ve spent a not insubstantial portion of my career doing exactly that. However, even when published results are wrong, they usually turn out to be useful.
Four years ago, during the first year of my PhD at Caltech, I participated in a journal club organized by the lab I was rotating in. I was assigned two classic papers on the honeybee waggle dance: “Visually Mediated Odometry in Honeybees” (Srinivasan et al., JEB 1997)1 and “Honeybee Navigation: Nature and Calibration of the ‘Odometer’” (Srinivasan et al., Science 2000)2. Since I was not familiar with honeybee behavior, I decided to expand my literature review to other papers on the topic, including “Honeybee Navigation En Route to the Goal: Visual Flight Control and Odometry” (Srinivasan et al., JEB 1996)3 and “How honeybees make grazing landings on flat surfaces” (Srinivasan et al., Biological Cybernetics 2000)4. While reading these papers, I sensed something strange; I had the feeling that I was looking at the same data over and over again.
I decided to examine the figures and results in the papers carefully, and upon further examination, I found inconsistencies in the results and instances of identical data being reported for different experiments in distinct papers. I was deeply concerned by these findings and presented them at the journal club meeting using animations and overlays to show that the data was indeed identical; the original slides from my journal club presentation on April 9, 2020 are shown below for one example of identical data reported for different experimental conditions and replicate numbers in (Srinivasan et al., JEB 1997)1.
I had imagined that the response to my presentation would be concern and advice on how to report my findings. Instead, both within and outside of Caltech, the response amounted to little more than a collective shrug.
My rotation advisor, a tenured professor, told me that they did not know what to do, so I turned to the instructor of a class I had just finished on “Responsible Conduct of Research”—a required class for first-year PhD students at Caltech. They told me that “it is a question of how you want to spend your time” and “a lot of the scientific literature has problems […], science is an imperfect process”. What was an aspiring scientist supposed to take away from this? I had assumed that ensuring scientific integrity would be a top priority for scientists, not an afterthought.
Eventually, on May 28, 2020, I took my frustrations to Twitter, hoping that someone would follow up on my discovery. After help from Dr. Elisabeth Bik, I ended up posting two PubPeer articles, available here and here. However, both posts have been ignored by journal editors and authors for the past four years.
I was thinking about the collective silence that afflicts scientists when uncomfortable allegations of misconduct are brought forth when, earlier this year, Bill Ackman’s plagiarism accusations against Prof. Claudine Gay were seized upon not only by academics but the public at large. Why is it that accusations of misconduct against men are routinely dismissed as inconsequential while a high-standing female academic missing citations for sub-sentences in the methods and acknowledgments of her PhD thesis is worthy of a New York Times article?
As I regained some faith in scientific integrity, I decided to revisit the honeybee waggle dance papers, and I ended up writing a detailed report in collaboration with my advisor, Lior Pachter. The report, titled “The miscalibration of the honeybee odometer” is now posted on the arXiv. We discuss its implications below.
The miscalibration of the honeybee odometer (by LL and LP)
After reading several papers on honeybee navigation coauthored by Srinivasan, we determined that there were numerous instances of apparent misconduct in addition to the cases of data duplication first presented by Laura in the journal club four years ago. In our report, we discuss numerous papers published over the course of a decade that are part of the foundation of the field of honeybee behavior and continue to be cited today. We provide clear evidence that several of these papers contain erroneous information, and many of them contain duplicated and manipulated data. Importantly, the report became a critique not of a single paper, but of a large body of work. We decided it was of the utmost importance for researchers working on honeybee navigation to learn that classic experiments on which the field was based ought to be repeated to verify that the claims made are correct. However, we soon found that there was no medium to publish our manuscript.
We started with bioRxiv, which promptly rejected the manuscript on screening, telling us that we should “reformat it as a research paper presenting new results along with appropriate methods used, rather than simply a critique of existing literature.” Moreover, we were told that our manuscript contained “content with ad hominem attacks,” even though it was merely a factual report of the issues we observed with appropriate citations of the affected papers, with no attack on any people or specific persons.
Faced with rejection from bioRxiv, we decided to submit the manuscript to the Journal of Experimental Biology (JEB), which had published several of the problematic papers. JEB rejected our manuscript for publication but told us they were “investigating the issues raised.” They also said that they “can only investigate issues on papers published in our journals, so you will also need to contact individually each of the other journals that published the papers with which you have concerns.” (Note the use of the word “you“, which we interpreted as an abdication of responsibility by the journal to ensure scientific integrity at large.) Again, we found ourselves without a venue in which to describe problematic results across several papers. We also felt that contacting individual journals for corrections would not serve the community well. Our point was about an entire body of work, not nitpicks regarding individual articles.
The importance of reporting the problems at large across papers is exemplified by the Expressions of Concern published by JEB on June 25, 2024, in response to our manuscript submission to their journal, which can be found here and here. In their Expressions of Concern, JEB states that “the 1996 paper is likely to contain the correct values of the width of the narrow tunnel (11 cm)”, “It appears that Fig. 7 does not contain the correct graph for the searching distribution with the landmark positioned at Unit 9,” and “M. V. Srinivasan believes that the length of the tunnel was not 3.35 m as reported in the 1996 paper, but 3.20 m as indicated in Srinivasan et al. (1997).” Readers are supposed to rest assured that these “issues do not alter the overall results and conclusions of the paper.” We were surprised that the bar for publication at JEB is “belief” and that results are only “likely” to be correct.
Perhaps, if the issues in the two JEB papers were the only issues with this body of work, one could excuse them as human error—which should still cast doubt on the conclusions of the paper. However, JEB was made aware of the issues with this body of work at large, which includes many more instances of data duplication and apparent manipulation across a total of six papers (as far as we found), yet JEB still decided to dismiss their occurrence in their papers. JEB stated in their Expressions of Concern that they “are publishing this Expression of Concern to make readers aware of the issues and our efforts to resolve them,” though no such efforts are described in either Expression, and readers are misled by JEB’s failure to mention that these are not isolated instances pertaining only to the two papers in question. We found the response from “the leading primary research journal in comparative physiology” to be disappointing.
We finally submitted our manuscript to the arXiv, where it was published after being placed on hold for two weeks without any explanation. All of our findings regarding the scientific misconduct in honeybee papers are described in detail here5: https://arxiv.org/pdf/2405.12998 Notably, JEB failed to cite our arXiv manuscript in their Expressions of Concern (our manuscript appeared on May 8, 2024—over a month before their Expressions) and instead mentions only the limited PubPeer articles from 2020, which they had ignored for four years.
Leaving aside the specifics of the honeybee work, our experience with correcting the literature made us realize that there seems to be no venue right now for critiquing a body of work by an author. Comments on PubPeer are great, and corrections or retractions in journals are useful, but neither serves to alert a community to problematic behavior across numerous articles by an author. We think that our paper would have benefitted from peer review and a mechanism for commentary. For the latter, we decided to write this blog post. For the former, we suggest a Journal of Scientific Integrity.
In a Tablet Magazine article titled “How the Gaza Ministry of Health Fakes Casualty Numbers” posted on March 6, 2024, Professor of Statistics and Data Science Abraham Wyner from the Wharton School at the University of Pennsylvania argues that statistical analysis of the casualty numbers reported by the Gaza Ministry of Health is “highly suggestive that a process unconnected or loosely connected to reality was used to report the numbers”.
In the post, he shows the following plot
which he describes as revealing “an extremely regular increase in casualties over the period” and from which he concludes that “this regularity is almost surely not real.”
Wyner’s plot shows cumulative reported deaths over a period of 15 days from October 26, 2023 to November 10, 2023. The individual reported deaths per day are plotted below. These numbers have a mean of 270 and a standard deviation of 42.25:
The coefficient of determination for the points in this plot is R2 = 0.233. However, the coefficient of determination for the points shown in Wyner’s plot is R2 = 0.999. Why does the same data look “extremely regular” one way, and much less regular another way?
If we denote the deaths per day by $x_i$, then the plot Wyner shows is of the cumulative deaths $y_i = x_1 + x_2 + \cdots + x_i$. The coefficient of determination $R^2$, which is the proportion of variation in the dependent variable (reported deaths) predictable from the independent variable (day), is formally defined as $R^2 = 1 - \frac{SS_{res}}{SS_{tot}}$, where $SS_{res}$ is the sum of squares of the residuals and $SS_{tot} = \sum_i (y_i - \bar{y})^2$ is proportional to the variance of the dependent variable. Intuitively, $R^2$ is a numerical proxy for what one perceives as “regular increase”.
In the plots above, the $SS_{res}$ are roughly the same, however $SS_{tot}$ is much, much higher for the $y_i$ in comparison to the $x_i$. This is always true when transforming data into cumulative sums, and is such a strong effect that simulating reported deaths with a mean of 270 but increasing the variance ten-fold to 17,850 still yields an “extremely regular increase”, with $R^2 = 0.99$:
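The effect is easy to verify directly. A short numpy sketch (a re-simulation in the spirit of the post, not Wyner’s code): draw fifteen independent daily counts with mean 270 and standard deviation 42.25, then compare the $R^2$ of the daily series with the $R^2$ of its cumulative sum.

```python
import numpy as np

rng = np.random.default_rng(1)

def r_squared(y):
    """R^2 of an ordinary least-squares line fit of y against the day index."""
    x = np.arange(len(y))
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - residuals.var() / y.var()

daily = rng.normal(270.0, 42.25, size=15)    # independent daily death counts
r2_daily = r_squared(daily)                  # modest: the daily series looks noisy
r2_cumulative = r_squared(np.cumsum(daily))  # near 1: cumulating manufactures regularity
```

Even though the daily counts are pure independent noise around a constant mean, their cumulative sum hugs a straight line of slope ~270, so a near-perfect $R^2$ for the cumulative series is evidence of nothing at all.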
The All of Us Research Program, whose mission is to “to accelerate health research and medical breakthroughs, enabling individualized prevention, treatment, and care for all of us”, recently published a flagship paper in Nature on “Genomic Data in the All of Us Research Program“. This is a review of Figure 2 from the paper (referred to below as AoURFig2).
Background
The first U.S. Census, which commenced on August 2, 1790, included a record of the race of individuals. It used three categories: “free whites”, “all other free persons”, and “slaves”. Since that time, racial categories as defined for the U.S. Census have been a recurring controversial topic, with categories changing many times over the years. The category “Mulatto”, which was introduced in 1850, shockingly remained in place until 1930. Mulatto, which comes from the Spanish word for mule (the hybrid offspring of a horse and a donkey), was used for multiracial individuals of African and European descent. In the most recent decennial census in 2020, the race categories used were determined by the Office of Management and Budget (OMB) and were “White”, “Black or African American”, “American Indian or Alaska Native”, “Asian”, “Native Hawaiian or Other Pacific Islander”, and a sixth category, “Some Other Race”, for people who do not identify with any of the aforementioned five races. Separately, the 2020 census included standards for ethnicity, which were first introduced in 1977 as part of OMB Directive No. 15. Two ethnicity categories were introduced: “Hispanic or Latino” and “Not Hispanic or Latino”. The OMB was specific that race and ethnicity are distinct concepts: an ethnically Hispanic or Latino person can be of any race.
While race and ethnicity are social constructs, ancestry is defined in terms of geography, genealogy, or genetics. The relationship between these three types of ancestry is complex, and can be nonintuitive. Graham Coop has a great series of blog posts illustrating the subtleties around the different types of ancestry. For example, in “How many genetic ancestors do I have?” he illustrates the distinction between the number of genetic vs. genealogical ancestors:
AoURFig2 utilizes the concept of genetic ancestry groups. These do not have a precise accepted definition, but analysis of how the term is used reveals that genetic ancestry labels such as “European” are based on genetic similarity between present day individuals. This is explained carefully and clearly in an important paper by Coop: Genetic similarity versus genetic ancestry groups as sample descriptors in human genetics.
In AoURFig2 the ancestry groups used are “African”, “East Asian”, “South Asian”, “West Asian”, “European” and “American”. In their Methods section, the authors explain that these are based on labels used for the Human Genome Diversity Project and 1000 Genomes, specifically: African, East Asian, European, Middle Eastern, Latino/admixed American and South Asian (in the figure legend they have renamed “Latino/admixed American” as “American” and “Middle Eastern” as “West Asian”). For each of these labels, obtained via self identified race and ethnicity of participants in the 1000 Genomes Project, the authors collated the genetic data to obtain genetic ancestry groups. Inherent in these groupings is an assumption of homogeneity, which is of course not true, because the individuals may vary in their genetics, and their self identified race and ethnicity may be based on genealogy or geography, which could be at odds with their genetic relatedness to other individuals in their artificially constructed “genetic ancestry group”. Coop makes this point eloquently in summarizing a key point of his paper:
In summary, there are three notions crucial to understanding AoURFig2: race, ethnicity, and genetic ancestry, each of which is distinct from the others. Individuals who self identify with a particular ethnicity, for example Hispanic or Latino, can self identify with any race. Individuals self identifying with a specific race, e.g. “Black or African American”, can be genetically related to differing extents to each of the six genetic ancestry groups, and a genetic ancestry group is neither a race nor an ethnicity, but rather a genetic average computed over a set of (mostly genetically similar but also somewhat arbitrarily defined) individuals.
AoURFig2 is shown below. In the following sections we discuss each of the panels in detail.
The figure legend
We begin with the figure legend, which lists Race, Ethnicity and Ancestry. Race and Ethnicity refer to the self identified race and ethnicity choices of participants (based on the OMB categories). Ancestry refers to the genetic ancestry groups discussed above. While these three concepts are distinct, the Ancestry colors are the same as some of the Race and Ethnicity colors:
This is problematic because the coloring suggests a 1-1 identification between certain races and ethnicities, and genetic ancestry groups. In reality, there is no such clear cut relationship, as shown in the admixture panels in AoURFig2 (more on this below). Ideally, the distinct nature of the concepts of race, ethnicity, and genetic ancestry, would be represented by distinct color palettes. The authors may have been confused on this point, because in the paper they write “Of the participants with genomic data in All of Us, 45.92% self-identified as a non-European race or ethnicity.” This makes no sense, because none of the race categories are “European”, and “European” is also not an ethnicity category. Therefore “non-European” does not make sense as either a race or ethnicity category. The authors seem to have assumed that White = European as indicated by their color scheme, and therefore “non-European race” is non-“White”. But by that logic “Hispanic or Latino” = “American” would mean that “Hispanic or Latino” is not “European” which implies that “Hispanic or Latino” is not White, contradicting the specific definition of race and ethnicity categories by the OMB. An individual’s ethnic self identification is independent of their race self identification, and someone may self identify as White and Hispanic or Latino. Clearly the authors would benefit from reading the NASEM report on the use of population descriptors in genetics and genomics research and the NIH style guide on race and national origin.
The ancestry analysis
Panel c) of AoURFig2 presents an ancestry analysis that consists of running a program called Rye to assign, to each individual, a fraction of each of the genetic ancestry groups. The panel with its subfigures is shown below:
There are several problems with this figure. First, it has no x- or y-axes. The caption describes it as showing “Proportion of genetic ancestry per individual in six distinct and coherent ancestry groups defined by Human Genome Diversity Project and 1000 Genomes samples”, from which it can be inferred that each row in each panel corresponds to an individual, and the horizontal axis divides an interval (the width of the plot) into proportions of the six ancestry groups. In principle the panels could be transposed, with columns corresponding to individuals, but a clue that this is not the case is, for example, the ancestry assignment for Black or African American individuals, presumably none of whom turn out to have an assignment of 100% European. That’s just a guess though. It’s best to label axes.
A second problem with the figure is that the height of each panel is the same, and therefore does not reflect the number of individuals of each self-reported race and ethnicity. For instance, there are only 237 Native Hawaiian or Other Pacific Islander individuals versus 125,843 Whites. The numbers are there, but the heights of the panels suggest otherwise. Below is a bar plot showing the number of people self identifying with each race in the data used for panel c) of AoURFig2:
The All of Us Research Program (henceforth referred to as All of Us) lists as a Diversity and Inclusion goal: “Health care is more effective when people from all backgrounds are part of health research. All of Us is committed to recruiting a diverse participant pool that includes members of groups that have been left out of research in the past.” That is an admirable goal, and while All of Us is to be commended on the relatively large number of self identifying Black or African American participants recruited in comparison to previous cohorts, it’s worth noting that in this analysis White still wins (by a lot).
A third problem with the figure is the placement of the “Hispanic or Latino” ethnicity in the middle of panels assigning ancestry groups to individuals by race. As discussed previously, self identification of ethnicity is orthogonal to race. There is therefore ambiguity in the figure, namely it is unclear whether some of the individuals represented in the Hispanic or Latino plot appear in other panels corresponding to race. The juxtaposition of an ethnicity category with race categories also muddles the distinction between the two.
The ancestry analysis is based on a program called Rye, which was published in Conley et al., 2023. The point of Rye is runtime performance: unlike previous tools, the software scales to UK Biobank-sized projects. Indeed, its runtime performance is impressive when compared to the standard in the field, the program ADMIXTURE:
However, while Rye is faster than ADMIXTURE, its results differ considerably from those of ADMIXTURE, as shown in Supplementary Figure S5 of the paper:
I haven’t benchmarked these programs myself, but geneticists have some experience with ADMIXTURE which was published in 2009 and has been cited more than 7,000 times. The Rye program, from two groups associated with All of Us, has been cited twice (both times by the authors of Rye who are members of the All of Us consortium; one of the two citations is the paper being discussed here). Of course, one shouldn’t judge the quality of a paper by the number of citations. A paper cited twice could be describing a method superior to a paper cited more than 7,000 times. But I was discomfited by the repeated appearance of a p-value = 0 in the paper (see below for one example among many). It reminded me of pondering p-values before breakfast.
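For context on why a literal “p-value = 0” is a red flag: a p-value can be astronomically small but it is never zero, and one common way “0” ends up in a paper is double-precision underflow, since doubles cannot represent positive numbers much below 1e-308. A stdlib-only illustration (the log p-value below is a hypothetical number chosen for the example, roughly what an extreme z-score of 40 would give; it is not taken from the Rye paper):

```python
import math

# Hypothetical natural log of an extreme upper-tail p-value (roughly z = 40
# for a standard normal). The true p-value is ~1e-349, far below the
# smallest positive double (~5e-324).
log_p = -804.6

p = math.exp(log_p)
print(p)  # 0.0 -- the nonzero p-value has silently underflowed to exactly zero

# Working in log space preserves the information:
log10_p = log_p / math.log(10)
print(log10_p)  # about -349: report p ~ 1e-349 rather than "p = 0"
```

Software that exponentiates a log-likelihood or tail probability before reporting it will print exactly 0.0 in this regime, which is presumably why careful tools report log p-values or a bound such as p < 1e-300 instead.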
Moreover, R2 is the wrong measure here; the proper assessment of agreement is the concordance correlation coefficient. Finally, and importantly, the Rye paper describes results based on inference not on the high-dimensional genotype data itself, but rather on a projection to the first 20 principal components. Notably, the All of Us paper, and in particular the results reported in AoURFig2, uses 16 principal components. No justification is provided for the use of 16 principal components, no description of how results might differ when using 20 principal components, and no general analysis of the robustness of results to this parameter.
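To see why R2 alone is misleading for method agreement: two methods whose outputs are perfectly correlated but systematically offset achieve R2 = 1, while Lin’s concordance correlation coefficient penalizes the bias. A stdlib-only sketch with made-up numbers (these are illustrative values, not the Rye or ADMIXTURE estimates):

```python
def mean(v):
    return sum(v) / len(v)

def ccc(x, y):
    """Lin's concordance correlation coefficient (population moments)."""
    mx, my = mean(x), mean(y)
    vx = mean([(a - mx) ** 2 for a in x])
    vy = mean([(b - my) ** 2 for b in y])
    cov = mean([(a - mx) * (b - my) for a, b in zip(x, y)])
    return 2 * cov / (vx + vy + (mx - my) ** 2)

def r2(x, y):
    """Squared Pearson correlation."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov * cov / (vx * vy)

# Two hypothetical estimates of the same quantity: y = 2x + 3,
# a perfect line, but with the wrong scale and offset
x = [float(i) for i in range(10)]
y = [2 * a + 3 for a in x]

print(r2(x, y))   # 1.0  -- perfect correlation despite systematic disagreement
print(ccc(x, y))  # ~0.34 -- concordance is poor
```

R2 only asks whether the two series lie on some line; concordance asks whether they lie on the identity line, which is what matters when two programs are supposed to report the same ancestry proportions.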
In any case, setting aside feelings of being left Rye and dry and taking the admixture results at face value, it is evident that individuals self reporting ethnicity as “Hispanic or Latino” are highly admixed between European and American (the latter label meaning Latino/Admixed American). This stands in contrast to the coloring scheme chosen, with Hispanic or Latino colored purely “American” implying individuals self identifying with that ethnicity are not European. It also is at odds with the UMAP displays in panels a) and b) of AoURFig2.
UMAP nonsense
AoURFig2 presents two UMAP figures, shown below. The UMAP is the same in both; in the top subplot (a) it is colored by race, and in the bottom subplot (b) it is colored by ethnicity.
The first thing to note about this plot is that it has axes when it shouldn’t. There is no meaning to UMAP 1 and UMAP 2, and the tick marks (-20, -10, 0, 10, 20) on the y-axis and (-10, 0, 10, 20) on the x-axis are meaningless because UMAP arbitrarily distorts distances. Somehow the authors managed to put axes on plots that shouldn’t have them, and omitted axes on plots that should. Furthermore, because points are plotted by color, with one color overlaid on another, it is difficult to see mixtures of colors where they exist. This can be very misleading as to the nature of the data.
More concerning than the axes (which really just show that the authors don’t understand UMAP), are the plots themselves. The UMAP transform distorts distances, and in particular, as a result of this distortion, is terrible at representing admixture. The following illustrative example was constructed by Sasha Gusev:
But one doesn’t have to examine simulations to see the issue. This problem is evident in panel c) of AoURFig2. Consider, for example, the Hispanic or Latino ancestry assignments shown below:
The admixture stands in stark contrast to the UMAP in b), which suggests that the Hispanic or Latino ethnicity is almost completely disjoint from European (which the authors identify with White via the color scheme). This shows that UMAP can and does collapse admixed individuals onto populations, creating a hallucination of separation where none exists.
I recently published a paper with Tara Chari on UMAP titled “The specious art of single-cell genomics“. It methodically examines UMAP and shows that the transform distorts distances, local structure (via different definitions), and global structure (again via several definitions). There is no theory associated with the UMAP method. No guarantees of performance of any kind. No understanding of what it is doing, or why. Our paper is one of several demonstrating these shortcomings of the UMAP heuristic (Wang, Sontag and Lauffenburger, 2023). It is therefore unclear to me why the All of Us consortium chose to use UMAP, especially considering that they (in particular one of the authors of Rye and a member of the All of Us consortium) were warned of the shortcomings of UMAP a year ago.
Scientific racism
The misuse of the concepts of race, ethnicity and genetic ancestry, and the misrepresentation of genetic data to create a false narrative, is a serious matter. I say this because such misrepresentations have been linked to terror. The Buffalo terrorist who murdered 10 black people in a racist rampage in 2022 wrote that
Included in his manifesto, from which this text is excerpted, was the following figure:
This plot is eerily similar to one made by Razib Khan, in which he used the term “Quadroon-Jews” (Khan’s figure was published in the Unz Review, a website published by far-right activist and Holocaust denier Ron Unz). The term “Quadroon” appeared in the 1890 U.S. Census as a refinement of “Mulatto” (see the Background section at the top of the post).
These plots show the projection of genotypes to two dimensions via principal component analysis (PCA), a procedure that unlike UMAP provides an image that is interpretable. The two-dimensional PCA projections maximize the retained variance in the data. However PCA, and its associated interpretability, is not a panacea. While theory provides an understanding of the PCA projection, and therefore the limitations of interpretability of the projection, the potential for misuse makes it imperative to include with such plots the rationale for showing them, and appropriate caveats. One of the main reasons not to use UMAP is that it is impossible to explain what the heuristic transform achieves and what it doesn’t, since there is no understanding of the properties of the transform, only empirical evidence that it can and does routinely fail to achieve what it claims to do.
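The interpretability point can be seen in a toy computation: under a simple simulation, admixed individuals project between the two parental clusters on the first principal component rather than being collapsed onto one of them, and the linear-algebra reason for this is fully understood. Everything below (allele frequencies, sample sizes, the seed) is invented for illustration; it is a sketch of PCA on genotypes, not the All of Us pipeline:

```python
import random
import math

random.seed(0)
n_snps, n_per_group = 200, 50

# Two divergent source populations with shifted allele frequencies (invented values),
# plus an "admixed" group drawing from the average frequency
p_a = [random.uniform(0.05, 0.45) for _ in range(n_snps)]
p_b = [p + 0.5 for p in p_a]
p_mix = [(a + b) / 2 for a, b in zip(p_a, p_b)]

def genotypes(freqs, n):
    # diploid genotype (0/1/2) = number of successes in two Bernoulli(p) draws
    return [[sum(random.random() < p for _ in range(2)) for p in freqs] for _ in range(n)]

pop_a = genotypes(p_a, n_per_group)
admixed = genotypes(p_mix, n_per_group)
pop_b = genotypes(p_b, n_per_group)

# Center the genotype matrix (rows = individuals, columns = SNPs)
G = pop_a + admixed + pop_b
n = len(G)
col_means = [sum(row[j] for row in G) / n for j in range(n_snps)]
X = [[row[j] - col_means[j] for j in range(n_snps)] for row in G]

# Top principal axis via power iteration on X^T X
v = [1.0] * n_snps
for _ in range(100):
    scores = [sum(x * w for x, w in zip(row, v)) for row in X]                   # X v
    v = [sum(X[i][j] * scores[i] for i in range(n)) for j in range(n_snps)]      # X^T (X v)
    norm = math.sqrt(sum(w * w for w in v))
    v = [w / norm for w in v]

pc1 = [sum(x * w for x, w in zip(row, v)) for row in X]  # PC1 score per individual

m_a = sum(pc1[:n_per_group]) / n_per_group
m_ad = sum(pc1[n_per_group:2 * n_per_group]) / n_per_group
m_b = sum(pc1[2 * n_per_group:]) / n_per_group

# The admixed group falls between the parental clusters on PC1
print(min(m_a, m_b) < m_ad < max(m_a, m_b))  # True
```

Because the projection is linear, an individual whose genotype is an average of two populations lands at the average of their projections, which is exactly the property UMAP’s nonlinear distortion destroys.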
The pseudoscientific belief that humans can be genetically separated into distinct racial groups is part of scientific racism. Such pseudoscience, and its spawn of racist policy, has roots in many places, but it must be acknowledged that some of them are in academia. A few years ago I wrote about the depravity of James Watson’s scientific racism, but while his (scientific) racism has been publicly documented due to his fame, scientific racism is omnipresent and frequently overlooked. The ideas that the Buffalo terrorist and that Watson promulgated are reinforced by sloppy use of terms such as “race” and “ethnicity” in academia, along with misrepresentations of the genetic similarity between individuals. Many of the concepts in population genetics today are problematic. Coop’s eloquent critique of genetic ancestry groups is but one example. The concept of admixture is also rooted in racism and relies on unscientific notions of purity. With this in mind, I believe it is insufficient to merely relegate AoURFig2 to Karl Broman’s list of worst graphs. The numerous implications of AoURFig2, among them the authors’ claim that individuals identifying ethnically as Hispanic or Latino are genetically not European and therefore not racially White (see section on ancestry analysis above for an explanation of why this is incorrect), are scientific racism. The All of Us authors should therefore immediately post a correction to AoURFig2 that includes a clarification of its purpose, and corrections to the text so the paper properly utilizes terms such as race, ethnicity and ancestry. All of us need to work harder to sharpen the rigor in human genetics, and to develop sound ways to interpret and represent genetic data.