Locklin on science

Coding assistant experience

Posted in tools by Scott Locklin on February 18, 2026

I’m a modest LLM skeptic. It’s not that I don’t believe in LLMs, I am aware that they exist, I just know that they’re not doing what people do when we think, and that they’re not going to hockey stick up and replace everybody. If it helps people, they should use them: I do. ask.brave.com is my first stop for answering transient questions or software configuration issues. It produces useful results and cites its sources; a great search API. It also doesn’t remember what I asked it (Brave is privacy first), which is what you want most of the time. Grok gives OK answers too, but I don’t like the answers as much, and I have no idea what their privacy policies are. Qwen has been OK for answering coding questions and small code fragments.

I have a few jobs I’ve been putting off: fiddly and annoying translations from Python to R, updating APIs, etc. I also have a couple of challenge problems I ask AI chatbots in order to gauge where we’re at on things I care about. Qwen is by far the best free and open chatbot I’ve used, and it has gotten good enough that I decided to fork out for claude-code and take it for a spin. I was also inspired by asciilifeform’s comments; the dude’s grouchier and more skeptical than I am, so I took his statements on the utility of claude-code very seriously. People who already use LLMs at work can probably skip to the end, as you already know more than I do about using these things, though maybe some of the observations are of use.

Mostly the type of work I do is numeric, and numeric coding is significantly different from what most programmers do. I never had any doubts that an LLM could do Javascript plumbing, or even back-end plumbing code; there are lots of examples of this to train on, along with complicated regular expressions, SQL queries and so on. I figured they’d eventually do something with numeric stuff, though it was less clear when it would happen for my favorite programming languages.

Some claude-code notes:

0) You need to pay for the $200/month one to get anything useful done with claude-code. This is annoying: on the big plan it’s difficult to burn all your tokens, while the cheap plans run out almost immediately. Jerks. I should be able to pay as I go without talking to some salesdork or signing up for a subscription.

1) Claude code has access to your hard drive, and you have to invoke lucifer and kernel modules to keep it from ruining your life. Yah, in principle you can trust the thing. Back in the 90s you could in principle have an RPC daemon on your Sun workstation which executes arbitrary code, and most of the time nothing bad would happen. Anyone who trusts this thing with sensitive code is fucking retarded. You need to run local for this.

2) The first task: translating work from the lost souls who think Python is an adequate mode of scientific communication into something less insane (in my case R, though I still hold Matlab is the best tool for scientific communication). That’s something an LLM should be great at. Mostly the chatbots haven’t been, but recently they seem to have acquired the skill. This was my most pressing reason for trying claude-code, which I assumed would be better than a chatbot. Claude managed the task in maybe twice the time it would have taken me, in a fashion quite a bit more code-complete than I would have done. Of course it forgot to add a predict method for a bunch of algorithms that people basically only use to predict things, but once I told it to do so, it did. The first go-round it reproduced every Python class in the old repo and made them all public, which is exactly what you’d expect from a machine that doesn’t understand anything: the actual algorithm is “fit model, predict model,” so you need exactly two public functions, with the other functions called as options inside the create function. Once I yelled at it enough, hollered at it to update the manual pages to match what’s inside the functions and so on, it did a reasonable job.

Another thing I find extremely painful in R: making a vignette and festooning the source with inline documentation using rmarkdown. I’ve always found this onerous, but the LLM doesn’t seem to mind. I prompted it to use a Google style guide for R packages, so the style isn’t horrible. Beating it into shape was a fairly high-attention process, though it was my first time using claude-code. All told I put much more time into it than I would have fooling around on my own. This is because it’s low-effort work, where writing it yourself is high-effort work. There’s a problem here: since it’s low effort to generate a lot of code, now you have a lot of code. Code that has to get maintained if you’re actually using it.
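The “exactly two public functions” shape is easy to sketch. Here it is in Python rather than R, purely for illustration; the names and the toy “algorithm” are made up, not from any real package:

```python
# Sketch of the package shape described above: exactly two public entry
# points, fit() and predict(), with everything else private and selected
# via options to fit(). The "model" here is a trivial stand-in.

def _fit_mean(xs):
    # private helper: toy "algorithm" that just memorizes the mean
    return sum(xs) / len(xs)

def fit(xs, method="mean"):
    # public function #1: dispatches to private helpers via an option
    if method == "mean":
        return {"method": method, "param": _fit_mean(xs)}
    raise ValueError(f"unknown method {method!r}")

def predict(model, n):
    # public function #2: predict from the fitted model object
    return [model["param"]] * n

model = fit([1.0, 2.0, 3.0])
print(predict(model, 2))  # [2.0, 2.0]
```

Everything else the LLM generated as a public class would live behind `fit()` as an option, keeping the exported surface to those two names.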

3) Another major unpleasant task I have is turning a paper I read into code. For simple things, LLMs should be able to do this. For more complicated things, I assume there is a limit based on context windows. Indeed, claude-code was able to turn this paper (my go-to challenge problem) into reasonable working R code: Bernoulli Naive Bayes with semisupervised EM updates. This is something I had done myself for a project, but never checked into any remote repo, so I knew there would be no cheating. I also looked fairly extensively for an example on github and didn’t find any (albeit some years ago now; people would rather fiddle with neural nets than use this most excellent trick). Claude was considerably slower at this than the translation job, and produced what I consider fairly poor-quality code, though I didn’t prompt it with any style guides. Still, actually doing the damn thing is pretty good, and I’ll be testing this type of “read the paper, geev mee code” job further with more difficult problems. For those of you not in the know, Bernoulli Naive Bayes is basically column means, and the EM algorithm is awfully simple: maybe around the complexity of Newton’s method. Someone like me can do it in an hour if you point a gun at me and give me an espresso enema, or a couple of hours if I’m taking my time and being careful. If I can get algorithms from papers on non-trivial problems, this is a nice application for me; I have an enormous backlog of interesting-looking ideas with no public code associated with them. Understanding the papers in enough detail to write code is a pain in the ass, especially if you don’t have good building blocks.
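For the curious, here’s roughly what that algorithm amounts to; a minimal numpy sketch of semisupervised Bernoulli Naive Bayes with EM updates (my own illustration, not claude’s output). The fit really is mostly smoothed column means:

```python
import numpy as np

def fit_bnb(X, R, alpha=1.0):
    # M-step: weighted Bernoulli Naive Bayes fit. X is (n, d) binary data,
    # R is (n, k) class responsibilities (one-hot rows for labeled points).
    # theta is literally smoothed per-class column means.
    Nk = R.sum(axis=0)
    theta = (R.T @ X + alpha) / (Nk[:, None] + 2 * alpha)
    prior = Nk / Nk.sum()
    return np.log(prior), np.log(theta), np.log1p(-theta)

def posterior(X, log_prior, log_t, log_1mt):
    # E-step: p(class | x) for each row, via Bayes' rule in log space
    ll = log_prior + X @ log_t.T + (1.0 - X) @ log_1mt.T
    ll -= ll.max(axis=1, keepdims=True)       # stabilize before exp
    p = np.exp(ll)
    return p / p.sum(axis=1, keepdims=True)

def semisup_bnb(Xl, y, Xu, k, iters=20):
    Rl = np.eye(k)[y]                         # hard labels as responsibilities
    params = fit_bnb(Xl, Rl)                  # initialize from labeled data
    for _ in range(iters):
        Ru = posterior(Xu, *params)           # E-step on unlabeled data
        params = fit_bnb(np.vstack([Xl, Xu]), # M-step on labeled + unlabeled
                         np.vstack([Rl, Ru]))
    return params

# smoke test: two classes with opposite feature probabilities
rng = np.random.default_rng(0)
t = np.array([[0.9, 0.9, 0.1, 0.1], [0.1, 0.1, 0.9, 0.9]])
y = rng.integers(0, 2, 200)
X = (rng.random((200, 4)) < t[y]).astype(float)
params = semisup_bnb(X[:20], y[:20], X[20:], k=2)
acc = (posterior(X, *params).argmax(1) == y).mean()
print(acc)  # should be high on this easy synthetic problem
```

Twenty labeled rows plus EM over the unlabeled bulk is the whole trick; on the complexity scale it really is in Newton’s-method territory.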

4) The final category of unpleasant “I will likely defer this job forever” tasks is gluing an API into R (or J, which I have ambitions of getting back to), then using that to implement an algorithm. I asked claude to fill out some of the missing functionality from mlpack. The results looked OK; I didn’t test them. I also had it code up an mlpack API for J, which it appeared to do (it’s been so long since I used J that testing it was painful; sorry about all the sub-dependencies it put in the repo).

Tasks 2 and 3 are my most common use cases; mostly it doesn’t matter if the results are slop. Task 4 is an occasional dreary task as well, though R has a decent ecosystem of people who have done this for everyone. Telling the thing how to do my daily tasks is probably also automatable to some extent, but it would mostly be a waste of time. Interactive work is interactive, and Captain Kirking it with an LLM agent is just going to piss me off. I don’t even like using R notebooks, so making an LLM R notebook is no good.

qwen3-coder-next:

I also ran qwen3-coder-next on my threadripper. It’s slow, but can be used if the threadripper isn’t chugging on any other serious tasks. The motivation isn’t to avoid the $200 a month subscription fees; it’s the fact that I don’t trust Claude with anything actually sensitive, like things which produce money for me. It was a pain in the ass to get this stood up and functioning. I did it like this:

numactl --interleave=all ./build/bin/llama-server \
-hf unsloth/Qwen3-Coder-Next-GGUF:Q4_K_M \
--numa distribute \
--threads 32 \
-c 262144 \
--no-mmap \
--jinja \
--host 0.0.0.0 --port 8080

ollama basically doesn’t work. For the first round I ended up using a Python tool called aider to run it (claude-code-agent in emacs for the claude-code interactions). I think aider is a little clunky; it couldn’t figure out how to make a subdirectory from where I invoked it. Probably choking on context; might be user error somehow. I went back to emacs (gptel-agent) later and fixed it. TPS appeared to be on the order of 20, with very slow prompt processing. Claude is roughly twice this speed, though it feels faster because it’s running on someone else’s hardware and doesn’t choke as badly on context.

I was able to reproduce the semisupervised Bernoulli Naive Bayes with EM updates example that claude-code did, as well as a simple Python translation example (a novel fast fitting method for logistic regression). It took about as long as the first round, and wasn’t as smooth an interaction; I fed it exactly the same prompt. It got the algorithm right in the first shot, but the NB R package was all borked up, which is the kind of thing I noticed in the qwen chatbot. This required a fairly long context window, so I’m a bit dubious about pointing qwen-code-agent at a more involved paper until I upgrade my hardware. I actually like the code qwen produces a little better. Not bad for 3 billion active parameters; thank you based Chinese frens. Oddly the Python translation seemed to give it more trouble, again I think because of the slowness of processing context windows on the threadripper.

There are a couple of reasonably cheap potential hardware solutions for running this qwen3 thing without heating up the threadripper or spending $10k on a big video card and a new power supply: Strix Halo from AMD and the NVIDIA GB10 Grace Blackwell. Both are small boxes running Linux with 128GB of shared memory and a medium-beefy GPU. Neither seems to have any huge performance advantage over the threadripper or each other (real-world experiences welcome; supposedly NVIDIA is faster on context), but they’d allow me to do vibe coding while using the threadripper cores for other tasks. Nice airgap as well. If anyone owns such a shoebox machine and has had good experiences, feel free to pipe up. I ordered the AMD gizmo so I wouldn’t have to deal with maintaining a development environment for ARM chips. I’ll probably run the claude stuff from this machine as well for the airgap benefits.

While qwen3 did an OK job, it was no fun to work with. The slow context-processing speed makes the tooling even more clunky, though emacs (gptel-agent) was a better experience than aider. The agentic part of the mechanism, and how it differs from something like claude-code (an NPM package), isn’t fully clear to me yet. “Thing that runs machine-generated shell scripts” seems to be about the size of it. How the LLM knows when it’s hooked up to something with agency isn’t clear. I suppose I can ask an LLM for an explanation here.

random unconnected thoughts:

A fun and actually useful thing to try would be to get one of these things to make Lush 64 bit clean. If I could do that without bothering the authors, that would be amazing. Maybe I can burn up some Claude tokens on this when I’m not using it for other tasks.

The chatbot part: I don’t think Claude Opus 4.6 is anything special. Like all the other ones, it speaks authoritatively, talks in circles, contradicts itself and is generally full of shit. Makes a decent coding assistant though. Asking it for advice on buying a machine for running qwen3 locally, for example: actual search engines (including ask.brave.com) produce better results that don’t contradict each other every other line.

Fun thing I didn’t fully realize until performing this exercise: LLMs don’t have state. The tooling keeps state by feeding the prompt (in most cases the entire prompt, including the entire codebase you’re working on, all the search results, etc., every time there is an update) back to the LLM, along with the most recent results. This is, of course, insane. It is particularly insane that people think this kind of Rube Goldberg contraption is sentient somehow. LSTMs are more sentient.
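A minimal sketch of what the tooling is doing; `llm()` here is a fake stand-in function, not any real API:

```python
# The model is a pure function of its input; the "memory" lives entirely
# in the client, which appends every exchange to a transcript and resends
# the whole thing on each turn.

def llm(prompt: str) -> str:
    # pretend model: stateless, sees only what's in the prompt it's given
    n = prompt.count("USER:")
    return f"I can see {n} user message(s) in my context"

transcript = []

def chat(user_msg: str) -> str:
    transcript.append(f"USER: {user_msg}")
    reply = llm("\n".join(transcript))       # entire history, every time
    transcript.append(f"ASSISTANT: {reply}")
    return reply

chat("hello")
print(chat("still remember me?"))  # I can see 2 user message(s) in my context
```

Swap in a real endpoint for `llm()` and this loop is, to a first approximation, what every chat frontend and coding agent is doing, which is why long sessions get slow and expensive.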

Complexity: R packages implementing an algorithm are a decent sweet spot for something like this. The R packaging system is designed to insulate the REPL from shitty coders who understand things about statistics. The context window is never going to be enormous; it’s generally a couple hundred to a thousand lines of code accomplishing a well-defined numeric task.

Productivity thoughts:

One thing which is for certain: claude-code isn’t replacing anyone’s job. Anthropic’s headcount isn’t getting smaller. The good thing about using a tool like this is that it has low cognitive overhead; I have to figure out how to constrain a mildly retarded computard helper and make it do the things I actually care about. Once I’ve read the paper or glanced at the original source I have a fair idea of what I want the result to look like, and I have to break the task down into something a retard could understand. This is something I do for myself already (being retarded 👍), though the degree and quality of my personal retardation is considerably different. I also have to debug the result afterward: there will be a lot of bugs, whereas writing code interactively is a kind of online debugging. But it is useful enough, and does things I find onerous and unpleasant in a relatively painless manner, so I’m gonna use it. Sort of like an employee, yes: but a bad employee. One you can’t trust with anything important, and who takes longer at accomplishing tasks than doing them yourself. People who trust vibed code with important things, well, rotsa ruck to you.

There’s a hidden cost to this sort of thing. Because you can write a bunch of code without burning up your precious brain-sugars, you will write a bunch of code. Now you have a bunch of code of dubious utility. In my case, I’ve been very careful to not engage in writing code from papers or translating from python or whatever unless I was pretty sure there was paydirt. Now I’m gonna do it more often. While it feels non-tiring to do this sort of thing, it still takes a nontrivial amount of time, and an even more nontrivial amount of time to evaluate the algorithms the LLM made for me. Maybe I should be working on something else?

For a trivial example: I just spent a couple of weeks fooling around with this nonsense. I have one machine-generated R package of marginal utility to my actual project to show for my troubles, as well as a much better understanding of the abilities of LLM coding assistants. This is absolutely abysmal from a productivity point of view. Lines of code generated looks amazing, but I don’t get paid for lines of code. “Maybe it will pay off in future productivity,” but that sounds an awful lot like the sales bilge on the tin from vendors of these things. The real-world results indicate otherwise. Economists are even starting to notice the Solow paradox, aka the observation that ladies with a rolodex, telephone and filing cabinet were about as economically efficient as putting everything online and in databases.

Consider my likely trajectory with this crap: I’ve already dumped $2200 into a Claude membership and a new piece of hardware to run qwen3-coder for me. I’ll have to configure and maintain that piece of hardware, burning more real-world time, plus the ongoing cost of Claude if I continue the membership. I’ll also burn real-world time coding up random ideas I would have ignored in the past, or only approached cautiously. Just like putting the internet on my computard, it will open up vast new avenues for wasting time, rather than keeping me focused on my pursuit of actually economically productive goals. Is it a win or a loss? I can’t tell. Still gonna use it, but cautiously.

https://github.com/locklin/vibe-coding-experiments

Conditional probability: an educational defect in Physics didactics

Posted in physics by Scott Locklin on January 16, 2026

Conditional probability is something physicists have a hard time with. There are a number of reasons I know this is true. Primarily I know it from my own experience: I had a high-middling to excellent didactic experience in physics, and was basically never exposed to the idea. When I got out into the “real world” of, say, calculating probable ad impressions, this concept became of towering importance. It took me a while to grasp it, and I still occasionally struggle with the idea, but it’s actually pretty simple.

What is the probability a man is over 6′ tall? Well, in the US, you look at the normal distribution and find it’s about 14%. If you know both his parents are 6′ tall, the number is higher. If both his parents are 5′ tall, the number is lower. That’s a practical example of conditional probability. Making it super concrete, imagine you have a deck of cards. The probability of drawing an ace is 4/52. The probability of drawing an ace given (conditional on) 10 cards having been drawn with no aces is 4/42. The probability of drawing an ace given that you pulled 10 cards and two of them were aces is 2/42. You can do it with urns or dice or whatever; make yourself happy with your favorite example.
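The deck example can be worked exactly; a trivial sketch:

```python
from fractions import Fraction

# Conditioning just shrinks the sample space: the probability of drawing
# an ace is (aces remaining) / (cards remaining), given what's been drawn.
def p_ace(aces_left, cards_left):
    return Fraction(aces_left, cards_left)

print(p_ace(4, 52))  # 1/13, no information
print(p_ace(4, 42))  # 2/21, conditioned on 10 non-aces drawn
print(p_ace(2, 42))  # 1/21, conditioned on 10 drawn, 2 of them aces
```

Same four aces, three different probabilities, depending entirely on what you condition on.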

Statistical mechanics seems like where you should learn such things in physics, since we have no independent probability theory classes. I looked in Reif and Ma, the two books I learned statistical mechanics from. Reif doesn’t have the concept in the index, though he does mention conditional probability in his treatment of Markoff processes and Fokker-Planck. Ma only mentions it to argue that he doesn’t need it to teach statistical mechanics (later bringing it back in various places in a sort of ad-hoc way: I shouldn’t have slept in so much in that class). Ma even manages to avoid mentioning conditional probability in his treatment of Fokker-Planck, a considerable intellectual achievement for a set of equations for the calculation of a conditional probability. As such, most physicists end up thinking of probabilities as funny sorts of ratios that must add up to one, which is right for a lot of cases in physics, but not correct in the general sense. Most classical statistical physics done with canonical ensembles (aka most of it) assumes we can ignore conditional probability. Stuff like non-equilibrium thermodynamics is going to contain a lot of conditional probability, since it is dynamic and one-way in the same sense as the above card game. Our one example of a non-equilibrium thermodynamic relation which rises to the level of a law, the Onsager relations, certainly uses conditional probability, though Onsager himself never mentions it explicitly. The fact that he never uses the words, nor are they used in didactic explanations, probably keeps physicists from having a good think about the implications of conditional probability in this and other places. Out of sight, out of mind.

There are more pedestrian examples of physicists missing out on conditional probability; I’ll list a couple below:

Jung/Pauli synchronicity. When I was a young pot-smoking man, I read with great interest a book on the correspondence between Jung and Wolfgang Pauli on the subject of synchronicity. If you’re unfamiliar with the topic, the following clip from Repo Man explains it well: lots of weird coincidences happen, and our brains ascribe meaning to them. Feels a lot like psychic powers or something. The reality is, the otherwise incredibly meticulous Pauli didn’t know enough about conditional probability, even to the level of understanding the trivial Birthday Paradox. It’s all conditional probability: it’s only surprising because our brains don’t intuitively grasp how conditional probability works. The brain observes many things in a short period of time; if some of them happen to overlap in a conditional way over a human-consciousness-tier period of time (minutes, hours, a day or two), the brain flags it as something significant, even when it’s entirely expected, like a group of 23 people being more than 50% likely to contain a shared birthday. Pauli was a lot smarter than me; arguably smarter than any living current-year physicist whose name isn’t Roger Penrose, yet he missed this obvious thing. Probably because his life was a mess and he was drinking too much, but also because he was probably never exposed to the idea in school or anyplace else.
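The Birthday Paradox itself is just a short chain of conditional probabilities, each new person’s birthday conditioned on all the earlier ones being distinct:

```python
# P(no shared birthday among n people) = product over i of (365 - i)/365,
# where each factor is conditional on all previous birthdays being distinct.
def p_shared_birthday(n, days=365):
    p_distinct = 1.0
    for i in range(n):
        p_distinct *= (days - i) / days
    return 1.0 - p_distinct

print(round(p_shared_birthday(23), 3))  # 0.507 -- better than even at 23 people
```

Nothing mystical in it; the surprise is purely that brains don’t multiply conditional factors intuitively.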

The Fermi Paradox is a case where a Nobel-prize-winning physicist kind of left important conditional-probability aspects out of a model. As we all know, it turns on a calculation of the probability of there being other forms of intelligent life in the universe, based on approximated probabilities. The Drake Equation lists the number of stars, the approximate probability of a planet in the habitable zone, the age of solar systems, the probability of life, intelligent life, civilizations, civilizations with space travel, etc. In the end you sum things up by multiplying all the numbers together, and come to the conclusion that there must be intelligent life which we should be able to observe or which should have visited us, or else there are hidden and depressing dangers which wiped out all these spacefaring alien cultures. If you look carefully at the calculation, you might notice it doesn’t use any conditional probability, and probably elides over some important conditional probability. For example, most species go extinct in a way that fits a survival model; there’s no reason to think intelligent ones have any special advantages, and lots of reasons to think any sort of megafauna, intelligent or otherwise, is at least as likely as any other species of megafauna to go extinct over time. This is just one of the conditional-probability factors at work here. Though maybe earths are just rare, or intelligent life is unlikely in conditions where it might discover electricity (aka aquatic life). Conditional probability isn’t necessarily the right tool for a quick look at orders of magnitude, but it is conspicuous for its continued absence in a calculation which heavily implies it might be useful.
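To make the complaint concrete, here’s a toy version with entirely made-up numbers: a Drake-style product of marginal probabilities, versus the same product with one crude conditional factor bolted on — the chance a civilization is still around *given that* it arose at some point in a long window:

```python
# All numbers below are invented purely for illustration; the point is only
# that a single conditional factor can move the answer by orders of magnitude.
stars = 1e11
p_chain = 0.2 * 0.1 * 0.01 * 0.01   # habitable * life * intelligent * radio (made up)
naive = stars * p_chain             # Drake-style: multiply the marginals

lifetime = 1e4                      # years a civilization survives (made up)
window = 1e9                        # years over which one could have arisen
p_alive_now = lifetime / window     # crude P(still around | it arose at all)
conditioned = naive * p_alive_now

print(f"{naive:.0f} vs {conditioned:.1f}")  # 200000 vs 2.0
```

Five orders of magnitude from one survival term; which is roughly the size of the hole the unconditioned calculation leaves open.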

The thermodynamic arrow of time. The arrow of time is considered a root problem in physics. In microscopic classical physics there is no obvious arrow of time; the equations work the same way backwards as forwards. Yet you can assemble the microscopic equations into large ensembles and get the very irreversible laws of thermodynamics. Watanabe wrote an important paper on this subject in 1965, where he noticed that we leave out the conditional probabilities when formulating the statistical-mechanical ensembles we use to calculate things and derive the thermodynamic relations which make things like steam engines possible. Watanabe’s paper is influential with people with good taste, but has mostly been ignored. Certainly ignored in didactics, and often disputed for reasons which remain obscure to me. Rovelli and friends, for example (linked above), think it’s a bad argument for various fiddly reasons which make no sense to me, but the idea of using conditional probability to ascertain where the arrow of time comes from seems obvious. Of course I don’t know how to do it; I’m a mere statistical dabbler. Physicists resist this with all their might; you can find otherwise obviously intelligent people saying, effectively, “it just isn’t, OK.”

My favorite potential example of this is E.T. Jaynes’ idea that the mysteries of quantum entanglement go away when you think about conditional probability. I like this one a lot. Mostly because it dispenses with all the psychic-powers quantum mysticism that has sprung up around the ideas of quantum mechanics. Also because it dispenses with quantum computers, which are both obviously fake and retarded. But mostly because Jaynes is the patron saint of physicists who make the jump to data science, and so was uniquely qualified to bring this sort of thing up. Data science people have to know all about conditional probability: that’s pretty much what they’re doing, all day, every day. If nothing else, the fact that the main engagement with this idea in the literature ends up agreeing with it, rather than deboonking it, kind of indicates that the conditional probability is weak among physicists. That’s not to say Jaynes was right, but the lack of informed argument against him indicates a weakness in the topic of conditional probability. If indeed Jaynes’ ideas turn out to be true (I’m in no position to adjudicate), this example will be held up by some future Thomas Kuhn type of thinker as a spectacular example of a field of very smart people deluding themselves with didactic deficiencies, mathematical ignorance and group-think. As Mencken put it:

The liberation of the human mind has never been furthered by such learned (pedant) dunderheads; it has been furthered by gay fellows who heaved dead cats into sanctuaries and then went roistering down the highways of the world, proving to all men that doubt, after all, was safe – that the god in the sanctuary was finite in his power and hence a fraud. One horse-laugh is worth ten thousand syllogisms. It is not only more effective; it is also vastly more intelligent.

As an aside, I found another contemporary researcher who seems to take the conditional-probability approach to getting rid of quantum woo. I haven’t read his papers in detail, but they seem to be thoughts along the same lines as Kracklauer and others mentioned in the previous article. It’s entirely possible that entanglement is exactly what Scott Aaronson thinks it is, but considering that its one application has thus far been useful only for pumping up fraudulent penny stocks, and considering the above, it wouldn’t surprise me if the big wrinkly brains got this one wrong.

I suppose statisticians also have a hard time with conditional probability, Simpson’s “paradox” being a prime example and Berkson’s paradox a less known one. Contemporary statistical practitioners aren’t supposed to be deep thinkers though, so they get a pass.
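Simpson’s “paradox” fits in a few numbers, using the classic kidney-stone treatment data (Charig et al., 1986): treatment A wins within each stone-size subgroup, yet B wins when the groups are pooled, because stone size confounds which treatment gets used. Drop the conditioning variable and the conclusion flips:

```python
# (successes, trials) for two treatments, split by the confounder
data = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

def rate(s, n):
    return s / n

# A beats B within every subgroup...
for group, d in data.items():
    assert rate(*d["A"]) > rate(*d["B"]), group

# ...but pooling over the confounder reverses the ranking
pooled_a = (81 + 192, 87 + 263)   # (273, 350)
pooled_b = (234 + 55, 270 + 80)   # (289, 350)
assert rate(*pooled_a) < rate(*pooled_b)
print(f"A pooled {rate(*pooled_a):.0%}, B pooled {rate(*pooled_b):.0%}")
```

The resolution is pure conditional probability: the subgroup rates are conditioned on stone size, the pooled rates aren’t, and only the conditioned ones answer the question you actually care about.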

Optimizing for old battles

Posted in econo-blasphemy, health by Scott Locklin on January 7, 2026

About 3/4 of our management expertocracy is optimizing for old battles. It’s a pattern which is pervasive in Western Civilization, which is one of the reasons everything is so weird right now. Gather together a group of bureaucrats to solve a real problem, and 50 years later the bureaucracy is still there doing … things. Things which are probably not important or even helpful. New people get hired to work on new things, and the old fungoid bureaucracy is still there doing things which may or may not be helpful.

As an example, it is bizarre to me that people want to genetically engineer rice to produce vitamin A. Also that USDA-approved rice is required by law to be “fortified” with a bunch of crap nobody needs. Dealing with the latter: nobody in the US needs “fortification” in their goddamned rice or anything else. Most people in the US are over-provisioned with nutrients, and those who aren’t can take a goddamned vitamin pill. In particular, adding iron to rice is fucking insane. Men do not need iron added to their diet; they get enough from the meat, eggs or legumes they eat. There’s reason to believe the iron in USDA fortificants in particular is dangerous: it’s not something humans evolved to eat, and it’s not the same chemical as exists in actual food. The other shit, vitamin A and some B vitamins: vitamin A might also be … suboptimal, and I don’t want that crap in my food. Wash your rice, fellow Americans. It removes some of the arsenic, but mostly it removes the slop the vendors are required to add to the rice. Genetically engineering rice to produce vitamin A; what could go wrong? Considering the recent track record of “muh scientists,” it seems like a lot could go wrong. These substances were added to rice and flour back in the day because people didn’t eat much of anything else. It’s an 80-year-old health intervention; literally something we did in WW2 to help the soldiers and imposed on the colonies afterwards. Can we revisit this idea? I don’t think it’s helping, and it might be hurting people.

Folic acid is another, possibly even more alarming nutritional example. The US government has mandated (since 1998) that it be put in stuff like cereal and bread. The idea is to prevent folate deficiency, which can cause neurological issues; in infants, folate deficiency can cause neural tube defects, a rare and awful condition. The problem is that folic acid and folate are different substances, and they behave differently in the human body. Folic acid does not exist in nature at all; only in the test tube and in “fortified” American grains. It’s so different from natural folates that it is used to induce kidney damage in animal experiments. Folic acid needs to be metabolized in the body into folate, and one can develop actual antibodies against it, which cause problems with the folate receptor. Autistic kids have a lot of these antibodies fiddling with their folate receptors. This supplement came about because of experiments on rats, which process folic acid differently from humans; a fact which wasn’t figured out until 2009, 11 years after the mandates (which have since spread worldwide). It was seen as a harmless addition and an unambiguous public health win, but nobody has bothered thinking about whether there might be problems with this chemical, despite all the behavioral and health problems that have sprung up since the stuff was mandated in the food supply. This isn’t something I’ve fully figured out, and I wouldn’t stake my life on the idea, but it looks like it could be bad, and it is unambiguously clear that the public health organizations are determined to put this bullshit in everyone’s flour, with no thought for whether it might actually be harming more people than it helps. Concerned citizen scientists have a website you can look at. There’s also a video including Covid Grandpa which made me aware of it.

Cholesterol: there is a fairly strong correlation between heart disease and high levels of LDL cholesterol. Unfortunately, there is also a fairly strong correlation between long life and high levels of LDL cholesterol when the patient is older. What does this mean? The standard doctor thing is “cholesterol bad,” giving a number of interventions which may or may not marginally increase lifespan while having terrible side effects. They tried another intervention recently: CRISPR gene therapy to reduce cholesterol. That one is unambiguously bad; one of the participants in the trial has already died. The reality is, various bureaucrats have decided cholesterol is bad, and are managing the number. Actual scientists driven by truth-seeking are still puzzled by this correlation, and notice other things are better predictors of cardiovascular disease. For example, the ratio of HDL to triglycerides: lots of HDL is good, lots of triglycerides is bad. Most people with high LDL have a lot of triglycerides because they’re sustained on a diet of sugar and grease, so this correlation could be measuring the same thing. I sometimes have high LDL (mostly when doing something keto-like with low fiber, a known phenotype which doesn’t come with increased CVD risk), always high HDL and never high triglycerides. Also no heart disease in my family. Other scientists notice a particular kind of heart disease is anti-correlated with cholesterol. Also dementia, which ought to be disturbing to anti-cholesterol bureaucrats, but somehow isn’t. Others notice CVD’s biggest risk factor is actually insulin resistance. There are other ideas; APO-B is another number people take drugs to control. Same problem as LDL: you’re controlling a number correlated with a risk, not the risk. When you look at the risk after you control the number, not so much. Yet we still have imbeciles talking about putting statins in the goddamned water.
All public health officials talking about putting anything in the food or water should be machine-gunned into a ditch, and the remaining ones need to look at the current state of the research, with some consequences (perhaps shipping them to El Salvador to aid with their public health problems) if they get it wrong. Of course this will never happen, as the unseeing bureaucracy is dedicated to number go down. The reality is, LDL is correlated with a whole bunch of other stuff, and the metabolic dysfunction that causes heart disease isn't caused by the presence of LDL. They need to go find the discriminating factor here, and treat that. Dispensing statins to everybody isn't useful.

Consider another example: pollution from cars. Particulates, unburned hydrocarbons, nitrogen oxides, carbon monoxide: 60s cars were farting out some nasty shit (including lead vapor). You could kill yourself idling a car in the garage back in the day. Car exhaust is now pretty clean; even smoky diesels are now barely smoky. The bureaucracies continue to drive these numbers down: new US standards are coming again in 2027, this despite car exhaust being quite breathable now (don't try this at home). The relentless pressure to build more electric vehicles is also related to this. Meanwhile, tires and braking material leave obvious layers of dirt everyplace near cars being used. You breathe that shit; it's not good. Car tires are probably the biggest source of microplastics in people's lives. Braking material is basically asbestos (ceramic brakes are floated as a longer lasting alternative, but nobody knows if the dust they make is better or worse - there's less of it anyway). If you live in a city in southern Europe you're also surrounded by mopeds which have no emission laws associated with them: or if they do, I don't know how they manage to smell like 1960s era car exhaust. Yet the car makers are required every couple of years to reduce their pollutant levels: they're not doing anything about the big problem, but making everyone's lives worse optimizing on the old problem.

Chemicals in the environment: I think it's great we stopped pumping heavy metals and other chemical waste into rivers to make newspaper or whatever. This is a real achievement and has had tremendous long term health benefits. Unfortunately, other regulatory agencies allow companies to put nasty stuff in your clothes and on your skin; in food containers, on frying pans: they even require manufacturers to put "fire retardant" chemicals in your furniture and in children's clothing. You can't put it in the ground or in the water, but you have to put it in furniture and children's clothing; mostly because of an old California law. This stuff is dangerous; it's probably a big chunk of why men's testosterone and sperm counts have been declining. Back in the 70s when California dipshits forced manufacturers to start adding this crap to furniture, it probably seemed like a good idea. It's not a good idea. Of course, like all shitty ideas from California, it's now a federal standard: the U.S. Consumer Product Safety Commission (CPSC) is in charge. Supposedly they're investigating the flame retardants, but I'm not optimistic they'll be removed from our lives. The bureaucracy is concerned with burning furniture, which as far as I can tell only happens when dipshits fall asleep smoking on flammable furniture. Why not just ban flammable furniture? You could dump hot coals on any of my furniture and pretty much nothing would happen: no weird chemicals needed.

These problems all have their origins in bureaucratic heat death. When these bureaucracies were created, they were innovative and productive organizations. I know it's hard to believe, but the USDA, the FDA and the EPA were once as innovative and productive as early-years NASA. Now … not so much. People have been complaining about PFAS and stuff like fire retardants to the EPA for decades. But the squirrelly numskulls who warm the chairs there are too busy doing the crap they've been doing since Nixon created the agency by fiat in 1970. Optimized for old battles. Most of which are already won.

Winter Q4 2025 books

Posted in Book reviews by Scott Locklin on December 27, 2025

The Kaufmann Protocol Sandra Kaufmann. I saw this lady on youtube somewhere, and she seemed half nuts, so I bought her book. Mostly it's a rough explanation of some of the human biological system and a list of supplements that are good for stuff that breaks down as you get older. It listed some things I didn't know about, but they weren't very convincing anyway. Astaxanthin, carnosine, curcumin, green tea (EGCG) are all familiar and things I put in my gob on a regular basis. Apigenin, aka chamomile tea, was perhaps the most interesting thing she was touting. Quercetin: great for when you have a cold. Alpha lipoic acid is something I used to take regularly when I was bulking to avoid getting too fat: might be worth tossing in the supplement stack on occasion. A lot of longevity nuts take metformin, and ALA has similar effects without the side effects. I am not a fan of resveratrol; remember how this was a big thing a couple of years ago, then it went away? Pepperidge Farm remembers. Anyway she's big on this and I am not, so this made a lot of the other suggestions rather less interesting.

Plutarch’s Lives v2 (Dryden translation). I think I mentioned reading volume 1, but I think I just said “read this book.” Either way, I regret not writing notes about the individual lives. If you read Homer’s Competition you’ll get an idea of why you should, but you should probably just read Plutarch. Because I’m a foppish literary man, I have an ancient leather bound 3 volume set, the translation of which dates from 1683. I think Dryden has its charms, but probably you should read something else. Clough came about 170 years later and has a decent reputation. North was a century earlier than Dryden: Shakespeare’s Plutarch: probably best avoided, as it was a translation of a bad French translation. People go into transports about the recent Penguin and Oxford translations. I’ve sampled the Penguin edition, which is incomplete. Both of these are broken down into Greek and Latin lives and leave out the comparison passages, and with them the contrast of having the parallel lives one after the other, which kind of defeats the purpose of the book, though they contain extensive footnotes. Anyway, Pyrrhus: a very ferocious man, though a bit of a cypher. He was almost the next Alexander. He’s compared to Caius Marius, who was a very strange man from Rome. Plutarch’s reading of his life doesn’t differ much from Sallust’s accounts. He was very political; elected Consul 7 times - more than anyone else up until then (I assume the Caesars may have beaten him). He was also kind of a communist who ruined the country by allowing poor men into the army (BTW the Wiki page on the Marian reforms claiming it never happened is horse shit: Plutarch talks about it for chrissakes). I don’t know why Plutarch compared these two men, as his comparison text didn’t make it to current year. Lysander was a great general of Sparta during the Peloponnesian war, probably the man most responsible for Spartan victory.
He was master of the world outside of Sparta, where jealous and lesser men who were kings treated him like a manservant. He set up fierce and nightmarish oligarchies of his friends in conquered cities; kept no war booty for himself - a man both corrupt and extremely honest and honorable. Sulla, rival of Marius, was compared to him. Sulla’s the guy who broke the republic, using the weapon of poorfag soldiers loyal only to their commanders against Marius himself. Kind of funny he was on the Optimate side, but used Populares against themselves; poorfag Romans were apparently retards. It’s a bit of a chaotic history, as Sulla was also fighting various foreign wars; Mithridates, Jugurtha, etc. When he marched on Rome, he bathed it in rivers of blood, then retired to hang out with his actor and cross dresser frens, before dying of a horrible disease. They were compared by Plutarch as they were both self made men. Cimon of Athens: reputed to be a degenerate when he was younger, he grew to be a leader of high virtue and generosity, donating much of his wealth to the good of the state and dying relatively young in its service. Lucullus started out sober and became more lavish in his indulgences as he got older, but he was also a skilled general and a tireless worker for the good of Rome. Both men fought in the orient, and left their conquests unfinished, though for different reasons.

The Young Girls by Henry de Montherlant. Montherlant was in Ernst Junger’s circles during the Paris occupation; his pappy was so conservative he wouldn’t allow a telephone in the house. I basically picked the book because of this, and because wakipedia said it was the world’s most offensively misogynistic series of novels (all four novels were in this book). I was disappointed in this; it’s more like Dangerous Liaisons for sleazy 1920s novelists, except more psychologically astute for modern people. The protagonist is a womanizing writer, so many assume he is a stand-in for the author (it seems unlikely). There are four major “girls” in the story. One is an aging bluestocking from the provinces who writes insane obsessive letters to the protagonist, despite his insistence he just wants to be friends. Almost certainly an example taken in part from life; I’ve known women like this. Another is a religious woman from the provinces who seemingly confuses her religious ecstasies with an obsession with the protagonist (who politely suggests she join a nunnery, despite his being mostly atheistic in temperament). Their interactions with the protagonist are mostly epistolary; hence the comparison to Dangerous Liaisons. The other two: an empty headed beautiful bourgeois woman who takes up most of the drama, and his Moroccan mistress. This is a painful book in that a lot of it is ridiculously awkward and realistic, including the internal dialog of the protagonist. The protagonist is a cad, in a way any man who has had a sex life can uncomfortably relate to. He’s also kind of a narcissistic imbecile, but at least he is hilariously misanthropic, which is most amusing in the scenes where he takes his women out on “dates” - his takes on the normies around him are hilarious. It’s a very impressive novel for its psychological depth and, as a result, somewhat painful reading.
I went and looked at what the dimwit Simone de Beauvoir said about all this in her Second Sex book; I thought she’d be triggered by the psychological nudism, but she simply didn’t get it. The entire essay is just her making hen-like outraged noises, and showing that she had at least thumbed through his books enough to name some of the characters (I don’t think she read any of his books to completion). Orange man bad! While the women don’t come off well in the thing, the male protagonist is … far from a hero or stand in for the author. Montherlant is obviously an aristocratic misanthrope. Everyone in it is vile, and everyone in it is a believable and ordinary human character: quite a neat trick to draw me through 650 pages of people being cringe.

A History of Venice by John Julius Norwich. I read this book about 30 years ago after my first visit to the Lagoon. I’ve read it again at least once since then; it’s an eminently readable history book; like reading a novel or watching an engaging documentary. Anyway, it’s been 30 years and I visited again, so I figured I’d give it another go-through. La Serenissima had its origins in the Visigoth invasions of Italy: in many ways it was as much a continuation of the Roman Empire as Byzantium was. By 800 or so it was a regional power. Its story is a long litany of ambition and trade: one of the great things about the Venetians was their ability to combine trade with cunning and conquest. It was the archetype of the seagoing Merchant Republic, longer lived than Athens, and their system of government is probably the largest influence on the American system. This is a book that rewards re-reading; I remembered the problems with the Council of Ten and various ad-hoc councils and their secret police during covid times. This read, I was particularly struck by how quickly the decline of Venice happened. While they began the economic slide down with the Portuguese discovery of the Southern Passage around 1500, they were still conquering territory very late in the game, in the 17th century, under the Captain-Generalship and Dogeship of Francesco Morosini (ending in 1694). This despite out of date naval technology and poor economic prospects compared to their glory days; they were still punching well above their weight with skilled leadership and diplomacy, though their final leadership was completely worthless. For centuries they had been renting foreign shipping rather than developing modern armed merchant ships in the Arsenal as a thriving power would have. The leadership figured, after all, they were still making money (shades of the 90s deindustrialization of America) and enjoying the partying which became the national pastime.
The leadership class at the end were almost as clown-like as current year American leadership. Silvestro Valier, who followed Morosini, was still a wartime Doge, but he was elected mostly because he was rich and knew how to throw a good party. Others who followed him presided over territorial loss, scandals, and mosque building in the city. One of the late Doges was a scholar-poet, a fellow of the Royal Society and a friend of Isaac Newton: admirable qualities, but not leadership qualities. This was the era of Casanova, who probably embodied the degenerate morals of the time; adventures, scams, womanizing and partying preferred to mercantile conquest. The penultimate Doge was moderately competent and an actual noble, but was married to a ridiculous Greek acrobat he met on a trade mission to Istanbul: something unimaginable in previous eras. The final Doge was a peasant (a nobleman on paper through bribery in the previous century) and, while not married to an acrobat (his wife was of appropriate station, though literally insane) and a reasonably competent administrator, he had the leadership skills of fermenting cabbage, and so when Bonaparte showed up in the 1790s, it was all over.

Armor Building Formula II by Dan John. He wrote another one while I was finishing the last one. Mostly this one is answering the numerous questions which come up on his podcast and in his coaching practice. Lots of good ideas for things to mix up with the ABC complex, also various ways of getting through it, programming it with other workouts, assistance workouts, using it for fat loss, using it 5x a week instead of 3x a week and so on. Also some interesting charts for what your max should be at various press weights; useful for buying a new kettlebell without too much guesswork. Worth it if you’re doing ABC complex training; dude gives a lot to the community, give him a few bucks. He rambles on a bit, but all of his insights are worthwhile.

The Victorian Amateur Astronomer by Alan Chapman. I know it’s hard to believe, but there was a time when science was not an elaborate welfare scheme for PhDs looking for government baksheesh. Oddly enough, people made a hell of a lot more technological and scientific progress when the government wasn’t paying for it. Chapman takes us from around 1820 to 1920 in Great Britain, where an odd assembly of landed gentlemen, beer brewers, street lecturers, gentleman astronomers’ gentlemen, ironmongers, blacksmiths and even humble working class people (and in one case a homeless bum who was pals with George Orwell) did a ton of important astronomical research and discovery, including innovations in optics. This was in the UK, mind you, where it is cloudy and rainy most of the time, but the gentleman amateurs still thought it was an activity worthy of devoting a lot of time and money to. FWIIW there were a few professional astronomers during this era; for example George Airy. Most of these guys relied on other incomes as well, also donations from well-to-do enthusiasts. The UK used to run a tight ship. Professional astronomy was also oriented towards navigation rather than discovery. The amateurs were generally the great discoverers of the era. It’s an interesting group of people; the most interesting, almost unbelievable, were the working class people doing it as a hobby. The British of the 19th century were a different species from the present inhabitants.

Titus Andronicus Shakespeare. I’ve seen the Anthony Hopkins movie like 10 times; very fun movie with excellent aesthetics. Everyone hates on this play, down to denying Shakespeare actually wrote such a stinker, mostly because it is ridiculously bloody and violent. This was a common and crowd pleasing genre of the time; more or less a Senecan tragedy. I found it a fun read; I think the right way to think about it is that it’s like an Arnold action movie from the 80s. Popcorn Shakespeare. Better than most of the comedies anyway. Apparently he got the story from an old Latin collection of tall tales, the Gesta Romanorum, which makes me curious about obtaining the book. Existing translations are a bit steep at 100 bucks where I live, so maybe I’ll hold off until I learn Latin or dig up a used copy. Seems like a Decameron or Arabian Nights style collection of stories. Such things are often useful in decoding more recent cultural production (which often copies such old ideas wholesale).

A Short History of Naval and Marine Engineering by Eng. Capt. Edgar C. Smith, O.B.E., R.N. More or less a history of powered ships from the inception of steam ships to the 1920s or so. A lot of engineering autism involving boiler design, various kinds of prime movers, how they were supposed to be operated, substances used to lubricate them, coaling techniques, armor, tables of ship capabilities, cutaway diagrams. Lots of great diagrams. You can find it online somewhere: I did.

Letters from a Stoic Seneca. Reading this in preparation for reading a couple of Seneca’s plays, which heavily influenced Elizabethan playwrights. I’m broadly sympathetic to the stoicism of the ancients, but confess I find Seneca to be a moralizing twatwaffle about half the time. I haven’t actually finished, as a lot of it is so painful as to negate any of the good stuff.

Invention and Innovation: A Brief History of Hype and Failure by Vaclav Smil. Smil takes us through a number of historical examples of inventions: ones which helped but later turned into problems (leaded gasoline, CFCs, DDT), ones which were supposed to set the world on fire but didn’t (airships, supersonic flight, fission reactors), and ones which sound amazing but we just plain can’t figure out how to do (nitrogen fixing wheat, fusion reactors, hyperloop). Smil, because he actually studies the history of real technologies and how they work, makes some disapproving sounds at the AI hypewagon spinning up at the time he wrote this (2023), and reiterates that the idea of continual progress is a total myth. There was a time of enormous technological progress which actually changed how people live: it’s mostly been over for decades. People still act as if we’re still being given marvels like refrigeration and gas turbines, when all we get is twitter (founded 2006) and radiophones with longer battery life that you can use to call a taxi without talking to anybody.