The combination of social media, a highly networked and concentrated depositor base, and technology may have fundamentally changed the speed of bank runs.
The run was nearly instantaneous compared to how such large-scale withdrawals unfolded in the past. And the correlation
between social media and the bank run was so strong that an academic paper confirmed it with data:
During the SVB run period, banks with high pre-existing exposure to Twitter lost 4.3 percentage points more stock market value.
ref: Social Media as a Bank Run Catalyst
But with pervasive access to agentic AI, especially in the financial sector, which has been innovating with automated trading for years,
people are rightly pointing out that the next financial disaster could unfold in a fraction of the time it took SVB to collapse.
I’d been hardcoding width and height attributes in my Hugo templates to prevent layout shift. It worked fine, but it was
tedious — every time I changed an image, I had to look up the new dimensions and update the template by hand.
Today, I had to add a new image to the sidebar, and I felt lazy enough to ask Copilot to find the dimensions for me and
insert them into the template. Instead, it did something unexpected: it used an odd Hugo function I had never seen, called
imageConfig.
Curious, I looked it up. I haven't kept up with Hugo's latest features over the last few years. The
embarrassing part is that, as it turns out, this isn't a new function at all. It was added in 2017 (!), but I hadn't heard of
it until now. I had no idea that Hugo could discover image dimensions at build time. Seems I really should read up on
everything Hugo can do.
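For anyone else who missed it: imageConfig takes an image path (relative to the project's working directory, as I understand it) and returns its config, including Width and Height. A rough sketch of what the template ends up looking like — the file path here is made up:

```go-html-template
{{/* look up the dimensions at build time instead of hardcoding them */}}
{{ $cfg := imageConfig "static/images/sidebar.png" }}
<img src="/images/sidebar.png"
     width="{{ $cfg.Width }}"
     height="{{ $cfg.Height }}"
     alt="sidebar image">
```

No more opening the file in an image viewer just to copy two numbers.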
I have wasted too many tokens getting AI editors to work well with Terraform code. The authoritative provider
documentation lives on a JavaScript-heavy website, which makes it impossible to feed to AI editors as a reference. Even
adding those links to Cursor's doc index doesn't work. So you get hallucinations and completely wrong code from even the
best models.
I have been using Helm charts, like everyone else, for the Kubernetes cluster in my homelab. Until a few months back, I
never gave a thought to the reliability of the Helm chart repositories I was using. And then the Bitnami news
dropped: they announced that they were going to stop supporting their Helm chart repositories.
Everyone has been scrambling to handle this situation, and most are settling on one of two options:
Vendor the sources of existing charts into your own Git repositories.
Use a Helm chart repository, paid or free, to mirror them in a more scalable way.
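For the first option, vendoring can be as simple as pulling and unpacking a chart into the repo. The chart below is just an example, and you would pin whatever exact version you already run:

```shell
# fetch the chart from the OCI registry and unpack it under charts/,
# so deployments stop depending on the remote repository staying up
helm pull oci://registry-1.docker.io/bitnamicharts/postgresql \
  --untar --destination charts/
```

Commit the result, and your cluster no longer cares whether the upstream repository exists tomorrow.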
I normally install Ubuntu on my Raspberry Pi machines because I am comfortable with its ecosystem. Most of the time
though, I have been using these boxes connected to my network via Ethernet.
Recently, I got a new Raspberry Pi 5, and as usual, I installed Ubuntu on it. This time I used the official HAT
to install the OS on an NVMe drive. The Raspberry Pi imager tool does a great job of setting up the machine with Wi-Fi
enabled. What I never paid attention to was how much the country setting in the options affects the Wi-Fi band.
After booting up the machine, I noticed that the Wi-Fi was stuck on the 2.4GHz band, which I found odd. What followed was a lot of detail that, as usual, I wish I didn't need to know but now had to. :(
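The short version, to spare you the rabbit hole: 5GHz Wi-Fi channels are regulated per country, so until the regulatory domain is set correctly the system tends to limit itself to the universally-legal 2.4GHz channels. On Linux you can inspect and change it with iw (the country code below is just an example):

```shell
iw reg get          # "country 00" means the unset/world regulatory domain
sudo iw reg set US  # example: set the regulatory domain to your country code
```

Setting the right country in the imager options would have avoided all of this in the first place.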
I like to use Homebrew on my Linux development machine as well, instead of random apt packages for common tools, which
may or may not be up to date.
One annoyance I found a solution for is getting bash completion for Homebrew commands to work on Linux. The
problem is that Ubuntu (and other Linux distributions) ship their own bash completion scripts for system commands, and
most bash completion setups check whether completions are already loaded before doing anything.
So if I have a line in my bashrc to load Homebrew's completions, it won't load, because it detects that the system's
completions are already in place. Specifically, it looks for the BASH_COMPLETION_VERSINFO variable, which the main
bash-completion script sets once it has been loaded.
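One way around it mirrors the snippet in Homebrew's own shell-completion documentation: source Homebrew's copy of bash-completion from your bashrc, falling back to the individual per-command completion files (which don't depend on that version check) when the main script won't load. Depending on your setup, ordering matters — this needs to run before the distro's completion setup gets its turn:

```bash
# from Homebrew's shell-completion docs: prefer Homebrew's bash-completion,
# falling back to sourcing each completion file individually
if type brew &>/dev/null; then
  HOMEBREW_PREFIX="$(brew --prefix)"
  if [[ -r "${HOMEBREW_PREFIX}/etc/profile.d/bash_completion.sh" ]]; then
    source "${HOMEBREW_PREFIX}/etc/profile.d/bash_completion.sh"
  else
    for COMPLETION in "${HOMEBREW_PREFIX}/etc/bash_completion.d/"*; do
      [[ -r "${COMPLETION}" ]] && source "${COMPLETION}"
    done
  fi
fi
```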
Reddit used a testing technique called “tap compare” for read migrations. The concept is straightforward:
A small percentage of traffic gets routed to the new Go microservice.
The new service generates its response internally.
Before returning anything, it calls the old Python endpoint to get that response too.
The system compares both responses and logs any differences.
The old endpoint’s response is what actually gets returned to users.
This approach meant that if the new service had bugs, users never saw them. The team got to validate their new code in
production with real traffic while maintaining zero risk to user experience.
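The steps above can be sketched in a few lines. The handlers here are made-up stand-ins for the real Go and Python services, and I've left out the routing and sampling machinery:

```python
import logging

logger = logging.getLogger("tap_compare")

def tap_compare(request, new_handler, old_handler):
    """Run both implementations, log any mismatch, always return the old result."""
    new_response = new_handler(request)   # new service computes its answer first
    old_response = old_handler(request)   # then the legacy endpoint is called
    if new_response != old_response:
        # mismatches surface to engineers via logs, never to users
        logger.warning("tap-compare mismatch for %r: old=%r new=%r",
                       request, old_response, new_response)
    return old_response                   # users only ever see the old behavior

# hypothetical handlers: the "new" one has a bug for request 3
old = lambda r: {"count": r * 2}
new = lambda r: {"count": r * 2 + (1 if r == 3 else 0)}
print(tap_compare(3, new, old))  # the buggy new response is logged, not returned
```

A production version would also wrap the shadow call in a try/except and compare only a small sampled percentage of requests, as described above.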
I was working away from home today, trying to push out changes to a bunch of my homelab servers. As usual I was using
Ansible, connected to the home network over Tailscale.
Normally I would just create a SOCKS/HTTP proxy to one of my home machines, set a proxy environment variable
like HTTP_PROXY, and most apps would just work. But Ansible doesn't seem to respect that environment variable.
There is an environment keyword that lets you set http_proxy variables, but that is for tasks executing remotely: they
can use those variables for commands that need to call out over the Internet. What we need instead is a way to reach the
target host in the first place.
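One escape hatch, sketched here, works at the SSH level rather than the HTTP level: tell Ansible's SSH connection to tunnel through the SOCKS proxy using OpenSSH's ProxyCommand. Something along these lines in inventory or group vars — the group name and proxy port are examples:

```yaml
# group_vars/homelab.yml (hypothetical group): route SSH through the local
# SOCKS proxy using OpenBSD netcat's -x (proxy) option
ansible_ssh_common_args: '-o ProxyCommand="nc -x 127.0.0.1:1080 %h %p"'
```

Since this rides on plain SSH options, it works for every module without touching the playbooks themselves.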