A hand holding a battery at the forefront. In the background is the topside of an Eaton UPS. The case is open. The battery label says "LEOCH DJW12-9.0 (12V 9.0AH)". The specs of the battery are as follows: - Standby use: 13.5-13.8 V - Cycle use: 14.4-15.0 V - Initial current: Less than 2.7 A The UPS label says “Eaton (Catalog No) 5E850iUSB-AU”. The specs of the UPS are as follows: - MFG ID: 9C00-53222 Rev: F0P - INPUT: 220-240V~50/60Hz, 5.6A, Ph+N+PE - OUTPUT: 220-240V~50/60Hz, 3.8A, Ph+N+PE - 850VA/480W - Protective class I - ICC: 3kA

A few years ago, I bought (with some misadventures) an Eaton 5E UPS. Three years later, after a power cut, it started beeping and sending alarms that the battery needed replacement.

Working on the assumption that LiFePO4 batteries are better, I decided to investigate whether I could determine a suitable replacement for the original. A few helpful people from the fediverse joined in with advice (thanks!).

tl;dr:

  • The biggest risk with LiFePO4 is fire: if the cells drop below a threshold voltage and are then recharged, they can ignite.
  • There exist LiFePO4 batteries with a BMS designed to make them drop-in replacements for Pb batteries.
  • The advantages of LiFePO4 are more charge cycles and lower weight.
  • Neither really matters for a static UPS, particularly given the increased cost and fire hazard.
  • I’ll use another Pb battery as a replacement.
Continue reading
Screenshot of a browser window. Under the URL bar, the UI of an Office suite is visible (menu bar, toolbar at the top, right sidebar, both with formatting options). The text in the document reads “Hey Look at me! I’m editing a document in my browser.”

In a gradual attempt to offer de-Googled services to people around me, I have recently been looking at options to replace GSuite. One requirement was for the system to integrate with my existing Nextcloud instance. A quick look around turned up ONLYOFFICE, which I started trying to set up. However, a bit of additional reading (and the fact that it now seems to be the default for Nextcloud Office) led me to switch to Collabora Online instead.

It was really easy, and it’s surprisingly snappy even running from a 12+ year-old server and a 14+ year-old laptop!

tl;dr:

  • Use the default docker image in the compose stack.
  • Copy the nginx config into the reverse proxy.
  • …?
  • Take a modicum of profit back from GAFAM!
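Putting the first bullet into practice, a compose service along these lines should do (this is a sketch based on the official collabora/code image; the placeholder domain, port binding, and TLS-termination flags are assumptions to adapt to your own setup):

```yaml
services:
  collabora:
    image: collabora/code
    restart: unless-stopped
    ports:
      - "127.0.0.1:9980:9980"   # only exposed to the local reverse proxy
    environment:
      # Regex of the Nextcloud host(s) allowed to use this instance
      # (placeholder domain).
      - aliasgroup1=https://nextcloud\.example\.com:443
      # TLS is terminated at the reverse proxy, not in the container.
      - extra_params=--o:ssl.enable=false --o:ssl.termination=true
```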
Continue reading
Screenshot of a terminal showing HTTP/3 support: ``` $ /usr/sbin/nginx -V 2>&1 | grep http_v3 ... --with-http_v3_module ... $ curl -V | grep 'http3' ... nghttp3/1.12.0 ... $ curl --http3 -kI https://localhost HTTP/3 200 server: nginx ... ```

I had an hour to lazily spare yesterday, and noticed with horror that my home server was using HTTP/2 like it was from the last decade. Huh!

Enabling HTTP/3 and QUIC in nginx is relatively straightforward, and well documented. This was also an opportunity to update my basic configuration.

tl;dr:

  • After turning http3 on, a new listen [::]:443 quic reuseport default_server directive needs to be added.
    • It can’t be the same as the ssl one, but a single IPv6 entry will also serve IPv4.
    • reuseport is necessary (once) to allow different workers to share the port. Otherwise, subsequent requests will intermittently fail to use HTTP/3.
  • QUIC is over UDP, so the port needs to be opened in the firewall. It is recommended to use the same port as for HTTP/2.
  • User-agents need to be told that the alternative service is available. One way to do it is with an add_header Alt-Svc 'h3=":443"; ma=86400'.
  • HTTP/3 requires ssl_protocols TLSv1.3.
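Putting those bullets together, a minimal server block could look like this (certificate paths are placeholders, and exact directive placement may differ with your nginx version and distro):

```nginx
server {
    # Regular TCP listeners for HTTPS.
    listen 443 ssl;
    listen [::]:443 ssl;

    # QUIC/HTTP3 listener: UDP, no "ssl" keyword; "reuseport" may only
    # appear once for a given address:port across the configuration.
    listen [::]:443 quic reuseport default_server;
    http3 on;

    # HTTP/3 requires TLS 1.3.
    ssl_protocols TLSv1.3;
    ssl_certificate     /etc/ssl/example.crt;  # placeholder paths
    ssl_certificate_key /etc/ssl/example.key;

    # Advertise HTTP/3 to user-agents for 24 hours.
    add_header Alt-Svc 'h3=":443"; ma=86400' always;
}
```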
Continue reading
A chart from Munin plotting the number of recent backup-files. From almost nothing, a spike up happened before week 37, but went back down as the system struggled to maintain sufficiently fresh backups. After week 37 (and an update to the plugin's config), a sustained number of fresh backup files is reported.

I’ve gone through a number of backup solutions over the years, from plain rsync to rdiff-backup. With periodic syncs of the backups to remote locations, this saved (most of) my bacon a couple of times. Until now, I was using Backup Manager, because it has the massive advantages of being available in Debian, supporting database backups out of the box, and being able to push the backups to remote storage. It also allows running custom scripts to cover additional data stores.

With a simple approach of making incremental tarballs periodically, it has worked well for many years. However, as my data got bigger (ca. 1 TiB), building the tarballs, particularly the masters, started taking a long time, on the order of multiple days, pegging the CPU at 100%. The backups also ended up using a lot of disk space, so a similar issue then followed for the remote uploads. As most of the data is rarely changing, this seemed like a lot of redundant work.

As I heard good things about restic, and it is also available by default in Debian and ArchLinux, I thought I’d give it a go. I wasn’t disappointed, and have now migrated to it! I have a nice 3-2-1 setup where backups are stored to a separate hard-disk in the same server, then synced out to S3, while remaining lightweight on the host system.

tl;dr:

  • Despite not being written in Rust (but Go), restic is blazingly fast.
  • Incremental backups don’t take a lot more space.
  • While it has native S3 support, it is also possible to simply aws s3 sync the local store, which saves on having to reread the indices remotely.
  • A bit of scripting is needed to get everything covered, but it’s not a lot of work.
  • It can still use Munin to monitor the local backup directory for freshness.
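The freshness check behind that last bullet boils down to counting recently-modified files in the backup directory. A toy version of the idea (this is not the actual Munin plugin, and the 26-hour window is a made-up threshold for a daily schedule) could be:

```python
import time
from pathlib import Path


def count_fresh_files(directory: Path, max_age_hours: float = 26.0) -> int:
    """Count files under `directory` modified within the freshness window.

    The 26 h default leaves some slack over a daily backup schedule.
    """
    cutoff = time.time() - max_age_hours * 3600
    return sum(
        1
        for f in directory.rglob("*")
        if f.is_file() and f.stat().st_mtime >= cutoff
    )
```

A Munin plugin then only needs to print this count and alert when it drops below the expected number of backup sets.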
Continue reading
Screenshot of a terminal showing a failed pytest run. ``` /usr/lib/python3.13/pathlib/_local.py:537: FileNotFoundError =========================== short test summary info ============================ FAILED test.py::test_lazy_writer[aaa-bbb] - AssertionError: assert 'aaa' == 'bbb' FAILED test.py::test_lazy_writer[None-bbb] - FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pytest-of-sht... ========================= 2 failed, 1 passed in 0.18s ========================== ```

Testing your code is advisable. I generally start with end-to-end integration tests, to ensure whatever I’m writing does whatever it’s supposed to do. However, as you get deeper into the details, more specific unit testing becomes necessary. Test doubles such as mocks and stubs can also help verify that the desired behaviours happen in corner cases that are harder to reproduce. Sometimes, though, integration and unit tests end up looking a lot like each other, which always makes me feel like they could be de-duplicated.

I recently came across a situation where I wanted to test some caching behaviour, to ensure that a call only happened if necessary. While it would have been easy to write a separate, dedicated unit test with a stub faking the state of the system, and a mock of the method to check whether it got called or not, it was easier to leverage the existing integration test with additional assertions.

In Python, unittest.mock.MagicMock can easily replace a function or an object’s method for the sake of checking call counts. However, it prevents the original behaviour from happening, which limits the usefulness of mocks in integration tests. It is possible to provide a side effect to the mock, so it does something useful. With some additional boilerplate, the original method being mocked can be attached as the side_effect of the mock, so whatever it does still happens.

tl;dr: I ended up writing a small fixture to avoid too much boilerplate.

from collections.abc import Callable
from unittest.mock import MagicMock

import pytest

@pytest.fixture
def active_mock() -> Callable[[object, str], MagicMock]:
    def active_mock(obj: object, method: str) -> MagicMock:
        """Mock a method without preventing its side-effect from happening."""
        original = getattr(obj, method)
        mock_method = MagicMock()
        mock_method.side_effect = original
        setattr(obj, method, mock_method)
        return mock_method

    return active_mock
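Outside of a pytest run, the same trick can be demonstrated in a few lines (Greeter is a made-up example class, not from any real test suite):

```python
from unittest.mock import MagicMock


class Greeter:
    """Toy class for illustration only."""

    def greet(self, name: str) -> str:
        return f"hello {name}"


g = Greeter()
original = g.greet
mock = MagicMock(side_effect=original)
g.greet = mock  # replace the bound method with the "active" mock

assert g.greet("world") == "hello world"  # original behaviour preserved
mock.assert_called_once_with("world")     # ...and the call was recorded
```

The mock records the call as usual, but because the original bound method is its side_effect, the real behaviour still happens and its return value is passed through.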
Continue reading
Screenshot of the Firefox Javascript console. >> (new TextDecoder()).decode(new Uint8Array(digest)); "\u0006C]�\u001c�X����^��.\u0011

For reasons, I was writing a partial OAuth client in VanillaJS. I assumed, and saw it pleasantly confirmed, that modern JavaScript had all that was needed for hashing and byte-string manipulations.

I was following the logic of another implementation I previously did of the same Authorisation Grant Flow. However, as I was nearing completion of the code, I was not able to successfully obtain my access_token, receiving 401s instead, telling me that my code_verifier was incorrect.

The problem was due to a sequence of string decoding and re-encoding with mismatched charsets. I was using newer, cleaner APIs, except for the Base64 encoding, where I chose to use the venerable btoa function over the not-yet-widely available Uint8Array.toBase64.
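For the record, the correct S256 derivation never round-trips the digest bytes through a text decoder: the raw bytes are base64url-encoded directly. In Python (standing in for the JavaScript here), checked against the test vector from RFC 7636 Appendix B:

```python
import base64
import hashlib


def code_challenge(verifier: str) -> str:
    """PKCE S256: base64url-encode the raw SHA-256 digest bytes directly,
    without padding, and without ever decoding them as text."""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")


# RFC 7636 Appendix B test vector.
print(code_challenge("dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"))
# → E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
```

In JavaScript, the equivalent pitfall is feeding the digest through TextDecoder (as in the screenshot above) before base64-encoding, which corrupts any byte that isn’t valid UTF-8.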

tl;dr:

Continue reading
A box with an ESP32-POE2 and a messy adapter connecting to a black cable coming from the top. From the bottom, a blue Ethernet cable, coiled, joins up to another cable via a weatherproof connector.

Our house water comes from a rain water tank. In dry weather, we need to get it topped up. While I enjoy tapping the side of the tank, I’d rather have a smoother system to know when to order a water delivery.

I built an ESPHome device to interface a liquid pressure sensor. It works nicely with Home Assistant, and integrates well in the Energy dashboard (along with precipitation measurements from a nearby weather station).

tl;dr:

  • I used a 4–20mA current loop pressure sensor (and I suppose the same design would work for other current loop sensors).
  • The device is based around an Olimex ESP32-PoE2, which has a configurable 12/24V line, allowing me to power the pressure sensor directly and requiring nothing more than the CAT6 cable I already had going to the water tank.
  • ESPHome made it trivial to get the sensor up and running with just a bit of YAML. It’s also flexible enough to support my iterative tests on finding the best filtering/smoothing trade-off for the noisy data (spoiler: I haven’t found it yet).
  • With a bit of templating magic in Home Assistant, I can then add devices to calculate how much water got into the tank from the rain or from deliveries, and how much got out from usage. This also includes alerting in case the level gets too low, or the device is unavailable.
  • Suggestions about a better way to filter the sensor data are most welcome!
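To give an idea of the YAML involved, a sketch of such a sensor could look like this. Everything here is a placeholder assumption: the GPIO pin, the shunt resistor value, the tank depth, and the filter parameters all depend on the actual hardware:

```yaml
sensor:
  - platform: adc
    pin: GPIO36            # hypothetical pin
    name: "Tank pressure"
    attenuation: 12db      # needed on ESP32 to read above ~1 V
    update_interval: 10s
    filters:
      # Smooth out the noisy readings (window size is a guess).
      - median:
          window_size: 7
      # A 4-20 mA loop across a hypothetical 150 ohm shunt gives
      # 0.6-3.0 V; map that onto an assumed 0-2 m water column.
      - calibrate_linear:
          - 0.6 -> 0.0
          - 3.0 -> 2.0
```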
Continue reading

When working in bare-bones containers, not many tools are available. I had an issue the other day that required packing up the state of a work directory and sending it out to someone more knowledgeable than me for investigation. But how to get it out when most of the common tools aren’t present, and no authenticated context exists?

I ended up creating a pre-signed upload URL in S3, and uploading the data with cURL (which was, fortunately, present).

tl;dr:

  • python -c 'import boto3; print(boto3.client("s3").generate_presigned_url("put_object", Params={"Bucket": "<BUCKET_NAME>", "Key": "<OBJECT_KEY>"}))'
  • curl --request PUT --upload-file <FILE> '<PRE_SIGNED_URL>'
Continue reading

Python’s re module allows applying regular expressions to their classical use: seek and destr^W^W^Wsearch and replace. I ran into an odd situation at work yesterday, which made me aware of “empty matches”.

Empty matches can happen when regexps are allowed to match no character. The simplest one of them is /.*/. In this case, two matches can be found in a single string: 1) the full string, then 2) an empty string.

This is not the behaviour of sed(1), but re.sub handles empty matches differently, and somewhat confusingly. From the documentation:

Empty matches for the pattern are replaced […]

For Python, this leads to duplicates of the replacement appearing in the final string.

$ echo 'o' | sed 's/.*/bob/g'
bob
$ python -c 'import re; print(re.sub(".*", "bob", "o"))'
bobbob

tl;dr:

  • This seems to be expected behaviour, but can cause issues.
  • In my case I was using the regexps for both matching and replacement.
    • Too wide a match led to odd duplications in the replacement phase.
  • The fix is to tighten the regexp used by re.sub to prevent those empty matches.
    • Using /.+/ instead of /.*/ is the simplest fix.
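The whole behaviour can be checked directly in Python (3.7 or later, where the handling of empty matches adjacent to a previous match was changed):

```python
import re

# With .* the regexp matches "o", then also the empty string after it,
# so the replacement is emitted twice.
assert re.findall(".*", "o") == ["o", ""]
assert re.sub(".*", "bob", "o") == "bobbob"

# Requiring at least one character removes the empty match.
assert re.sub(".+", "bob", "o") == "bob"
```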
Continue reading