It’s common, in a Unix shell, to pause a foreground process with Ctrl+Z. However, today I needed to pause a _background_ process.
tl;dr: SIGTSTP and SIGCONT
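As a quick self-contained sketch (using a throwaway `sleep` as the background job):

```shell
# Start a throwaway background job, then pause and resume it by PID.
sleep 300 &
pid=$!

kill -TSTP "$pid"      # pause: the same signal Ctrl+Z sends to a foreground job
ps -o stat= -p "$pid"  # the state column shows "T" for a stopped process

kill -CONT "$pid"      # resume execution
kill "$pid"            # clean up the demo job
```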
A few years ago, I bought (with some mis-adventures) an Eaton 5e UPS. Three years later, after a power cut, it started beeping, and sending alarms that the battery needed replacement.
Working on the assumption that LiFePO4 batteries are better, I decided to investigate whether I could determine a suitable replacement for the original. A few helpful ~~fedinauts~~ ~~fedinarians~~ ~~fedatories~~ people from the fediverse joined in with advice (thanks!).
tl;dr:
In a gradual attempt to offer de-Googled services to people around me, I recently have been looking at options to replace GSuite. One requirement was for the system to integrate with my existing Nextcloud instance. A quick look around turned up ONLYOFFICE, which I started trying to setup. However, a bit of additional reading (and the fact that it now seems to be the default for Nextcloud Office) led me to switch to Collabora Online instead.
It was really easy, and it’s surprisingly snappy even running from a 12+ year-old server and a 14+ year-old laptop!
tl;dr:
nginx config in the reverse-proxy.
I had an hour to lazily spare yesterday, and noticed with horror that my home server was using HTTP/2 like it was from the last decade. Huh!
Enabling HTTP/3 and QUIC in nginx is relatively straightforward, and well documented. This was also an opportunity to update my basic configuration.
tl;dr:
- In addition to `http3 on`, a new `listen [::]:443 quic reuseport default_server` directive needs to be added.
- The `quic` listener sits alongside the `ssl` one, but a single IPv6 entry will also serve IPv4.
- `reuseport` is necessary (once) to allow different workers to share the port; otherwise subsequent requests will intermittently fail to use HTTP/3.
- `add_header Alt-Svc 'h3=":443"; ma=86400'`
- `ssl_protocols TLSv1.3`
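Putting the pieces together, a minimal sketch of the server block (certificate paths are placeholders; adapt to your setup):

```nginx
server {
    # QUIC listener for HTTP/3; `reuseport` must appear only once per port
    listen [::]:443 quic reuseport default_server;
    # Regular TCP+TLS listener for HTTP/1.1 and HTTP/2
    listen [::]:443 ssl default_server;

    http2 on;
    http3 on;

    ssl_protocols TLSv1.3;  # QUIC requires TLS 1.3
    ssl_certificate     /etc/ssl/certs/example.org.pem;
    ssl_certificate_key /etc/ssl/private/example.org.key;

    # Advertise HTTP/3 availability so clients upgrade on later requests
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
```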
I’ve gone through a number of backup solutions over the years, from plain rsync, to rdiff-backup. With periodic syncs of the backups to remote locations, this saved (most of) my bacon a couple of times. Until now, I was using Backup Manager, because it has the massive advantages of being available in Debian, supporting database backups out of the box, and having the ability to push the backups to remote storage. It also allows running custom scripts to cover additional data stores.
With a simple approach of making incremental tarballs periodically, it has worked well for many years. However, as my data got bigger (ca. 1 TiB), building the tarballs, particularly the masters, started taking a long time, of the order of multiple days, pegging the CPU to 100%. The backups also ended up using a lot of disk space, so a similar issue then followed for the remote uploads. As most of the data is rarely changing, this seemed like a lot of redundant work.
As I heard good things about restic, and it is also available by default in Debian and ArchLinux, I thought I’d give it a go. I wasn’t disappointed, and have now migrated to it! I have a nice 3-2-1 setup where backups are stored to a separate hard-disk in the same server, then synced out to S3, while remaining lightweight on the host system.
tl;dr:
`aws s3 sync` the local store, which saves having to reread the indices remotely.
Testing your code is advisable. I generally start with end-to-end integration tests, to ensure whatever I’m writing does whatever it’s supposed to do. However, as you get deeper into the details, more specific unit tests become necessary. Test doubles such as mocks and stubs can also help verify that the desired behaviours happen in corner cases that are harder to reproduce. Sometimes, though, integration and unit tests end up looking a lot like each other, which always makes me feel like they could be de-duplicated.
I recently came across a situation where I wanted to test some caching behaviour, to ensure that a call only happened if necessary. While it would have been easy to write a separate, dedicated unit test with a stub faking the state of the system, and a mock of the method to check whether it got called or not, it was easier to leverage the existing integration test with additional assertions.
In Python, the `unittest.mock.MagicMock` can easily replace a function or an object’s method for the sake of checking call counts. However, it prevents the original behaviour from happening, which limits the usefulness of mocks in integration tests. It is possible, though, to give the mock a side effect so it still does something useful: with some additional boilerplate, the original method being mocked can be attached as the mock’s `side_effect`, so whatever it does still happens.
tl;dr: I ended up writing a small fixture to avoid too much boilerplate.
```python
from collections.abc import Callable
from typing import Any
from unittest.mock import MagicMock

import pytest


@pytest.fixture
def active_mock() -> Callable:
    def active_mock(obj: object, method: str) -> MagicMock:
        """Mock a method without preventing its side-effect from happening."""
        original = getattr(obj, method)
        mock_method = MagicMock()
        mock_method.side_effect = original
        setattr(obj, method, mock_method)
        return mock_method

    return active_mock
```
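A self-contained sketch of how this is used (the `Cache` class is made up for illustration, and the fixture body is inlined as a plain function so the snippet runs on its own, without pytest):

```python
from unittest.mock import MagicMock


def active_mock(obj: object, method: str) -> MagicMock:
    """Mock a method without preventing its side-effect from happening."""
    original = getattr(obj, method)
    mock_method = MagicMock(side_effect=original)
    setattr(obj, method, mock_method)
    return mock_method


class Cache:
    """Toy cache: compute() should only run on a cache miss."""

    def __init__(self) -> None:
        self.store: dict[str, int] = {}

    def compute(self, key: str) -> int:
        return len(key)

    def get(self, key: str) -> int:
        if key not in self.store:
            self.store[key] = self.compute(key)
        return self.store[key]


cache = Cache()
mock_compute = active_mock(cache, "compute")

assert cache.get("hello") == 5       # original behaviour still happens
assert cache.get("hello") == 5       # second call served from the cache
assert mock_compute.call_count == 1  # while calls remain observable
```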
For reasons, I was writing a partial OAuth client in VanillaJS. I assumed, and saw it pleasantly confirmed, that modern JavaScript had all that was needed for hashing and byte-string manipulations.
I was following the logic of another implementation I previously did of the same Authorisation Code Grant flow. However, as I was nearing completion of the code, I was not able to successfully obtain my `access_token`, receiving 401s instead, telling me that my `code_verifier` was incorrect.
The problem was due to a sequence of string decoding and re-encoding with mismatched charsets. I was using newer, cleaner APIs, except for the Base64 encoding, where I chose to use the venerable btoa function over the not-yet-widely available Uint8Array.toBase64.
tl;dr:
- The `TextDecoder` constructor accepts an optional label to specify the encoding to use; however, none of the encodings work for subsequent Base64 encoding!
- The `String` class should be used instead: `String.fromCharCode(…new Uint8Array(digest))`
Our house water comes from a rain water tank. In dry weather, we need to get it topped up. While I enjoy tapping the side of the tank, I’d rather have a smoother system to know when to order a water delivery.
I built an ESPHome device to interface with a liquid pressure sensor. It works nicely with Home Assistant, and integrates well in the Energy dashboard (along with precipitation measurements from a nearby weather station).
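For reference, a minimal ESPHome sketch of the sensor part (the pin, name, and calibration points are placeholders; the real calibration depends on the sensor and tank geometry):

```yaml
sensor:
  - platform: adc
    pin: GPIO34
    name: "Water tank pressure"
    attenuation: 12db
    update_interval: 60s
    filters:
      # Hypothetical calibration: map the measured voltage to metres of water.
      - calibrate_linear:
          - 0.5 -> 0.0
          - 4.5 -> 5.0
```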
tl;dr:
When working in bare-bones containers, not many tools are available. I had an issue the other day that required packing up the state of a work directory, and sending it out to someone more knowledgeable than me for investigation. But how to get it out when most of the common tools aren’t present, and no authenticated context exists?
I ended up creating a pre-signed upload URL in S3, and uploading the data with cURL (which was, fortunately, present).
tl;dr:
- `python -c 'import boto3; print(boto3.client("s3").generate_presigned_url("put_object", Params={"Bucket": "<BUCKET_NAME>", "Key": "<OBJECT_KEY>"}))'`
- `curl --request PUT --upload-file <FILE> '<PRE_SIGNED_URL>'`

Python’s `re` module allows applying regular expressions to their classical use: seek and destr^W^W^Wsearch and replace. I ran into an odd situation at work yesterday, which made me aware of “empty matches”.
Empty matches can happen when regexps are allowed to match no character. The simplest one of them is /.*/. In this case, two matches can be found in a single string: 1) the full string, then 2) an empty string.
This is not the behaviour of sed(1); re.sub, somewhat confusingly, replaces them too. As the documentation puts it:

> Empty matches for the pattern are replaced […]
For Python, this leads to duplicates of the replacement appearing in the final string.
```shell
$ echo 'o' | sed 's/.*/bob/g'
bob
$ python -c 'import re; print(re.sub(".*", "bob", "o"))'
bobbob
```
tl;dr:
- `re.sub` to prevent those empty matches.
- `/.+/` instead of `/.*/` is the simplest fix.
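The difference is easy to reproduce in a few lines (behaviour as of Python 3.7, which changed how empty matches adjacent to a previous match are handled):

```python
import re

# ".*" finds two matches in "o": the whole string, then a trailing empty match.
print(re.findall(".*", "o"))     # ['o', '']

# re.sub replaces both matches, duplicating the replacement.
print(re.sub(".*", "bob", "o"))  # 'bobbob'

# Requiring at least one character removes the empty match.
print(re.sub(".+", "bob", "o"))  # 'bob'
```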