I used to actually host a Mastodon server. And it was then I first started to look into S3 object storage.
For those not in the know, Mastodon—unlike, e.g., your average self-hosted RSS reader—stores both statuses and media of near the whole dang Fediverse … locally. (I mean, there are various reasons to want to do this.) Either way, as a result, an affordable VPS will quickly run out of disk space.
The solution (other than regularly purging old statuses and files)? Object storage, where files are offloaded to a “sort of file server” that’s generally way cheaper.
Anyway. So, I’ve been “thinking” about offering “IndieWeb as a service” sites, based on WordPress Multisite, that’d come pre-installed with a simple theme, a small handful of plugins, and not much more, but that would support IndieAuth/Micropub/Webmention out of the box.
What if, however, a couple folks on it eventually start to fill up the disk with their photologs? (I know, once again getting way ahead of myself.)
I’d previously heard of WP Offload Media, but it didn’t seem to support Scaleway (which I’d used in the past).
So I tried Media Cloud instead. That plugin’s just huge, though! Nearly 80 MB unpacked. And proper Multisite support requires a paid add-on, and so on.
I was then told of S3 Uploads. Still large, at no less than 35 MB. Most of that, however, seems to be the AWS SDK for PHP. Installation’s slightly less straightforward, but I eventually got it to work.
By now, I’d already set up a CNAME record to point to my storage bucket. Unfortunately, Scaleway doesn’t support SSL if you do so … So I set up a reverse proxy instead. And then, while I was at it, I did the same for WordPress’ default wp-content/uploads URLs. Much cleaner than having to point to https://s3.fr-par.scw.cloud/and-so-on.
In the end, I added the following to wp-config.php:
define( 'S3_UPLOADS_BUCKET', 'media.example.org' );
define( 'S3_UPLOADS_REGION', 'fr-par' );
define( 'S3_UPLOADS_KEY', 'my-api-key' );
define( 'S3_UPLOADS_SECRET', 'my-api-key-secret' );
define( 'S3_UPLOADS_BUCKET_URL', 'https://' . ( ! empty( $_SERVER['HTTP_HOST'] ) ? "{$_SERVER['HTTP_HOST']}/wp-content" : 'media.example.org' ) ); // The fallback value should never be used, but it doesn't hurt either.
define( 'S3_UPLOADS_DISABLE_REPLACE_UPLOAD_URL', true );
The $_SERVER['HTTP_HOST'] bit is there because I obviously use quite a few different domains, and the constant can’t simply be left unset. (Omitting it or setting it to an empty string yields an error.)
As said, I worked around the SSL thingy by going through my own reverse proxy rather than pointing a CNAME record directly at Scaleway. Because I’m also rewriting wp-content/uploads* URLs, I no longer have to dynamically search-and-replace media URLs and can disable that part.
I probably should share my Caddyfile someplace in the near future!
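In the meantime, here’s roughly what it looks like. This is only a minimal sketch rather than my actual Caddyfile; it assumes the bucket is called media.example.org, that S3 Uploads keeps files under an uploads/ prefix inside it, and that WordPress itself is served via PHP-FPM:
example.org {
	handle /wp-content/uploads/* {
		# Map /wp-content/uploads/... onto the path-style bucket URL.
		uri replace /wp-content/uploads/ /media.example.org/uploads/
		reverse_proxy https://s3.fr-par.scw.cloud {
			header_up Host s3.fr-par.scw.cloud
		}
	}

	# Everything else goes to WordPress as usual.
	handle {
		root * /var/www/example.org
		php_fastcgi unix//run/php/php-fpm.sock
		file_server
	}
}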
Since I’m not using AWS but Scaleway, though, I also had to add a “must-use plugin” (a tiny PHP file dropped into wp-content/mu-plugins):
<?php
add_filter( 's3_uploads_s3_client_params', function ( $params ) {
	$params['endpoint']                = 'https://s3.fr-par.scw.cloud';
	$params['use_path_style_endpoint'] = true;
	$params['debug']                   = false; // Set to true if uploads are failing.

	return $params;
} );
Quick note: every time I wanted to switch debug to true, I ran into memory exhaustion errors and would eventually have to restart the PHP-FPM daemon.
So, yay! All uploads are now copied to a Scaleway bucket and served from it, and you can’t even tell from the URLs.
Copying Over Existing Media Files
On your main host or VPS, you’ll want to install the AWS CLI and set it up to work with an alternative S3 provider.
Configure your access key and secret, and copy over the uploads folder. You’ll want to use the --acl public-read flag, or the files won’t be publicly accessible! (You should probably still keep the bucket itself private, so that it can’t be browsed from outside.)
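Roughly like so, though this is just a sketch: it assumes the same (hypothetical) bucket name as before, an uploads/ prefix inside it, and a WordPress install under /var/www/example.org. Note that Scaleway needs an explicit --endpoint-url; the same flag applies to the fix-up command further down.
aws configure set aws_access_key_id my-api-key
aws configure set aws_secret_access_key my-api-key-secret
aws configure set region fr-par
aws s3 cp --recursive --acl public-read \
	--endpoint-url https://s3.fr-par.scw.cloud \
	/var/www/example.org/wp-content/uploads s3://media.example.org/uploads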
Of course, I had just dragged and dropped my uploads folder onto my bucket, on Scaleway’s admin page, and only found out afterward that all my objects were private. Also of course, I came across a Stack Overflow answer that helped me straighten that out:
aws s3 cp --recursive --acl public-read s3://bucket/folder s3://bucket/folder --metadata-directive REPLACE
Why I Likely Won’t Move Forward With This Setup
Wait, what? All this trouble, for nought? Well, hear me out:
- I have nothing against the use of Composer in WordPress plugins, quite the contrary, but the way it is used here, there’s a slight risk of future plugin conflicts. (Probably not an issue if you either manage the whole of WordPress via Composer, or if this is the only such plugin.)
- The plugin isn’t as easy to keep up to date as “regular” WordPress plugins. (I probably could script something, though, or rely on the occasional git fetch and composer install; see the sketch after this list.)
- It’s not very likely that I’ll run out of disk space in the first place, so I wouldn’t be saving any money!
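That script could be as simple as something like this, assuming the plugin sits in a git checkout (the path here is hypothetical):
cd /var/www/example.org/wp-content/plugins/s3-uploads # hypothetical install path
git pull --ff-only
composer install --no-dev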
Looks like I did get carried away after all. Well, at least I learned a thing or two, like how to use Caddy to set up a reverse proxy for certain paths only.