Can't believe it's been three years ... Thanks everyone for joining the community and making it a great one :) #BSD matrix room https://matrix.to/#/#bsd:matrix.org #FreeBSD #DragonflyBSD #NetBSD #OpenBSD
@jaypatelani @david_chisnall this is not ready to be used in production. Just use the GCC from ports.
My #AdaLang build tool was built with GCC 15.2 on #OpenBSD :)
https://github.com/tomekw/tada
@david_chisnall @jaypatelani this is not true. FSF builds are GPL with a linking exception. You can create commercial applications with #AdaLang. Compiler changes you made would have to be covered by the GPL.
Give Ada a shot :)
The only Ada toolchains are GPL’d or proprietary, which is a shame. A company called Ada Labs (makers of Ada on Rails) had plans to make an LLVM Ada front end, but it was never finished.
A Libreboot contributor added ThinkPad X280 support to Libreboot a while ago, but I never got round to setting up mine until today. No idea wtf I did wrong when *I* tried adding it, but hey, it works.
Thank you "AlguienSasaki" for adding it, and thank you Johann C. Rode for porting this wonderful ThinkPad to coreboot!
And I installed OpenBSD on mine. Because of course I did. Why the hell would I *not* install OpenBSD on every computer that I own? OpenBSD is the best thing since the telephone.
If you program, you should read this piece.
"Ada's successes — the aircraft that have not crashed, the railway signalling systems that have not failed, the missile guidance software that has not misguided — are invisible precisely because they are successes. The languages that failed visibly, in buffer overflows and null pointer exceptions and data races and security vulnerabilities, generated the discourse. [Ada did not]"
A few notes about the massive hype surrounding Claude Mythos:
The old hype strategy of 'we made a thing and it's too dangerous to release' has been done since GPT-2. Anyone who still falls for it should not be trusted to have sensible opinions on any subject.
Even their public (cherry picked to look impressive) numbers for the cost per vulnerability are high. The problem with static analysis of any kind is that the false positive rates are high. Dynamic analysis can be sound but not complete, static analysis can be complete but not sound. That's the tradeoff. Coverity is free for open source projects and finds large numbers of things that might be bugs, including a lot that really are. Very few projects have the resources to triage all of these. If the money spent on Mythos had been invested in triaging the reports from existing tools, it would have done a lot more good for the ecosystem.
I recently received a 'comprehensive code audit' on one of my projects from an Anthropic user. Of the top ten bugs it reported, only one was important to fix (and should have been caught in code review, but was 15-year-old code from back when I was the only contributor and so there was no code review). Of the rest, a small number were technically bugs but were almost impossible to trigger (even deliberately). Half were false positives and two were not bugs and came with proposed 'fixes' that would have introduced performance regressions on performance-critical paths. But all of them looked plausible. And, unless you understood the environment in which the code runs and the things for which it's optimised very well, I can well imagine you'd just deploy those 'fixes' and wonder why performance was worse. Possibly Mythos is orders of magnitude better, but I doubt it.
This mirrors what we've seen with the public Mythos disclosures. One, for example, was complaining about a missing bounds check, yet every caller of the function did the bounds check and so introducing it just cost performance and didn't fix a bug. And, once again, remember that this is from the cherry-picked list that Anthropic chose to make their tool look good.
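The pattern described above — a checker flagging a 'missing' bounds check that every caller already performs — can be sketched in a few lines of C (hypothetical function names, not from any real codebase or disclosure):

```c
#include <stddef.h>

/* Internal fast path: callers are documented to validate idx first.
 * A naive scanner flags the unchecked buf[idx] access as a bug. */
int get_unchecked(const int *buf, size_t idx) {
    return buf[idx];
}

/* Every caller goes through here, so the bounds check already exists,
 * exactly once. Duplicating it inside get_unchecked() fixes nothing
 * and just adds a branch to a hot path. */
int lookup(const int *buf, size_t len, size_t idx) {
    if (idx >= len)
        return -1;
    return get_unchecked(buf, idx);
}
```

Whether the report is a false positive depends entirely on an invariant that lives outside the flagged function — which is exactly the context a human triager has to supply.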
I don't doubt that LLMs can find some bugs other tools don't find, but that isn't new in the industry. Coverity, when it launched, found a lot of bugs nothing else found. When fuzzing became cheap and easy, it found a load of bugs. Valgrind and address sanitiser both caused spikes in bug discovery when they were released and deployed for the first time.
The one thing where Mythos is better than existing static analysers is that it can (if you burn enough money) generate test cases that trigger the bug. This is possible and cheaper with guided fuzzing, but no one does it because even burning 10% of what Mythos would cost is too expensive for most projects.
The source code for Claude Code was leaked a couple of weeks ago. It is staggeringly bad. I have never seen such low-quality code in production before. It contained things I'd have failed a first-year undergrad for writing. And, apparently, most of this is written with Claude Code itself.
But the most relevant part is that it contained three critical command-injection vulnerabilities.
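For anyone unfamiliar with the bug class: command injection happens when untrusted input is spliced into a string that gets handed to a shell. A generic C sketch of the vulnerable and safe patterns (illustrative only, not the leaked code):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Vulnerable pattern: splice untrusted input into a shell command line.
 * A filename like "notes.txt; rm -rf ~" gets interpreted by the shell
 * when this string reaches system() or popen(). */
int build_preview_cmd(char *out, size_t outlen, const char *filename) {
    return snprintf(out, outlen, "cat %s", filename);
}

/* Safer pattern: no shell at all. The untrusted string is one argv
 * element, so shell metacharacters are just odd filename bytes. */
void preview_file_safe(const char *filename) {
    char *const argv[] = {"cat", "--", (char *)filename, NULL};
    execvp(argv[0], argv); /* replaces the process; returns only on error */
}
```

The safe variant never invokes a shell, so there is nothing to inject into — which is why this class of bug is considered table stakes for static analysis to catch.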
These are the kind of things that static analysis should be catching. So, apparently, at least one of the following is true: Anthropic doesn't run Mythos on its own code, or Mythos doesn't catch this class of bug.
TL;DR: If you're willing to spend half as much money as Mythos costs to operate, you can probably do a lot better with existing tools.
@joel disk or block device encryption vs file system encryption solve different problems and allow delegation of key management to potentially different people/layers in an enterprise deployment. This is why I added ZFS dataset level encryption to Solaris even though we already had encryption in the lofi block shim.
@Enalys I just wanted to have a NAS at home and some small programs (like the search engine/frontend 4get: https://git.lolcat.ca/lolcat/4get), which were made with selfhosting in mind and don't need a high-end i9 with 64 GB of DDR5 to work.
Then I installed #NetBSD on a small Intel Atom based computer, set up BIND 9 to have a nice hostname and a local zone for my home network, set up dhcpd, etc — aaand, looks like I'm a selfhoster, lol
Probably the best AI policy I’ve seen so far: https://github.com/libsdl-org/SDL/pull/15353/changes
@netbsd So I’m not the only person running an Amiga 1200 on the public Internet? Awesome!
Looks like not only backups but also my obsession^Wpassion for writing detailed entries in my "selfhosting journal" pays off. Any change I make on my main home server has a date and a detailed description. The #NetBSD installation and service setup are documented too, along with a list of running services, open ports, cron jobs, etc.
One bad day, my main server started to hang up around 18:00 and around 08:00. There weren't any cron (or other) jobs scheduled at those times. The logs and monitoring showed problems with mosquitto (the MQTT server) — somehow it was eating nearly 100% CPU, then monit restarted it, things worked again for a while, then the server hung completely. I stopped mosquitto to see if the problem would disappear, but the same thing happened with Prosody. In the end, the root cause of the slowdown was PostgreSQL: writes to my second ZFS disk (where the PostgreSQL DB lives) had become extremely slow, so ZFS panicked and crashed the kernel
[ 204836.661198] wd0d: device timeout writing fsbn 123148477 of 123148477-123148478 (wd0 bn 123148477; cn 122171 tn 1 sn 46), xfer 38, retry 1
[ 204863.837664] wd0: soft error (corrected) xfer 38
[ 206810.672323] wd0: autoconfiguration error: wd_flushcache: status=0x5128<TIMEOU>
[ 212327.420695] SLOW IO: zio timestamp 211326864412007ns, delta 1000556283358ns, last io 211280726737075ns
[ 212327.420695] panic: I/O to pool 'zfs' appears to be hung on vdev guid 1299234741086050345 at '/dev/wd0'.
[ 212327.420695] cpu0: Begin traceback...
[ 212327.420695] vpanic() at netbsd:vpanic+0x183
[ 212327.420695] panic() at netbsd:panic+0x3c
[ 212327.420695] vdev_deadman() at zfs:vdev_deadman+0x15e
[ 212327.420695] vdev_deadman() at zfs:vdev_deadman+0x31
[ 212327.420695] spa_deadman_wq() at zfs:spa_deadman_wq+0xe0
[ 212327.430704] workqueue_worker() at netbsd:workqueue_worker+0xef
[ 212327.430704] cpu0: End traceback...
At the same time, I heard strange metallic noises from the server around 08:00 too, so the fate of the second drive was sealed.
The server restoration will take some time, but since everything was written down in the journal, I'm able to just replay my actions and get all systems up as soon as possible
@matthew How is #netBSD going to deal with https://social.coop/@cwebber/116408556882122186 do you think?
The Linux developers have started removing 486 support.
I really need to get netbsd installed on a laptop...
@aru not obscure, but Chromeboxes with the mrchromebox.tech firmware are perfect little NetBSD boxes, especially the older generations. I have 2 (HP and ASUS) with an 8th gen i7 and everything works out of the box.
The BSDCan 2026 Schedule has been posted. 30 regular talks, one set of lightning talks, and one Audio BoF: https://www.bsdcan.org/2026/timetable/timetable-all.html
Both FreeBSD and NetBSD will be holding two day Dev Summits across the hall from each other in DMS.
https://wiki.freebsd.org/DevSummit/202606
https://www.netbsd.org/gallery/events.html#bsdcan2026
Just like last year, the reception on Saturday night is free if you register early. This year you must register before May 1, 2026: https://www.bsdcan.org/2026/registration.html