A grim reaper knocking on a door labelled "open source"

What About The Droid Attack On The Repos?

You might not have noticed, but we here at Hackaday are pretty big fans of Open Source — software, hardware, you name it. We’ve also spilled our fair share of electronic ink on things people are doing with AI. So naturally when [Jeff Geerling] declares on his blog (and in a video embedded below) that AI is destroying open source, well, we had to take a look.

[Jeff]’s article highlights a problem he and many others who manage open source projects have noticed: they’re getting flooded with agentic slop pull requests (PRs). It’s gotten to the point that GitHub will let you turn off PRs completely, at which point you’ve given up a key piece of the ‘hub’s functionality. The ability to share openly with everyone has long seemed like a big source of strength for open source projects, but [Jeff] is joining his voice with others like [Daniel Stenberg] of curl fame, who has dropped bug bounties over a flood of spurious AI-generated reports.

It’s a problem for maintainers, to be sure, but it’s as much a human problem as an AI one. After all, someone set up that AI agent and pointed it at your repo. While changing the incentive structure, like removing bug bounties, might discourage such actions, [Jeff] offers no bounties and still has the same problem. Ultimately it may be necessary for open source projects to become a little less open, only allowing invited collaborators to submit PRs, which is also now an option on GitHub.

Combine invitation-only access with a strong policy against agentic AI and LLM code, and you can still run a quality project. The cost of such actions is that the random user with no connection to the project can no longer find and squash bugs. As unlikely as that sounds, it happens! Rather, it did. If the random user is just going to throw their AI agent at the problem, it’s not doing anybody any good.

First they came for our RAM, now they’re here for our repos. If it wasn’t for getting distracted by the cute cat pictures, we might just start to think vibe coding could kill open source. Extra bugs were bad enough, but now we can’t even trust the PRs to help us squash them!


Bruteforcing Accidental Antenna Designs

Antenna design is often referred to as a black art or witchcraft, even by those experienced in the space. To that end, [Janne] wondered—could years of honed skill be replaced by bruteforcing the problem with the aid of some GPUs? Iterative experiments ensued.

[Janne]’s experience in antenna design was virtually non-existent prior to starting; they had a VNA on hand but no other knowledge of the craft. Previously, they had worked around this by simply copying vendor reference designs when putting antennas on PCBs. Knowing that the need for something more specific sometimes arises, however, they wanted a tool that could help.

The root of the project came from a research paper using an FDTD tool running on GPUs to inversely design photonic nanostructures. Since light is just electromagnetic radiation at a much higher frequency, [Janne] realized this could be tweaked into service as an RF antenna design tool. The core simulation engine of the FDTD tool, along with its gradient solver, was hammered into working as an antenna simulator, with [Janne] using LLMs to also tack on a validation system using openEMS, an open-source electromagnetic field solver. The aim was to ensure the results had some validity to real-world physics, particularly important given [Janne] left most of the coding up to large language models. A reward function development system was then implemented to create antenna designs, rank them on fitness, and then iterate further.
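The design-rank-iterate loop boils down to optimizing a structure against a reward function. As a rough illustration only, here’s a toy hill-climbing sketch over a pixelated copper grid; the fitness function is a made-up stand-in (in the real project, scores come from an FDTD simulation of the antenna, not from anything this simple):

```python
import random

random.seed(0)

GRID = 16  # toy "PCB": a 16x16 grid of copper/no-copper pixels

def fitness(grid):
    # Stand-in reward function for illustration. A real run would score
    # the design on simulated RF performance (e.g. matching and gain from
    # an FDTD solve); here we just reward a target copper fill fraction
    # so the loop is self-contained and runnable.
    fill = sum(grid) / len(grid)
    return -abs(fill - 0.5)

def mutate(grid):
    # Flip one random pixel between copper and no-copper
    child = grid[:]
    i = random.randrange(len(child))
    child[i] ^= 1
    return child

def optimize(steps=2000):
    # Simple acceptance-only hill climb: mutate, score, keep improvements
    best = [random.randint(0, 1) for _ in range(GRID * GRID)]
    best_score = fitness(best)
    for _ in range(steps):
        cand = mutate(best)
        score = fitness(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

design, score = optimize()
```

A gradient-based FDTD solver replaces the blind mutation step with a directed update, which is what makes the GPU approach tractable for real electromagnetic problems.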

The designs produced by this arcane system are… a little odd, and perhaps not what a human might have created. They also didn’t particularly impress in the performance stakes when [Janne] produced a few on real PCBs. However, they do more-or-less line up with their predicted modelled performance, which is promising. Code is on GitHub if you want to dive into experimenting yourself. Experienced hands may like to explore the nitty-gritty details to see if the LLMs got the basics right.

We’ve featured similar “evolutionary” techniques before, including one project that aimed to develop a radio. If you’ve found ways to creatively generate functional hardware from boatloads of mathematics, be sure to let us know on the tipsline!

Living In The (LLM) Past

In the early days of AI, a common example program was the hexapawn game. This extremely simplified version of a chess program learned to play with your help. When the computer made a bad move, you’d punish it. However, people quickly realized they could punish good moves to ensure they always won against the computer. Large language models (LLMs) seem to know “everything,” but everything is whatever happens to be on the Internet, seahorse emojis and all. That got [Hayk Grigorian] thinking, so he built TimeCapsule LLM, an AI trained only on historical data.

Sure, you could tell a modern chatbot to pretend it was in, say, 1875 London and answer accordingly. However, you have to remember that chatbots are statistical in nature, so they could easily slip in modern knowledge. Since TimeCapsule only knows data from 1875 and earlier, it will be happy to tell you that travel to the moon is impossible, for example. If you ask a traditional LLM to roleplay, it will often hint at things you know to be true, but would not have been known by anyone of that particular time period.

Chatting with ChatGPT and telling it that it was a person living in Glasgow in 1200 limited its knowledge somewhat. Yet it was also able to hint about North America and the existence of the atom. Granted, the Norse apparently found North America around the year 1000, and Democritus wrote about indivisible matter in the fifth century BCE. But that knowledge would not have been widespread among common people in the year 1200. Training on period texts would surely give a better representation of a historical person.

The model uses texts from 1800 to 1875 published in London. In total, there are about 90 GB of text files in the training corpus. Is this practical? There is academic interest in recreating period-accurate models to study history. Some also see it as a way to track the biases of the period and contrast them with biases found in data today. Of course, unlike the Internet, surviving documents from the 1800s are less likely to have trivialities in them, so it isn’t clear just how accurate a model like this would be for that sort of purpose.
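The crucial step is gating the corpus on publication date before any training happens. Here’s a minimal sketch of that idea; the record format and titles are hypothetical stand-ins, not the project’s actual data pipeline:

```python
# Hypothetical metadata format: one record per document with a
# publication year. Only texts inside the window make it into the
# training corpus, so the model literally cannot see anything later.
CUTOFF = 1875

docs = [
    {"title": "On the Origin of Species", "year": 1859, "text": "..."},
    {"title": "A Study in Scarlet",       "year": 1887, "text": "..."},
    {"title": "The Pickwick Papers",      "year": 1837, "text": "..."},
]

def in_period(doc, start=1800, end=CUTOFF):
    """Keep only documents published inside the training window."""
    return start <= doc["year"] <= end

corpus = [d for d in docs if in_period(d)]
```

Because the filter happens at data-collection time rather than at prompt time, there’s no “roleplay” for later knowledge to leak through.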

Instead of reading the news, LLMs can write it. Just remember that the statistical nature of LLMs makes them easy to manipulate during training, too.


Featured Art: Royal Courts of Justice in London about 1870, Public Domain

Habit Detection For Home Assistant

Computers are very good at doing exactly what they’re told. They’re still not very good at coming up with helpful suggestions of their own. They’re much more about following instructions than using intuition; we still don’t have a digital version of Jeeves to aid our bumbling Wooster selves. [Sherrin] has developed something a little bit intelligent, though, in the form of a habit detector for use with Home Assistant.

In [Sherrin]’s smart home setup, there were lots of things they wanted to fully automate, but they never got around to implementing proper automations in Home Assistant. Their wife also wanted to automate things without having to write YAML directly. Thus, they implemented a sidecar service that watches the actions taken in Home Assistant.

The resulting tool is named TaraHome. When it detects repetitive actions that happen with a certain regularity, it pops up and suggests automating the task. For example, if it detects lights always being dimmed when media is playing, or doors always being locked at night, it will ask if that task should be set to happen automatically and can whip up YAML to suit. The system is hosted on the local Home Assistant instance. It can be paired with an LLM to handle more complicated automations or specific requests, though this does require inviting cloud services into the equation.
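The core idea of spotting habits is just frequency counting over the event history: if one state change reliably follows another, that pair is an automation candidate. Here’s a toy sketch of that pattern; the entity names, log format, and threshold are all made up for illustration, and TaraHome’s actual implementation may differ entirely:

```python
from collections import Counter

THRESHOLD = 3  # suggest an automation after seeing a pattern this often

def find_habits(event_log, window=2):
    """Count which action follows which trigger within a short window,
    and flag repeated (trigger, action) pairs as automation candidates.
    event_log is a time-ordered list of (entity, state) tuples -- a
    hypothetical, simplified stand-in for Home Assistant's history."""
    pairs = Counter()
    for i, trigger in enumerate(event_log):
        for action in event_log[i + 1 : i + 1 + window]:
            if action != trigger:
                pairs[(trigger, action)] += 1
    return [pair for pair, n in pairs.items() if n >= THRESHOLD]

log = [
    ("media_player.tv", "playing"), ("light.living_room", "dim"),
    ("media_player.tv", "playing"), ("light.living_room", "dim"),
    ("lock.front_door", "locked"),
    ("media_player.tv", "playing"), ("light.living_room", "dim"),
]

habits = find_habits(log)
```

From a detected pair like this, emitting the matching trigger/action automation (whether as YAML or via an LLM for fancier cases) is the easy part; deciding which co-occurrences are genuine habits is where the tuning lives.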

We’ve featured lots of great Home Assistant hacks over the years, like this project that bridges 433 MHz gear to the smart home system. If you’ve found your own ways to make your DIY smart home more intelligent, don’t hesitate to notify the tipsline!

The CURL Project Drops Bug Bounties Due To AI Slop

Over the past few years, the author of the curl project, [Daniel Stenberg], has repeatedly complained about the increasingly poor quality of bug reports filed due to LLM chatbot-induced confabulations, also known as ‘AI slop’. This has now led the project to suspend its bug bounty program starting February 1, 2026.

Examples of such slop are provided by [Daniel] in a GitHub gist, which covers a wide range of very intimidating-looking vulnerabilities and seemingly clear exploits. Except that none of them are vulnerabilities when actually examined by a knowledgeable developer. Each is a lengthy word salad that an LLM churned out in seconds, yet which takes a human significantly longer to parse before dealing with the typical diatribe from the submitter.

Valid reports undoubtedly still come in, but the ease with which bogus ones can be generated by anyone with access to an LLM chatbot and some spare time has completely flooded the bug bounty system, overwhelming the very human developers who have to dig through the proverbial midden to find that one diamond ring.

We have mentioned before how troubled bounty programs are for open source, and how projects like Mesa have already had to fight off AI slop incidents from people with zero understanding of software development.

... does this count as fake news?

LLM-Generated Newspaper Provides Ultimate In Niche Publications

If you’re reading this, you probably have some fondness for human-crafted language. After all, you’ve taken the time to navigate to Hackaday and read this, rather than ask your favoured LLM to trawl the web and summarize what it finds for you. Perhaps you have no such pro-biological bias, and you just don’t know how to set up the stochastic parrot feed. If that’s the case, buckle up, because [Rafael Ben-Ari] has an article on how you can replace us with a suite of LLM agents.

The AI-focused paper has a more serious aesthetic, but it’s still seriously retro.

He actually has two: a tech news feed, focused on the AI industry, and a retrocomputing paper based on SimCity 2000’s internal newspaper. Everything in both those papers is AI-generated; specifically, he’s using opencode to manage a whole dogpen of AI agents that serve as both reporters and editors, each in their own little sandbox.

Using opencode like this lets him vary the model by agent, potentially handing simple tasks to small, locally-run models and saving tokens for the more computationally-intensive ones. With the right prompting, you could produce a niche publication with exactly the topics that interest you, and none of the ones that don’t. In theory, you could take this toolkit — the implementation of which [Rafael] has shared on GitHub — to replace your daily dose of Hackaday, but we really hope you don’t. We’d miss you.
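The reporter/editor split with per-agent models is a pattern you can sketch in a few lines. Here, stub functions stand in for real LLM calls, and the agent names and wiring are purely illustrative — the actual project routes its agents through opencode, not through anything like this:

```python
# Stand-ins for real LLM calls: a small, cheap model for drafting,
# a stronger one for the harder editing pass.
def cheap_model(prompt):
    return f"DRAFT: {prompt}"

def strong_model(prompt):
    return prompt.replace("DRAFT", "EDITED")

# Each agent gets its own model; in the real setup, each also runs
# in its own sandbox so agents can't trample each other's context.
AGENTS = {
    "reporter": cheap_model,
    "editor": strong_model,
}

def run_story(topic):
    draft = AGENTS["reporter"](f"story about {topic}")
    return AGENTS["editor"](draft)

article = run_story("retrocomputing")
```

Swapping which model backs which agent is then just a change to the mapping, which is the token-saving trick in a nutshell.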

That’s news covered, and we’ve already seen the weather reported by “AI” — now we just need an automatically-written sports section and some AI-generated funny papers. That’d be the whole newspaper. If only you could trust it.

Story via Reddit.

Can Skynet Be A Statesman?

There’s been a lot of virtual ink spilled about LLMs and their coding ability. Some people swear by the vibes, while others, like the FreeBSD devs, have sworn them off completely. What we don’t often think about is the bigger picture: What does AI do to our civilization? That’s the thrust of a recent paper from the Boston University School of Law, “How AI Destroys Institutions”. Yes, Betteridge strikes again.

We’ve talked before about LLMs and coding productivity, but [Harzog] and [Sibly] from the school of law take a different approach. They don’t care how well Claude or Gemini can code; they care what having them around is doing to the sinews of civilization. As you can guess from the title, it’s nothing good.

"A computer must never make a management decision."
Somehow the tl;dr was written decades before the paper was.

The paper is a bit of a slog, but worth reading in full, even if the language is slightly lawyer-y. To summarize in brief, the authors try to identify the key things that make our institutions work, and then show one by one how each of these pillars is subtly corroded by the use of LLMs. The argument isn’t that your local government clerk using ChatGPT is going to immediately result in anarchy; rather, it will facilitate a slow transformation of the democratic structures we in the West take for granted. There’s also a jeremiad about LLMs ruining higher education buried in there, a problem we’ve talked about before.

If you agree with the paper, you may find yourself wishing we could launch the clankers into orbit… and turn off the downlink. If not, you’ll probably let us know in the comments. Please keep the flaming limited to below gas mark 2.