Why we'll all be fine with open models
Some thoughts about why the fears of open models are blown out of proportion and why it's in our best interest to have them
I recently watched this interview with Mustafa Suleyman (co-founder of DeepMind), where he shared his deeply held belief that we should prevent AI models from being open sourced or easy to run locally.
His main fear is that bad actors could use them with malicious intent, such as creating bio-weapons.
This is just one of the arguments (albeit an extreme one) used by AI doomers to push for limits on AI models. Even though this is not a new subject, I’ll go through some of those arguments here and explain why I don’t think these fears are well founded.
Main arguments against open models
Bad actors would use them
The first argument that always comes up is that bad actors could use models to create weapons, plan attacks or disrupt society.
In my opinion, the strongest counterargument to this is that there is a huge difference between having access to information and acting on it.
There is nothing stopping a bad actor today from using Google to research how to create a bio-weapon and then acting on that information. An LLM does not change that equation.
We’re overestimating the danger of a bad actor who gets information directly from ChatGPT instead of having to click through five links. If someone is motivated enough to do harm, they will spend years learning how to fly a plane to carry out a terrorist attack.
Bio-weapons specifically
The interview mentions bio-weapons as the prime example of how an information repository like an LLM could become a danger to society.
This argument is the easiest to debunk, because having access to information on how to create bio-weapons is just 1% of the way there. Getting hold of lab equipment and raw materials, and having years of experience in biology and chemistry, matter far more than easy access to information.
Recent studies conducted by OpenAI and the RAND Corporation suggest that LLMs do not provide much help in creating bio-weapons.
In an ideal world
It’s a bit idealistic, but I wish we’d fight harder to address the underlying issues (poverty, trauma, etc.) behind why someone would want to use an AI agent in a destructive way. Yes, these are much harder and bigger problems to solve, but solving them would also address most of the other threats bad actors pose.
AI would become sentient and go rogue
I think there was initial concern when models like ChatGPT gained popularity and people began uncovering their abilities. But as we came to understand their limitations, I think this fear diminished. At the end of the day, yes, LLMs are just statistical machines, but if the strings that come out have meaning for us, that meaning is where the danger lies.
Resources
But let’s take the worst-case scenario: we find a better architecture than transformers, we build a more capable model, and that model becomes self-aware with the intention of wreaking havoc.
Where will it run?
We barely have enough chips and servers to train and run current ML models. A new model so powerful that we could call it AGI would simply not be able to hide in plain sight. The resources it would need are surely noticeable, and we would be able to shut it off.
Real world
But let’s say the system does find enough computing power and energy to run indefinitely. Its biggest limitation is still that it won’t be able to interact with the real world.
It would still need to go through people to create real damage. I think that's our weakest link and our strongest defense.
It could still do a lot of damage as a free agent in the online world (disruptions, hacking, etc.), but nothing beyond what we are already seeing from nation-state-backed groups hacking into companies, utilities, and government agencies.
I think this is nowhere near the risk of losing democracy or starting nuclear wars, because those systems are still based on interactions between people.
Why we should have open models
Innovation
Giving people access to the most powerful tools creates more good than bad. Opening up iOS to third parties gave us NSO Group, but it also created millions of new companies.
I think we should not brush off the amount of innovation and new ideas that can come when everyone has access to the most powerful tools.
Not allowing a few actors to limit access to these models
It's already very expensive to collect the data and to train and run the models. Only a handful of companies can afford that, so there’s already a high barrier to entry for new models.
Fortunately, you can go a long way with fine-tuning, and the models being released openly are a very good starting point.
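As a rough illustration of how low that bar is, here is a minimal sketch of setting up LoRA fine-tuning on an open model. It assumes the Hugging Face transformers and peft libraries, and facebook/opt-350m is just a small stand-in for whichever openly released model you start from:

```python
# Minimal sketch: preparing an openly released model for LoRA fine-tuning.
# Assumes the Hugging Face `transformers` and `peft` libraries; the model
# name below is a small stand-in, not a recommendation.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "facebook/opt-350m"  # swap in any open causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA freezes the original weights and trains small low-rank adapters
# on top of the attention projections, so only a tiny fraction of the
# parameters needs gradients, and a single GPU is usually enough.
lora_config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,                        # scaling applied to adapter output
    target_modules=["q_proj", "v_proj"],  # OPT's attention projection layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

From there, a standard training loop on your own data is all it takes to specialize the model, which is why the enormous cost of pre-training doesn't translate into an enormous cost of building on top of open models.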
I think the push for regulatory capture by companies like Google and OpenAI is not surprising, but it feels especially eerie coming from a company like Google.
The case against Google
If we think of this new way of consuming information (through an LLM and not a search engine) as the next version of the internet, a company like Google would do everything they can to capture as much of that as possible.
They already tried to capture internet 1.0 with AMP, where you would discover and consume content without ever leaving Google’s systems. Today, search results are mostly snippets or Google products that heavily encourage you to stay on a Google-owned domain.
Imagine if the internet had grown inside Google’s walled garden since 1999. The world would look very different. And even without that walled garden, we feel the influence they have on the web through their work on Chrome and web standards.
We are also seeing this “walled garden effect” and how it limits innovation on mobile devices. The restrictions Apple and Google add to their operating systems have serious side effects on the types of businesses you can build there.
Now imagine a world where, in order to get your news, read a recipe, find the capital of Thailand, or find an image of an astronaut on a unicorn, you never leave Google and get all your information from that one system.
And we are seeing a light version of that right now. The only way to interact with your phone (and the online world on that device) is to go through Google or Apple.
Now, I’m not trying to single out Google, but we should not let any one company be the only option to experience the internet.
Some good news
Now, Mustafa does mention that he believes we won't be able to contain this technology and that the march of open source will continue.
More to the point, I believe our focus should be on how the tool is used. Yes, the hammer was invented for nails, but it can also be used as a weapon.
Limiting the way people act on information from LLMs is already covered by law. I don't think it should be illegal for an LLM to write an example of a phishing email. Actually sending it should be illegal, and it already is.
Adding legal limitations could pretty much create an AT&T 2.0 moment, where the law grants soft monopolies to a few companies.
Overall I think we are moving in the right direction.
Also, I'm going to start taking a shot every time someone in AI answers a tough question with "I think we as a society need to decide on that".

