The secure open source fallacy

Hi there,

A recent discussion in this forum prompted me to write a post on the topic of “open source” software, and why using the “open source” release model by itself as an argument to support claims about the security of an application is often misguided.

The way I see it, yes, open source can be more secure than proprietary software, especially for applications that are thoroughly tested and audited.

However, as it turns out, critical pieces of open source software like OpenSSL or Log4j often suffer grave vulnerabilities that in turn affect every piece of software that depends on them. Not only other open source software, but sometimes even proprietary software from international megacorporations.

This is not to say proprietary software is better than open source, and in fact, open source truly is a wonderful thing for learning, sharing, and building a better world.

I’d just like people to be a bit more critical and understand that the release model of an application does not significantly affect its security. Security audits are expensive and require experienced professionals, and people don’t just conduct rigorous audits for fun.

2 Likes

Is this genuinely still a fallacy? How?

I would think many rational people already know that open source doesn't always mean secure (in fact, it is way more likely to be private than secure).

2 Likes

Here, for example.

Funny, I was going to answer here. It seems you did not understand my arguments if you are also posting this here. My arguments are completely different from what the linked blog article says. For example, I never said “you can audit the code yourself”; I was arguing that many eyes see many things.

I would recommend actually reading my post carefully instead of pointing to another article that attacks arguments other than the ones I wrote. The truth is, open source is more secure on average, even though many of the arguments in your linked blog post are also true.

Just to be clear, I did not write the post thinking about your specific message exclusively. For years, I’ve seen the same argument repeated over and over. Things like “the XZ utils backdoor actually proves that open source is more secure”, when in fact it was caught not by anyone reading the source code of xz utils, but in a completely different place (someone benchmarking SSH startup times). If the backdoor had been implemented in a way that did not delay OpenSSH’s startup, nobody would have known.

It also seems people have forgotten about Heartbleed, which stayed undetected for years despite OpenSSL being completely open source. I sometimes wonder if the NSA knew about Heartbleed before it was fixed, the same way they knew about EternalBlue in Windows SMB.

Your post prompted me to write that blog post, but I have seen this argument used a million times before, and I kind of wanted to collect my thoughts in one single place (that’s why I started my blog anyway).

And how many backdoors do you know of in proprietary software? When open source is affected, everyone makes a huge “look, open source is not secure” fuss, while nobody acknowledges proprietary vulnerabilities in the same manner. I am not saying that open source has no critical vulnerabilities. I am just saying there is a difference in quantity and sometimes even in quality. And when I say “on average”, that includes open source software that is a total catastrophe in terms of security.

If you read it as “just go open source and you are safe”, that is not what I am saying.

I can also bring up other arguments, such as:
Open source software is often smaller, because it does not include features nobody asked for (such as advertising). The more code a piece of software has, the more bugs it can contain. As an example, I read about a user who reverse engineered Microsoft Authenticator and found out that data is sent to Facebook and other companies. In a FOSS project with the same core functionality, such code would not exist, so it would contain less code and therefore potentially fewer bugs, including ones that could be security related.

1 Like

All I can say is: make it make sense.

Do you think open source means more secure or do you not?

Sends what data? Do you have a source for this? Because what you are alleging here about a security app is quite serious.

To give the benefit of the doubt, they might mean metadata.

Potentially, and on average, it is more secure, I’m sure. But an average does not mean it automatically applies to every project. You should still choose your software carefully. And I am also sure that there are super secure proprietary projects; you just cannot easily check (without reverse engineering) whether they are or not.

It’s best if you read it yourself. Here is the original source.

1 Like

Alright, that confirms my suspicion.

You are using “secure” here very loosely, and I’m mostly leaning towards @ulveon.net’s side.

I was also going to bring up the example of Apple: there’s a reason GrapheneOS actually highlights them as another option, because even though Apple is fully proprietary, you can’t deny that they take security seriously.

In fact they’re so serious they made a guide that might as well be considered a book:

And I would point to the XZ backdoor as corroboration: even if code has eyes on it, things like this can go unnoticed, proprietary or open source.

Security and open source ARE NOT BINARY:

An Open Source Project can be very secure (GrapheneOS)
A Proprietary Project can be very secure (iOS)
A Proprietary Project can be very insecure (Windows)
An Open Source Project Can be very insecure (Linux)

[Unlike macOS and ChromeOS, both Windows and Linux lack things like sandboxing out of the box. On Linux these things are slowly being addressed, like the Wayland screen-capture permission and Flatpak, but it is up to the user to make use of them; if a user runs X11, say, and apps have no sandboxing, or no control over that sandboxing, it might as well be useless. On Windows, you need to use the Microsoft Store and UWP for sandboxing to work, and the Windows screen compositor is somewhat like X11 on Linux.]

OK, so I went over this post and the claims, here is my takeaway:

  • libfb.so and fbjni are open-source libraries. React Native was originally created by Facebook, and these libraries are used to provide Java interface helpers.
  • react-native-telemetry is an analytics wrapper, but it does not necessarily send data to Facebook. In fact, this telemetry API is used to send data to InfluxDB. InfluxDB can be self-hosted, and I’m quite sure Microsoft would self-host any telemetry data they’d want to collect. Thus, it does not really “phone home” to Facebook.
  • The telemetry data collected can be anonymous, but without a MITM analysis, potentially bypassing certificate pinning, and exhaustive traffic analysis of the application, we can’t really know.
  • react-native-telemetry is optional in any case; its presence does not prove it is being used. That said, it is reasonable to expect that Microsoft is likely collecting telemetry, but (hopefully) only app usage analytics, and not any kind of security data.
  • Telemetry allows developers to improve their application behaviour by automatically collecting information whenever the application does something unexpected, or by analysing usage patterns.
  • With that being said, I personally always try to turn off every telemetry option possible, while being aware that those “telemetry” toggles likely don’t do anything. But that’s my paranoia more than common sense.

I think security and privacy are essential, but this author makes a number of argumentation errors across their post, and it reads to me like they have not actually built a threat model and are merely rejecting things they have not seen, or don’t understand, out of fear rather than on technical merit.

I have said before, and will continue saying it: If you don’t know what you are defending yourself against, you won’t be very effective in achieving security.

For you to be able to deploy measures to ensure your safety, security, and privacy on the Internet, you must know what exactly you’re defending against, and why you’re doing it. You must also be willing to give up some luxuries you used to take for granted. Finally, you must have deep technical expertise (or at least an intention to get informed and educated on the topics you are not an expert in) rather than refusing anything new because it “looks like” dystopian malware.

Actually, I live in a similar situation to this person. The difference is that my work provided me with a cellphone and a data plan, where I keep all the corporate crap: Slack, email, my authenticator, video calls, calendar, intranet, VPN, and everything else.

I could have used my personal phone for this, but I refused to install anything from work on my phone. I know not all employees have the luxury of asking their employer for a free phone for their corporate crap, but, in the worst case, you can just buy some cheap Xiaomi or whatever, and once you finish working, you turn it off and throw it into a Faraday cage. Guaranteed it will not be spying on you or sending any kind of information anywhere.

Also I appreciate you said Linux is insecure, because Linux fanatics often hate me when I say that Linux is probably less secure than macOS.

Actually, I believe Microsoft Windows has come a LONG way. I think Windows XP was a total security disaster. They got serious around SP2 with the integrated firewall but it wasn’t very good.

Windows Vista was a massive security improvement mandating signed device drivers, UAC, and other things. And Windows 7 added the final polish layer with highly granular permissions, SmartScreen, and more.

Then, around Windows 8, Microsoft ran an experiment to see if they could make a version of Windows that could not run arbitrary .exe apps and could only install apps from the store. This was Windows RT, and it didn’t really pan out. People use Windows because they want to run .exe apps, but it would have been an incredibly secure operating system, since every application would have been signed and delivered securely.

Windows 10 then introduced something called Device Guard, along with Memory Integrity and a bunch of other Hyper-V based robust security features that Linux likely will never have due to its development fragmentation.

And Windows 11 introduced mandatory TPM and other security features that make the Windows platform really secure. Probably not as secure as macOS, because Apple maintains a monopoly and complete vertical integration between hardware and software.

And to be fair to Linux, SELinux is really an amazing piece of security software.

But yeah. Windows is more secure than ever, but of course, users will be users, and they will just double-click a ransomware executable and run it with admin privileges, completely destroying their computer in the process.

But anyway, this is an entirely different discussion altogether. I might make a post on Windows security some day, maybe, if people are interested.

I would like you to also acknowledge the note I have mentioned below

But yeah, the idea that Linux is more secure than macOS or even ChromeOS has never been more untrue.

Versus Windows is where it gets debatable.

1 Like

Yeah I have nothing to add to your note, I agree overall.

X11 is a security disaster, but Wayland doesn’t seem to be 100% there either (I believe some blind users were complaining about accessibility features missing from Wayland that worked fine with Xorg).

The fact that any process can read any part of the screen in X11 should have never happened, but it sure was convenient and for many years worked very well.
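
To make that concrete, here is a minimal sketch (using the third-party python-xlib package, which I’m assuming is available) of how any ordinary, unprivileged X11 client can read the entire screen, with no permission prompt involved:

```python
# Minimal sketch: any ordinary X11 client can read the whole screen.
# Assumes the third-party python-xlib package (pip install python-xlib)
# and a running X11 session; no special privileges are required.
from Xlib import display, X

d = display.Display()        # connect to the current X server ($DISPLAY)
root = d.screen().root       # the root window covers the entire screen
geom = root.get_geometry()

# Ask the server for the raw pixel data of the whole screen. The server
# hands it to any client that asks, which is exactly the problem.
img = root.get_image(0, 0, geom.width, geom.height, X.ZPixmap, 0xFFFFFFFF)
print(f"captured {geom.width}x{geom.height} screen, {len(img.data)} bytes of pixels")
```

On Wayland, the equivalent has to go through the screencast portal, which is why you get a permission dialog there instead.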

And yeah, Linux does not really have any sort of “real” sandboxing. Flatpaks are also not really that great for security reasons btw: https://flatkill.org/2020/

Flatpaks can be properly isolated, and I have started seeing badges in the Fedora software store indicating whether a Flatpak requires privileged access to the host system, but as of now, there is zero enforcement. If you want a certain app (just randomly thinking of one, let’s say Bitwarden) and the Bitwarden developers do not respect security boundaries (and again, I don’t know if this is the case; I’m just assuming it so I can make a point), your only options are to install the app anyway or not install it at all.
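
For what it’s worth, you can at least see up front what a Flatpak declares it wants. A rough sketch (the app id com.bitwarden.desktop is just the hypothetical example from above and may not match the real package name):

```python
# Rough sketch: print the static permissions an installed Flatpak declares
# (filesystem, device, and socket access from its [Context] section).
# Assumes the flatpak CLI is present; the app id is only a hypothetical example.
import subprocess

APP_ID = "com.bitwarden.desktop"  # hypothetical example from the post

result = subprocess.run(
    ["flatpak", "info", "--show-permissions", APP_ID],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```

You can also tighten individual apps with something like `flatpak override --user --nofilesystem=home <app-id>`, but that can easily break the app, so it is not a real substitute for actual enforcement.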

This is in contrast with modern Android apps where you can e.g. refuse them access to Contacts or whatever you want.

1 Like

Where did I say this? I’m sure I never did, anywhere. I feel like you are trying to reduce the content of my messages to a single sentence.

It looks like you are only rating security as end-user security: systems that have no sandboxing = insecure. That does not make much sense. Sandboxing helps on two fronts: attacks on the user (phishing, malicious software, etc.) and reducing the risk that a compromised piece of software can escalate further. But in all these cases the system is already compromised by the time the sandboxing becomes relevant. Now consider on which systems you are probably running more software that needs sandboxing: Linux distros, or Android/iOS with all these proprietary apps that want more rights than they need for advertising purposes? The attack surface on iOS devices, with all their proprietary apps, is much bigger. That is also a reason why nobody cared much about sandboxing on Linux for such a long time (and yes, it is good that Wayland implements some kind of sandboxing).

I was trying to write an explanation while losing the focus of the thread, and therefore kept writing and deleting a lot of text (I could have written a paper, lol). We are currently talking about system concepts and no longer about open source versus proprietary software. Sandboxing and the like can be applied to open source as well as proprietary systems, so what exactly should we compare here? The original topic is more about whether 1,000 lines of proprietary code contain fewer bugs, and especially fewer vulnerabilities, than 1,000 lines of open source code. These are completely different questions. And in this sense, we simply don’t know how secure proprietary code is. Not knowing is already a security risk, because it also means the code can contain hidden malicious logic implemented by the developers themselves (and in apps this happens very often). That is also possible in open source code, but much harder to achieve.

Sandboxing was one example out of many.

I also mentioned X11, which is notoriously insecure due to how any app can just capture your screen and, unless I’m mistaken, also your keystrokes.
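
To illustrate the keystroke part, here is a rough sketch (again assuming the third-party python-xlib package, no special permissions) of how any X11 client can poll the global keyboard state, regardless of which window has focus:

```python
# Rough sketch: any X11 client can poll the global keyboard state through
# XQueryKeymap, no matter which window has focus and with no permissions.
# This is a crude building block for a keylogger. Assumes the third-party
# python-xlib package and a running X11 session.
import time
from Xlib import display

d = display.Display()

for _ in range(50):                 # sample for roughly five seconds
    keymap = d.query_keymap()       # 32 bytes, one bit per keycode
    pressed = [
        byte_index * 8 + bit
        for byte_index, value in enumerate(keymap)
        for bit in range(8)
        if value & (1 << bit)
    ]
    if pressed:
        print("keycodes currently held down:", pressed)
    time.sleep(0.1)
```

More complete key capture usually goes through the XRecord extension, but the point stands either way.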

And I would bet there are more I’m missing. In fact, hold on.

sources:

And while most apps on Linux are FOSS, so you can trust most of them, you cannot do the same on a Mac, so I would still trust an X11 Linux system more than a Mac system (and I am running Wayland, btw).

That is not in any way logical, especially as there are FOSS apps for Mac. macOS is actually a much more open operating system than iOS and iPadOS are, while retaining a lot of their security if you’re not actively breaking it.

And I would trust the Mac’s window compositor more than X11, and would trust Wayland and the macOS window compositor about equally.

1 Like

I don’t think he does, personally. The stream of thought you’re subscribing to is a very logic-based definition of security, which I think originally comes from the cybersecurity industry, where companies are in fact looking for logic-based security whose definition is made up of absolute parts. Parts being: can it survive pentest “X”? Can it survive pentest “Y”? That is not what average consumers are interested in, but they do benefit from this work downstream, since most security features that get developed seem to come from the corporate or military world.

All in all, I just wanted to remark that Librekuhimo’s definition makes a lot of sense to me from a consumer perspective. And it’s the definition I use when I say “do X for more security” (when I made my Signal DistroBox tutorial, for example).

1 Like