Cloud Security Today

SBOMs: Good but less than a silver bullet

Matthew Chiodi Season 3 Episode 9

Episode Summary

On today’s episode, Senior Advisor and Strategist at the Cybersecurity and Infrastructure Security Agency, Allan Friedman, joins Matt to discuss SBOMs. As Senior Advisor and Strategist at CISA, Allan coordinates the global cross-sector community efforts around software bill of materials (SBOM). He was previously the Director of Cybersecurity Initiatives at NTIA, leading pioneering work on vulnerability disclosure, SBOM, and other security topics.

Before joining the Federal government, Friedman spent over a decade as a noted information security and technology policy scholar at Harvard’s Computer Science Department, the Brookings Institution, and George Washington University’s Engineering School.

He is the co-author of the popular text Cybersecurity and Cyberwar: What Everyone Needs to Know, has a C.S. degree from Swarthmore College, and a Ph.D. from Harvard University.

Today, Allan talks about SBOMs and their adoption in non-security industries, Secure by Design and Secure by Default tactics, and how to make software security second nature. What, exactly, is an SBOM? Hear about how SBOMs could have helped against significant attacks, the concept of antifragility, and why vulnerability disclosure programs are so important.

 

Timestamp Segments

·       [02:27] Allan’s career path.

·       [05:10] Allan’s day-to-day.

·       [06:15] What has been most rewarding?

·       [08:00] SBOMs in non-security startups.

·       [10:50] Real-world examples of Secure by Design tactics.

·       [17:30] Will software security ever seem obvious to us?

·       [19:30] What is the SBOM, and will it solve all our problems?

·       [23:41] Could an SBOM have helped against the SolarWinds attack?

·       [27:52] Memory-safe programming languages.

·       [30:16] Misconceptions around Secure by Design, Secure by Default.

·       [32:00] The importance of vulnerability disclosure programs.

·       [35:37] Antifragility in cybersecurity.

·       [41:47] VEX.

·       [44:29] How to get involved with CISA.

·       [48:00] How does Allan stay sharp?

 

Notable Quotes

·       “Sometimes, organizations need a good excuse to do the right thing.”

·       “It is bananas that software that we use, and pay for, still delivers with it not just the occasional vulnerability, but very real risks that require massive investments from customers.”

·       “When tech vendors make important logging information available for free, everyone wins.”

·       “The SB in SBOM doesn’t stand for Silver Bullet.”

 

Relevant Links

Email:              sbom@cisa.dhs.gov

Website:          www.cisa.gov

LinkedIn:         Allan Friedman

 

Resources:

Open Source Security Podcast

Risky Business Podcast

The future of cloud security.
Simplify cloud security with Prisma Cloud, the Code to Cloud platform powered by Precision AI.

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

**Auto transcribed, expect weird stuff not actually said by the guest or host**

[00:00] This is the Cloud Security Today podcast, where leaders learn how to get Cloud security done. And now, your host, Matt Chiodi.

 

[00:15] Matt Chiodi: In this month's episode, we feature Allan Friedman from the Cybersecurity and Infrastructure Security Agency, otherwise known as CISA. This is a fairly new agency in the United States, obviously focused on all things cybersecurity, and I wanted to have Allan on specifically because he is an expert in the field of software security, and something he has been really passionate about is the whole concept of the software bill of materials, otherwise known as the SBOM. So, in this interview, we talk about everything from the SBOM to the white paper that CISA released back in April of ‘23 on the concept of secure by design and secure by default, and what I really appreciated about Allan is that there's nothing economically motivating him in anything that he is saying. He's truly a public policy expert, and I think this can really help you and your organization, because one of the things we'll talk about is some of the economic drivers that are behind secure by default and secure by design. So, as I usually say, get your pen and paper, get your notepad out, take some notes, because you're going to learn about not just software supply chain security, but where concepts like secure by design and secure by default can fit into your security program.

 

One other favor I'd love to ask you: if you love the podcast, please give us five stars on Apple Podcasts or wherever you listen. It makes a huge difference in terms of the reach of our program, and if you don't like it, as your mom would always say, just don't say anything, but we hope you do. Drop us a note. We'd love to hear from you. Cloudsectoday@gmail.com. Again, that's cloudsectoday@gmail.com.

 

Allan, welcome to the show.

 

[02:13] Allan Friedman: Thank you so much for having me.

 

[02:14] Matt: This is going to be fun. All right. I've never worked in government. So, whenever I see somebody in government, I'm always super intrigued. How did they get there? So, career question for you is, how does one become a senior advisor and strategist for CISA? What did your career path look like?

 

[02:37] Allan: Like most of the people in the world that I know who have really interesting careers, it's very much been a random walk. I started off wanting to be a computer scientist in undergrad, studying cryptography. Turns out, I wasn't good enough to write code. I wasn't smart enough to write proofs. I ended up getting my PhD in applied economics, which means I'm not a real economist. Postdoc in computer science, realized, "right the first time, still not a computer scientist," and by then, I was mediocre at so many different things that I gravitated to Washington. So, I was one of the first people doing security and public policy together at a think tank called the Brookings Institution, and then, one day, a mentor of mine, who was at the White House at the time, said, "Hey, this tiny part of the Department of Commerce is looking for someone who can help build and innovate new communities around security." I'd been focused on security and economics, we talk about a market failure in security economics, and I wrote about that. So, I joined government. At first, it was only a short stint, to see what the government could do, and the role that I found myself in was trying to build these communities where the government is the catalyst, but the work and the expertise come from a huge, diverse community from around the world, and much to my surprise, I said, "Oh, this is fun and I love the mission." I was at NTIA for a while, working on a bunch of things, coordinated vulnerability disclosure, and I ran the first public government program on "let's bring together hackers and product security teams."

 

I did some work on IoT, and then I was in the right place at the right time, and just sort of encountered the concept of SBOM. It had been around for a while, but it seemed like it was the right time to pull together a community, and then of course, Log4j happened, SolarWinds happened, and everyone's like, "hey, we don't know what's in our software," so I moved over to CISA to help make that a bigger issue. I think of that as almost the corporate acquisition model, going from a startup to the bigger agency.

 

[05:10] Matt: What does your typical day look like?

 

[05:13] Allan: I get to have a lot of fun. One of the things we do is run, or help facilitate, public working groups on all these different topics that are relevant: SBOM. How do we share them? How do we promote adoption? How do we think about what SBOM means for cloud? I meet with a lot of companies, especially small companies. One of my favorite parts of my job is talking to startups. I can't buy things, but I always want to hear what people are engaging in. We work across the government. So, my day-to-day is basically spent on video calls, but with a very rich community, everyone from small open source projects to calls with senior folks at DOD to talk about what implementation and various things look like.

 

[06:06] Matt: That is fascinating. A couple other questions have popped in my mind, but what have you found most rewarding? Obviously, some of what you work on, you can't talk about, otherwise, we'd have to neutralize our entire audience. So, we don't want to have to do that. So, what have you found most rewarding? Any specifics would be fun.

 

[06:32] Allan: It's a little bit of a cliché, but what I love is that the work around software supply chain transparency and SBOMs really has been a community effort. So seeing people say, "Oh, well, this topic is fun. I'm going to go take it over to the Open Source Security Foundation," or "hey, the OWASP CycloneDX community has a new thing. Isn't that great?" Watching this go from a small project to something that everyone in the world is now engaging with has been really rewarding. Japan just announced that they're having a new joint research effort across some of their big tech companies. The folks at the European Commission have said, "Oh, we're going to integrate SBOM into our regulations," and of course, I've already talked about all the amazing startups that are saying, "Well, hey, here's a part of the problem that no one's really working on yet. So, let's tackle that one."

 

[07:38] Matt: It's funny, you mentioned startups a couple of times. I'm currently at a startup, but when you're speaking with startups, I know that they fall into a couple of different categories. You have cybersecurity startups that are trying to build a solution to solve a problem, and then you have startups outside that domain. So, I'm curious, when you're speaking with startups that are outside of a security product, and you're talking to them about SBOMs, where do you see the adoption of that? I know we're going to get into what SBOMs are and all that later, but I'm just curious about the interest from non-security startups. Is there interest? And what does that look like, adoption-wise?

 

[08:22] Allan: I think there is. Obviously, small organizations, scarce resources. One thing that's really fun, when I do get to chat with organizations about this, is that, especially in the cloud-native world, they're saying, "this is easy." Five years ago, when the government got involved, we were all at the same starting point, where a bunch of people were kind of interested, but no one was doing it the same way, and today there's such diversity in where we are. There are people who make legacy OT systems, or just yesterday, I was talking to some folks from the Air Force, who have to maintain the F-16. That's a legacy system, but if you're a new organization, and you're using modern tool chains, this isn't hard for you. So, that's one of these things. The other thing, and this surprised me at RSA a few months ago, is that I started talking to more venture capital organizations. They had reached out to me to learn more, along with their fundees, the companies that they were supporting, and one of the things that I wasn't expecting, because in government we often hear that compliance is bad and risk-based security is good, is that when I was talking to some of the CISOs of non-security startups, they're like, "we want to do this, but we're only going to get headcount if this is a box we have to tick. It will help us do our job if someone makes us do this," and that's always a very paradoxical part, where sometimes organizations need a good excuse to do the right thing.

 

[10:18] Matt: Absolutely, yeah, I've seen that at a lot of businesses, not all, where it's like, "if there's a regulation that requires it, of course, we have to do that, but beyond that point, you really, really have to make a strong business case." So, sometimes, it is nice to have that to point to, saying, "we have to do that."

 

[10:37] Allan: I want to be clear, we're not gung-ho advocating for across-the-board regulation, but it is one of those things where we try to keep in mind, "what are the incentives that drive change?"

 

[10:50] Matt: So, back in April of ‘23, CISA published a white paper called Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security by Design and Default, and in there, it talks about a number of different Secure by Design tactics, such as memory-safe programming languages and parameterized queries. Let's talk a little bit about that. A lot of times, when I read these things, there's a lot of theory in them, and I'm like, "oh, wait a second, what does this look like in the real world?" So, maybe you could give us a real-world example, or a case study, where some of these principles have been applied to strengthen a system's security?

 

[11:31] Allan: Sure. So, first, I'm really excited to be part of this Secure by Design and Secure by Default effort. In shorthand, it's SBD2, because they're important complements, where things out of the box today aren't secure. How do we move towards that approach? I will also be honest, the term Secure by Design has been around for a long time, and I've never liked it. It's a little bit like the old joke of the hot air balloonist who gets blown off course and shouts down, "Hey, can you tell me where I am?" and someone shouts back, "you're lost." It's not wrong. It's just not actionable. What we try to do with this vision is start with an acknowledgement that it is bananas that software that we use, and pay for, still delivers with it not just the occasional vulnerability, but very real risks that require massive investments from customers. So, what do we do about that? We identified a couple of things. One of the prominent long-term visions is based on some analysis of the CVE dataset, this set of vulnerabilities that we all know about. A massive percentage, and we can talk about the metrics and the methodology of how you measure it, but we'll say more than half, of all known vulnerabilities come from memory issues. So one of our goals, a long-term push, is: how do we get memory-safe languages into the world?

 

Another of these issues is logging, where there are a bunch of log details that aren't included by major organizations that would actually give them real and actionable threat intelligence. Many folks may remember, in July, there were some concerns about one company in particular, Microsoft, where you had to pay a little extra for logs, and indeed, one of our partner government agencies, the Department of State, which was paying for these extra logs, was because of that able to find a pretty serious ongoing attack that, as of when we're talking, we're still learning about. Now, CISA had been talking with major tech vendors for a while about how we make security-relevant logs more available and part of the basic support, because, again, organizations may not pay for it without a clear vision of what it's going to do. I'm already paying for support. So, let's talk about making them, again, the default, and we're really excited that Microsoft announced that they are going to be making more of these security-relevant logs available. This is where CISA had been working with them and other partners in a collective, collaborative conversation about which ones are right.

 

Obviously, you don't want to flood someone with every single bit from your NetFlow, but what are the things that actually allow actionable intelligence? In fact, Eric Goldstein, who is the Executive Assistant Director, he's the "Cyber" in CISA, wrote a blog post publicly saying, "when tech vendors make important logging information available for free, everyone wins."

 

[15:33] Matt: Love that. That's certainly something that we have been talking about for years, and it's almost a security tax that's on a lot of products. Let's talk about something even more basic. When you look at a lot of, for example, SaaS subscriptions, if you want single sign-on, if you want SAML capabilities, guess what? Sometimes there is a massive, massive upcharge. In fact, if you go to the website SSO.tax, you can actually see it. It's a wall of shame, and I don't know who the guy is who created it, but somebody created it, and it shows just some of the exorbitant charges for what should be, basically, a seatbelt, and they're charging a massive price for it. So, I think this fits right into that same thing.

 

[16:17] Allan: Matt, your analogy of the seatbelt is something that's been a strong inspiration for the whole CISA team. This model of car safety, automotive safety, took a non-trivial amount of time to be adopted, and once it was, it was a massive support, it was a driver of new products. Why should I upgrade? Why should I buy a new car? Well, this one has all the latest safety features, because it turns out, at the end of the day, people do care about their families and they do care about their organizational security, but we just need to make that the default path, and the designed-in path.

 

[16:58] Matt: In retrospect, I think it was the 1970s that NHTSA made seatbelts a requirement. Actually, I have to go back and look, I think the agency was created in the 70s, and sometime in the late 70s or 80s is when that requirement basically came out, and it seems so basic now. We think about it like, "my god, were there ever cars without seatbelts?" Now, airbags, ABS, some of these things we consider standard, I would never buy a car without them if there was one that was offered. Do you think, in another five to 10 years, we'll look back on software and think, "oh, my gosh, how did I ever buy software without X, Y, Z? That all seems so basic to us"?

 

[17:39] Allan: You know, I think we're going to get there. Today, even if you follow the latest news, we do differentiate between "that's a pretty clever hack" and "wait, how on earth did that attack succeed?" Every organization bases defenses against them, all of our software, and I'd judge accordingly. So, one of my […] is making sure that the security takes behind-the-scenes planning and all the progress. It's still a dumpster fire, but I spend a lot of time sitting on the software side, and one of the reasons why […] chain is because that has become a much more dangerous attack surface, […] don't have an easy time to get in front of today, whether you're attacking products or you're attacking organizations. Security is still terrible, but it is […]?

 

[19:00] Matt: So, we touched on this right at the beginning of the podcast, but the whole concept of the SBOM, or the software bill of materials, is something that has gotten fairly popular, in terms of what I see on LinkedIn and when I'm speaking with customers, and it's also strongly advocated in the white paper. I've seen some people talk about it as if it's a silver bullet that's going to solve everything. So, I know this is a personal thing for you. Maybe walk us through, first of all, what is the SBOM? And then, what are some misconceptions around that, that you typically hear?

 

[19:39] Allan: I keep a pack of Twinkies on my desk as a reminder that if you go to the store and buy anything that you're going to feed your family, it's going to come with a list of ingredients, and why do we expect more transparency from a non-biodegradable snack than we do from the software that runs our organizations, our critical infrastructure, or national security systems? That's the concept. I will say, it's never a good idea to disagree with the podcast host, but I haven't seen people say SBOM is a silver bullet. What I have heard is people saying that other people are saying SBOM is a silver bullet, and I was like, "the SB in SBOM does not stand for silver bullet." It's not supposed to solve all of our problems, and in fact, let's go back to that list-of-ingredients analogy. Will a list of ingredients, by itself, prevent someone in your family from having something that they're violently allergic to? No. Will it, by itself, keep me on my, now a distant memory, 2023 diet? No. Will it keep me on a plant-based diet or help me follow a religious dietary restriction? Perhaps. But good luck doing any of those things without the list of ingredients.

 

So, what we think of SBOM as is a data layer. That's all it is, but it's a data layer that we desperately need, and we don't have today. Once we have that data layer, we'll be able to turn data into intelligence, into action. One of the analogies in security might be that giving a vulnerability an identifier, giving it a CVE number, doesn't fix the damn thing on your network, but all of the tools that we've built, and all of the infrastructure and the processes, and yes, the compliance models, depend on having that data layer, on being able to track that this vulnerability is in fact different from that vulnerability. So that's really where we're trying to go: to create an expectation of transparency.

 

To put it another way: why on earth would you buy software from someone who couldn't give you an SBOM? What does that say about the security of that organization, but beyond security, your total cost of ownership? If you're working with a SaaS provider and they're like, "we don't know what we have," that should be a large red warning sign.
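
A quick aside for readers: below is a minimal sketch of the "data layer" idea Allan describes. It assumes a CycloneDX-style component list, and the component names, versions, and advisory mappings are purely illustrative; the point is simply that once the inventory exists as data, it can be cross-referenced against known vulnerabilities and turned into something actionable.

```python
# Illustrative only: a toy sketch of an SBOM acting as a data layer.
# The document shape is loosely modeled on a CycloneDX-style "components"
# list; the data and the advisory map are invented for this example.

# A minimal SBOM-like document: just an inventory of components.
sbom = {
    "bomFormat": "CycloneDX",  # assumed format label, for illustration
    "components": [
        {"name": "openssl", "version": "1.0.1f"},
        {"name": "log4j-core", "version": "2.14.1"},
        {"name": "requests", "version": "2.31.0"},
    ],
}

# Hypothetical advisory feed: (component name, version) -> advisory IDs.
known_vulnerabilities = {
    ("openssl", "1.0.1f"): ["CVE-2014-0160"],      # Heartbleed
    ("log4j-core", "2.14.1"): ["CVE-2021-44228"],  # Log4Shell
}

def naive_matches(sbom_doc, advisories):
    """Turn the inventory (data) into a list of components to investigate
    (intelligence). Deliberately naive: the presence of a vulnerable
    component does not prove the product is exploitable -- that gap is
    what VEX, discussed later in the episode, is meant to close."""
    findings = []
    for component in sbom_doc["components"]:
        key = (component["name"], component["version"])
        for advisory in advisories.get(key, []):
            findings.append((component["name"], component["version"], advisory))
    return findings

if __name__ == "__main__":
    for name, version, advisory in naive_matches(sbom, known_vulnerabilities):
        print(f"Investigate {name} {version}: {advisory}")
```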

 

[23:20] Matt: In my view, the first time an SBOM came into my purview was sometime shortly after the whole SolarWinds attack, which is when I think people really started to think, at least more broadly, about software supply chain security. To me, that was a big one. In a scenario like that, if there had been an SBOM available for that software, would it have helped? How would it have helped?

 

[23:53] Allan: Great question. That was actually one of the things that, immediately after SolarWinds, we heard, which is, would an SBOM have prevented this? We always try to be very, very clear: the nature of the attack was that the adversary compromised the actual tools that were used by the company. But all of the things that we know we're going to need to prevent this kind of attack, or at the very least to detect it in real time, start with this idea of transparency of your supply chain. So, it is a necessary, but not sufficient, condition. In fact, when you talk to Tom, the CISO of SolarWinds, he's back now, we've been on panels together, where, of all the things we've built, SBOM was essential, but we were also aware that we've built other tools that rest on that data infrastructure and help map to it.

 

The other thing I'll say is that SBOM, I think, is the first major idea of a software development artifact: there's a thing where you can show how it was generated, and you can pass it downstream to your customers, or to a government, and say, "this is what I have." Moving forward, I think we're going to see lots more artifacts. Right now, there's a great discussion, and NIST has a wonderful document called 800-218, which is the Secure Software Development Framework, and it helps anyone get a handle on secure software development, "here are all the pieces," but what it doesn't have is, "and here's how you prove to someone else that you have them." Well, that's the challenge. That's the danger of process standards. They're great, but they're hard to comply with. So SBOM is the first piece.

 

Moving forward, I think we're going to see more pieces of that kind of work, where your tools themselves will be able to securely generate artifacts that you can use for your risk management and that you can share with your customers, and there's a lot of great work happening in the Cloud Native Computing Foundation. Many of your listeners will be familiar with something called in-toto, which is starting to piece all of those together. It's been exciting to see that go from pure research, where it started off with some folks from NYU and Purdue, to now, where we're starting to build it into advanced, very modern projects that are well-funded. I think, over time, you're going to see more and more organizations using artifacts, not just SBOMs, and pretty soon it's going to be expected, but we've got some kinks. We need to make sure things can scale first.
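
A quick aside for readers: here is a deliberately generic sketch of the "artifact" idea. This is not the in-toto format or any standard attestation schema; the field names and file path are invented for illustration. A producer binds a record to the exact bytes of an SBOM it generated, and a downstream consumer can re-verify the record before trusting the contents. A real pipeline would also sign the record, which is out of scope here.

```python
# Illustrative only: record how and when an SBOM (or any build output) was
# produced, bind it to the file's hash, and let a downstream consumer
# re-verify it. Not the in-toto format; fields are invented for this sketch.

import hashlib
import json
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    """Hash the artifact so the record is tied to exactly these bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def make_record(artifact_path: str, produced_by: str) -> dict:
    """Produce a simple, shareable record describing the artifact."""
    return {
        "artifact": artifact_path,
        "sha256": sha256_of(artifact_path),
        "produced_by": produced_by,  # e.g., the build step or tool (hypothetical)
        "produced_at": datetime.now(timezone.utc).isoformat(),
    }

def verify_record(artifact_path: str, record: dict) -> bool:
    """Downstream check: does the file we received match the record?"""
    return sha256_of(artifact_path) == record["sha256"]

if __name__ == "__main__":
    # Create a placeholder artifact so the example runs end to end.
    with open("sbom.json", "w") as f:
        json.dump({"bomFormat": "CycloneDX", "components": []}, f)
    record = make_record("sbom.json", produced_by="ci-build-step")
    print(json.dumps(record, indent=2))
    print("verified:", verify_record("sbom.json", record))
```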

 

[27:08] Matt: We were talking about the white paper and some of the Secure by Design tactics. We talked about memory-safe programming languages. When we think about this in relation specifically to the cloud, you mentioned that when you're talking to startups, or companies that are cloud native to start with, they're building in the cloud, they're using containers, serverless, composable architectures, things like Terraform scripts and whatnot, and you said that it's much easier for them to generate things like SBOMs. With SBOMs and memory-safe programming languages around cloud, my questions are: what are examples of memory-safe programming languages? And where does this all come together with the SBOM? How do some of those help us understand how that comes together?

 

[28:04] Allan: Oh, that's a fun question. So, again, a lot of what we're trying to do with Secure by Design is help shift the community towards the modern tools that we already have. A lot of this stuff won't require inventing new things. The government won't have to say, "hey, let's have some modern programming languages that are memory-safe and really good fits for the cloud-native world," because we have them. Really, when we're talking about the dangers of a lack of memory safety, we mean C. The cloud-native world has relatively little C. There's some, especially as you move into the containerized world, and that's where Rust comes in. Rust is fantastic. It's meant to be understandable and accessible to C engineers, it comes with a lot of community, and it has a lot of great safety properties. A lot of our early attention has been on how we push this into critical infrastructure stuff, but I think we're going to see a lot of this in the cloud-native world, and of course, Golang and Python and other things that are actually used by cloud engineers are memory safe today.

 

So, I think, drifting away from Rails and such, everyone knows that we have to get there eventually, but we have a lot of modern tools and tool chains that already have this built in. So how do we help people make that switch, give them the excuse of saying, "hey, this is now. We're going to declare this officially tech debt and give you the aspiration of making it secure by design," and then it becomes a matter of how we actually go from this aspirational vision to getting organizations to invest in it.

 

[30:16] Matt: So, the white paper talks about some of the common misconceptions around Secure by Design, Secure by Default principles. From your experience, maybe you could share some more misconceptions and how you address them. What have you found?

 

[30:34] Allan: One is just the expectation that we're looking for perfect security, and everyone is aware, once you've spent some time thinking about the philosophy of security, that that's simply not achievable, so that has driven us towards resiliency. That's a big piece of it, and resiliency is one of these awkward terms where everyone knows it, lots of us use it, but we can't really quite define it, and mathematical modeling of resiliency is a really interesting field, but very tricky and hard to map to software engineering. So, what are the resiliency pieces that we want? Well, things like vulnerability disclosure policies, making sure that you have the ability to manage that, and things like actually having declared vulnerabilities for SaaS applications. Too often, we fix a flaw, but there's no documentation, there's no way to track it to figure out, do other people also have this flaw?

 

Of course, going back to my favorite topic of SBOM, we're making sure that not just the suppliers, but maybe the downstream customers as well, have an idea of what's in a piece of software, so that we can better respond when new risks emerge.

 

[32:00] Matt: So, you mentioned the vulnerability disclosure programs, something that has had quite an emergence over the last couple of years. If you think about your career, maybe over the last five or so years, is there any kind of memorable incident or anything that stands out that would maybe underscore the importance of vulnerability disclosure programs, specifically as it relates to software supply chain security?

 

[32:26] Allan: Fun question. One thing I do want to acknowledge is just how far we've come. Those of us who've been in security for a little while need to remember, and again, this comes with "appreciate the wins that we have," that until fairly recently, this was still a red-hot topic, both inside the US government, industry, and the hacker community. "No free bugs" was definitely a famous declaration at DEF CON: "we're not going to give anyone free bugs." Even in the research world, there was some discussion of, is it better just to drop 0-days publicly, or should you work with the vendor? And now, this is not just seen as good practice, but we have senior government officials, again, not just in the US, but the UK, Australia, Europe, France, publicly thanking security researchers as a community for the role that they play in securing all of software. The Netherlands has also been a great leader in this space.

 

One story that I think is quite interesting, that's also relevant to software supply chain, is the Ripple20 bug, which is a series of vulnerabilities that were found in a TCP/IP library, called Treck, that is used in a huge number of IoT devices and embedded systems that are used in critical infrastructure. That was found in 2020 by an Israeli company named JSOF. One of the things they had to do, because they said, "we want to figure out who has this, we want to do the right thing of disclosing to companies that might be affected before we give our Black Hat talk," was go on LinkedIn and look for anyone who was bragging. They paid for the special LinkedIn subscription so they could find people saying, "I have experience with Treck TCP/IP," look at where they worked, and try to do some guesswork, and they did some disclosure to some major manufacturers that they would only have known about because of OSINT work, essentially.

 

So, this is how a lot of these issues are tied together, where you want to be able to tell people that there's a risk, and want to be able to do so in a coordinated way, and you need the data to help you with that. So, we're tying together good security, good resiliency, and good data all together to make it easier and cheaper to do the right thing.

 

[35:25] Matt: I'm going to throw you a little bit of a curveball here. I don't know if you've read the book by Nassim Taleb, Antifragile. Not sure if you've read that or not.

 

[35:34] Allan: Familiar with Taleb. Haven't read that book. Yeah.

 

[35:37] Matt: So, the whole concept of antifragility popped into my head because you were talking about resilience and all that. Antifragile systems are systems that get stronger with certain stressors. Not extreme stressors, but just stressors. The human body is a great example of an antifragile system. Maybe it's not good for all of us, but if you go out for a two or three mile run, it might be really hard the first couple of times you do it, but your system adapts. That is a healthy stressor. People who do intermittent fasting, that is an example of a healthy stressor, that's debatable, but it's supposed to make your body stronger. So, that's an example.

 

I bring this up because a lot of times in cybersecurity, we talk about resiliency, and I think he uses the example in the book of a rock. If you take a rock and throw it against the wall, depending upon the composition of the rock and how hard the surface is, it's going to be fairly resistant. It's resilient, but it's not antifragile. If I throw it against the wall more and more, it doesn't make the rock stronger, generally speaking. So, I'm trying to think about this in terms of cybersecurity, and concepts like secure by design and secure by default. You hear a lot of vendors talk about self-healing and all that stuff. I think that was Cisco's thing, the self-healing network, a decade ago. Do you think it's possible we can get to a point where we can truly develop systems that are antifragile in some way? Meaning that they actually get stronger from, maybe not the extreme, super-focused, nation-state attacks, but from some of those lower ones. Do you see any of this as potentially building towards something like that?

 

[37:28] Allan: I think we can, and there are a couple of examples. Of course, one of the challenges, as we think about antifragility, and Taleb has talked about this in some of his other writings, is the notion that having resiliency and antifragility often requires a certain amount of redundancy, and we know that redundancy costs money. Organizations have gotten good at efficiency for some things, and one of the things we need to do is underscore the efficiency of security as well. One of the things that came to mind as you were talking, that's in the Secure by Design model, is that right now, the common practice for shipping software, or for starting to use a major cloud product and integrating it in my organization, is "here it is, and here are some ways you can make it more secure," and that doesn't lend itself to good organizational responses. One of the things we're trying to do is, rather than having hardening guides, let's sell things pretty locked down, and then have opening guides, or, I don't want to use the term softening, but essentially, figuring out what the integration part looks like so that you're only doing the things that you need to make it work for your organization and your context, and each of those is a conscious decision, an active decision, by an organization to say, "yes, we're turning this piece off." So now you have an organization that can document that, and you can build that into your processes.
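
A quick aside for readers: one way to picture that "opening guide" pattern is configuration that starts at a locked-down baseline and can only be relaxed through an explicit, attributed deviation. The settings, field names, and workflow below are invented for illustration and don't describe any particular product.

```python
# Illustrative only: "ship it locked down, document every opening."
# Settings and field names are hypothetical; no specific product works this way.

from dataclasses import dataclass, field

# Secure-by-default baseline: nothing needs to be set for the safe posture.
SECURE_DEFAULTS = {
    "require_mfa": True,
    "public_api_access": False,
    "verbose_error_pages": False,
}

@dataclass
class Deviation:
    """A single, conscious decision to relax the baseline."""
    setting: str
    value: object
    reason: str        # forces the organization to write down *why*
    approved_by: str   # who or what signed off on the change

@dataclass
class Config:
    values: dict = field(default_factory=lambda: dict(SECURE_DEFAULTS))
    audit_log: list = field(default_factory=list)

    def relax(self, deviation: Deviation) -> None:
        """The only way to weaken a default: explicit, attributed, and logged."""
        if deviation.setting not in self.values:
            raise KeyError(f"unknown setting: {deviation.setting}")
        self.values[deviation.setting] = deviation.value
        self.audit_log.append(deviation)

if __name__ == "__main__":
    cfg = Config()
    # An "opening guide" step, recorded as a deviation rather than silently flipped.
    cfg.relax(Deviation(
        setting="public_api_access",
        value=True,
        reason="partner integration requires the public API endpoint",
        approved_by="security-review-1234",  # hypothetical ticket reference
    ))
    print(cfg.values)
    for d in cfg.audit_log:
        print(f"deviation: {d.setting}={d.value} ({d.reason}; {d.approved_by})")
```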

 

[39:21] Matt: I think that's interesting, because for so long, and maybe this is because software, even as an industry, if you think about other industries, it's still pretty new, relatively speaking, and I almost feel like we started out by making things not secure by default on purpose, because we wanted to encourage usability. We wanted to have usability, we wanted to encourage adoption, so there's some economic drivers behind that. I'm wondering, if part of this effort, in terms of what you guys are doing at CISA, just in terms of market maturity and expectations, if that's going to shift now, because I don't think we necessarily have to do as much encouraging. People know. It doesn't matter what age you are, now, I think people understand, technology's here, it's not going anywhere, the internet's no longer this new thing, and it's almost like, from a software vendor perspective, usability will always be there, the economic drive will always be there, but I think the expectations are what is shifting, in terms of “look, I understand there's going to be trade offs, but when it comes to security, I understand that maybe there are some things, from a usability perspective, that I may need to give up, initially.” So, I'm just curious if that fits with what you guys are hearing and what you guys are looking at.

 

[40:39] Allan: I think you're channeling a lot of the spirit that was built into this effort, the idea that we do need to have some re-orientation around what these products look like, and also, how do we build this into the development processes? How do we make this something that's integrated into our tools and into our behavior? One of the big challenges in broader application security right now, and it's been a hot topic for the last five years, is: what can we do without changing how developers think, or changing their day-to-day work? Anything we propose that says, "I've got a new thing for you. It's only going to be a half hour a day"? Not so much. How can we build this into our processes and our tools, starting with our data flows?

 

Another thing that I think underscores the idea of antifragility is a notion called VEX, which is a complement to SBOM. VEX stands for the Vulnerability Exploitability Exchange. That's my fault. I'm really bad at naming things. Essentially, what it is, is a machine-readable security advisory that can also say that a product is not vulnerable. So, in the SBOM example, and it's not just SBOM-specific, but in the SBOM example, the SBOM says, "I'm using this library." A naive use of the SBOM would say, "Ah, you're using the library. There's a vulnerability in the library, therefore your product is vulnerable." So, we want a way for the person responsible for that product to communicate, "this is not affected. Yes, I'm using OpenSSL, version 0.9, but I'm not affected by Heartbleed, because I'm only using the pseudo-random number generator, and the compiler has ripped out all the other pieces."

 

Now, what this means is, it's a way of rewarding an organization that actually has a good product security team, one that has said, "there's a vulnerability, does it affect our product? No. Great. Let's tell our customers, so they're not bothering us. That lowers our customer support cost, but it also gives them the confidence that we're on top of our risks," and this can work for really any type of vulnerability, like, "Yes, we're aware that this is an open source project that may have very few maintainers, but we're aware of that risk as well."
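
A quick aside for readers: the sketch below shows how a VEX-style statement changes what a naive SBOM match means, reusing the OpenSSL/Heartbleed example Allan gives. The statement shape is simplified for illustration and is not the exact schema of any VEX format (CSAF, CycloneDX, or OpenVEX).

```python
# Illustrative only: applying a supplier's VEX-style "not affected" statement
# to the naive findings an SBOM match would otherwise raise.

# Naive SBOM-based finding: the product ships openssl 1.0.1f, which maps to Heartbleed.
findings = [
    {"component": "openssl", "version": "1.0.1f", "vuln": "CVE-2014-0160"},
]

# Supplier-issued, machine-readable statement: "we ship it, but we're not affected."
vex_statements = [
    {
        "vuln": "CVE-2014-0160",
        "component": "openssl",
        "status": "not_affected",
        "justification": "vulnerable code not in execute path; "
                         "only the PRNG is used and other code is compiled out",
    },
]

def triage(findings, vex_statements):
    """Split naive matches into what still needs investigation and what the
    supplier has already analyzed and ruled out."""
    ruled_out = {
        (s["component"], s["vuln"])
        for s in vex_statements
        if s["status"] == "not_affected"
    }
    todo, resolved = [], []
    for f in findings:
        (resolved if (f["component"], f["vuln"]) in ruled_out else todo).append(f)
    return todo, resolved

if __name__ == "__main__":
    todo, resolved = triage(findings, vex_statements)
    print("still needs investigation:", todo)
    print("resolved by supplier VEX:", resolved)
```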

 

[43:43] Matt: VEX, I have not heard of that. Is this new or has this been out for a while?

 

[43:47] Allan: It's been around for a couple of years, and it is maturing. It's being implemented in a couple of different data formats. In fact, some people are much more excited about VEX than SBOM, especially in the short run, because the SBOM model requires some tools to consume it. I need to be able to read the SBOM, do the mapping, and figure out what the risks are. Whereas, we did a tabletop exercise with the energy sector, and they're like, "in the short run, what we really want is for you to just tell us if we need to worry about a product or not."

 

[44:25] Matt: Well, this has been a fascinating discussion, and I'm just curious, you mentioned some of the working groups that you have at CISA earlier on. So, if someone wants to get involved, maybe they're early on in their career, maybe they're very senior, and they really want to dig in on some of these topics, what are some of the ways they can do this?

 

[44:44] Allan: Well, the email address to get in touch with us is SBOM@cisa.dhs.gov. Or, I've been told that if you just say "SBOM" three times while looking into a mirror, I'll appear. Email's better. We have some open working groups that anyone from industry or academia can join. It's a great opportunity for a student who wants to rub shoulders with senior people from big companies and learn about startups. There are five topics we're working on right now. One of them is this VEX idea. We're defining it both technically and in practice. The second is moving metadata around. I have an SBOM. How do I get it to my customer? I'm a customer, I want an SBOM. It turns out that's an embarrassing obstacle. Right now, the companies that are sharing SBOMs with their current customers are using portals. Those don't scale. The third group is on on-ramps and adoption: how do we communicate the value of caring about software supply chain and make it easier and cheaper to engage?

 

So, if there are any folks in marketing who want to weigh in on a still pretty new project, we'd love to hear from you. We have a group that's focused explicitly on SBOM and cloud, which is something we could spend a whole other session on: what does it mean? There are lots of use cases for why you would want an SBOM from a cloud provider. You're about to sign a contract and you want to know what the tech debt is, as well as for operating it. We also acknowledge that there are some new things about cloud technology, including software that changes daily, hourly, minute-ly, microservices, the idea that different customers might actually be using different software, and then of course, the idea that we want transparency, not just for static dependencies, but for services, and what is service transparency?

 

Then, the last group is on tooling and how we implement SBOM. Those are the folks that are getting deep into the nuts and bolts of what it means to build an SBOM and to consume an SBOM, and making sure that we're harmonized in our expectations. So, there's tons of work to be done. Our job at CISA is to help build those communities and bring that expertise to bear, and also to identify the voices we don't have at the table and make sure that we've got different corners of the open source world and different sectors that want to make sure that their voices are heard.

 

[47:42] Matt: So, there's always so much going on in cyber, and one of the things I always struggle with is just trying to keep up with, I don't even try to keep up with the news, just because I feel like that's impossible, but just there's so much happening, for example, just in the space of AI where you could spend, probably, 12 hours a day, but just in general, I'm curious, for you, what's your method for staying sharp? What does your routine look like?

 

[48:06] Allan: Well, it used to be a lot easier before Twitter started its slow-motion collapse. I will say that I really enjoy the work that Jerry's doing to create the InfoSec community on Mastodon. So, that's a key part of it. There are a couple of podcasts that I listen to regularly, in addition to Cloud Security Today, which is a very useful resource, especially because this isn't my domain, so it lets me keep the big picture. I'll also mention the Open Source Security Podcast, which is two old friends who came out of the Red Hat world and are very engaged on that front, and of course, Patrick Gray's podcast, which is very embarrassing that I forgot, so give me a moment while I remember what the hell his podcast is called.

 

[49:10] Matt: We can put in the show notes.

 

[49:13] Allan: It's Risky Business by Patrick Gray. One of the great things is that we try to track all of these efforts as they come across. Anyone who wants to know more, or just wants to stay loosely in touch with the SBOM world, send us a note at SBOM@cisa.dhs.gov. We have a broadcast list. Maybe once a month, we'll keep you updated. We had a big community meeting with over 1,000 people, called our SBOM-a-Rama, where we got updates from sectors like automotive and financial, and we heard from governments around the world, so we want to continue to have that kind of event as well.

 

[50:04] Matt: It's perfect. Well, Allan, this has been a pretty far-ranging conversation. It's been very interesting. Thank you so much for making time to come on the show.

 

[50:13] Allan: Thanks so much for having me.

 

 

Thank you for joining us for today's episode. To find out more, please visit us at Cloudsecuritytoday.com.