Cloud Security Today

LLMs: risks, rewards, and realities

Matthew Chiodi Season 4 Episode 12

Send us a text

Nate Lee discusses his transition from a CISO role to fractional CISO work, emphasizing the importance of variety and exposure in his career. He delves into the rise of AI, particularly large language models (LLMs), and the associated security concerns, including prompt injection risks.

Nate highlights the critical role of orchestrators in managing AI interactions and the need for security practitioners to adapt to the evolving landscape. He shares insights from his 20 years in cybersecurity and offers recommendations for practitioners to engage with AI responsibly and effectively.

Takeaways

  • Nate transitioned to fractional CISO work for variety and exposure.
  • Prompt injection is a major vulnerability in LLM systems.
  • Orchestrators are essential for managing AI interactions securely.
  • Security practitioners must understand how LLMs work to mitigate risks.
  • Nate emphasizes the importance of human oversight in AI systems.

Link to Nate's research with the Cloud Security Alliance.

The future of cloud security.
Simplify cloud security with Prisma Cloud, the Code to Cloud platform powered by Precision AI.

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Matt (00:00.767)
Nate, welcome to the show.

Nate @ CloudsecAI (00:02.586)
Thanks for having me, Matt.

Matt (00:04.098)
This is your second time, which is pretty awesome. I've only had, man, we'll have to think. I think I've only had three guests come back on the show more than once.

Nate @ CloudsecAI (00:13.041)
I don't know if that says something good about me or questionable about you. I'm hoping it could be both.

Matt (00:17.592)
It might be both. It could be both. So I saw that you were at TradeShift for about nine years and you recently left the CISO role there. So I want to dig into that a little bit. know that from our chat before we started recording that you're now doing some fractional CISO work. So tell me just what led you down that path.

Nate @ CloudsecAI (00:43.248)
Sure, sure. mean, well, I'd started at TradeShift, obviously, nine years ago, and didn't have a security program there and sort of had built the whole thing from the ground up. I mean, we started at maybe like 120 people when I joined. And I think at peak, was around 1,600. So got the full gamut of like building it from the ground up. All of the problems and fires you hit with hyper growth and all of those bits. And after nine years, really, I was looking for what I would be doing next. But after...

Thinking about myself being in another role, a CISO role at another company, I really came to the conclusion that what I like is the variety and being exposed to a lot of different things. That's what made me happiest, I would say, when I was at TradeShift, is it was growing so quickly or you'd acquire a new company and you'd be moving into a new market, new product launches. All of that I really loved. And being able to consult across the board, so helping all different companies, mostly software companies, but

many different companies would give me lot of exposure to different software stacks, different ways of working, different verticals within software, all of that. And that just seemed really exciting to me. Plus the ability to, if you were going to work really hard, have that building value for yourself in your own company was also quite appealing. So yeah, I jumped off in, I guess, March was when I officially started or February, maybe. And yeah, it's been

Going very well. guess it's converting more from fractional CISO stuff to general security consulting because it turns out you can tell people what they should be doing and help them understand what the strategy is. And then the next question is, can you help us with.

Matt (02:25.826)
So thanks for telling us about all the problems. Please help us fix it.

Nate @ CloudsecAI (02:28.974)
Yeah, yeah. Well, and you don't want to just give people problems, right? It's like, hey, here's the opportunities you have. Here's how it aligns with your business. Like understanding their business strategy, what are their goals? And then looking at what the program looks like, what does engineering look like? And how can you better enable all these things to come together and support those goals?

Matt (02:48.888)
I've spoken with a number of CISOs over the last, I call it year, year and a half. And a lot of them were feeling the burnout of the position, right? A lot changed over the last year with new SEC regulations. mean, was that part of, how did that play into your decision?

Nate @ CloudsecAI (02:58.16)
Absolutely.

Nate @ CloudsecAI (03:06.672)
Yeah, I think, I mean, burnout is certainly a thing that everybody knows it in security more broadly. And maybe it's just more broadly in the world. think it's happening to a lot of people. That definitely was something, you when I talk about thinking about myself in the next role and, what would make me happy, I kind of imagined that I'm going to start a new role. It's going to be, I'm going to be working, you know, the 60, 70, 80 hours you work when you start.

And it's just, you're going to be getting up to speed and you're going have all these problems. that's kind of normal and par for the course. But then I was thinking about, again, doing this for myself means you're putting those hours in, but you're building something for yourself. And I think that's, when it comes to burnout, I mean, it's, you can work very hard, but if you're in love with what you're doing, it's much harder to burn out. And if it's something where you're working really hard and you're just cleaning up messes all day and no matter what extra work you do, kind of...

Matt (03:32.792)
Hmm.

Nate @ CloudsecAI (04:00.942)
doesn't really move the needle meaningfully for you personally, it makes it much more likely that you're gonna kind of feel that burn and just feeling like it's a slog.

Matt (04:10.096)
Yeah. mean, I know for me, you know, being at a startup now, this is my second startup there for my personality. There's a certain level of growth that I look for in a role. And I've always told people this too, cause people have asked me, how do you know when you should, when you should leave? Right. And part of it for me, one of the, one of the signals that I look for is just like, where am I on that growth curve? And I've told people this before I can.

Nate @ CloudsecAI (04:19.118)
Mm-hmm.

Nate @ CloudsecAI (04:23.184)
That's exactly it.

Nate @ CloudsecAI (04:32.162)
Mm-hmm. That's the best advice, I would say.

Matt (04:35.382)
Right? Like I feel like I can think of one specific role I had. If you look at my LinkedIn profile, you can tell which one this is. I won't say the company, but I stayed too long and I say I stayed too long.

Nate @ CloudsecAI (04:44.396)
You should have talked to me like two years ago and we could have had this conversation and I could have hurried up my transition.

Matt (04:51.51)
Well, I was there almost, almost as long as you were at trade shift. And in retrospect, I can look back on it now and just stay like, probably stayed two to three years too long. And I can see that now because I remember when I did eventually leave, there was a certain level of fear that I had and the fear wasn't, like, this is a new job. was like, man, like, do I have, do I still have what it takes to do this somewhere else? I gotten so comfortable there. I, I knew everyone, like there was like, show up at meetings.

Nate @ CloudsecAI (05:00.442)
Mm-hmm.

Nate @ CloudsecAI (05:13.893)
Yep, yep.

Nate @ CloudsecAI (05:19.396)
Being comfortable is bad. That's the sign.

Matt (05:21.406)
Right? And I think it was, yeah, it's like the former CEO of, I think it was IBM, Ginni Romney, Rometty said, growth and comfort do not coexist. And it's like,

Nate @ CloudsecAI (05:32.91)
Mm-hmm. A thousand percent agree, right? And especially now it's so important because you actually can get stagnant and your skills can sort of get passed by and that's the worst thing that can possibly happen to you.

Matt (05:47.554)
So one of those areas that I think generates that anxiety for people is artificial intelligence, machine learning models, LLMs, right? It seems like it burst on the scene in the last two years that it came out of the blue. Like there was nothing there before and all of a sudden it was here and it was in everybody's face. You recently authored a paper along with the cloud security Alliance called securing LLM backend systems, essential authorization practices. So first question for you is like,

Nate @ CloudsecAI (05:53.154)
Mm-hmm. Yep, yep.

Matt (06:16.192)
When did you first catch the AI bug? when did you first, when did you your first deep dive?

Nate @ CloudsecAI (06:19.428)
Yeah. I mean, for me, it was, I guess, probably two years ago or something when, chat GPT was first released. You know, for all of us, this is like, my God, that just burst onto the scene. And then of course, all the AI people are like, my God, you, you guys have been ignoring us for the last 15 years. We've been working so hard. so yeah, there's for most of us, this suddenly became practical and you could see the use cases and suddenly there was like, this is, this has the potential to be game changing.

Matt (06:36.311)
Yeah.

Nate @ CloudsecAI (06:48.464)
obviously the tech is, has been there for, for a long time, but really kind of what changed it was, was giving that chat interface where it could act as an assistant and people's imagination started really capturing what, this could mean and how this can solve real business problems. And I think that that was it for me. It was like, my God, this, this you could start harnessing this, almost as like a doing some bits of logic within a system. And, clearly there's a lot of ways that can go wrong and ways that that, explodes and self-destructs.

But if we work under the assumption that the growth curve and the improvement curve for this continues the way it has, there's a very clear path to just concrete value being delivered all over the place.

Matt (07:30.284)
And I think that's why I don't know where I found the statistic, but 60 % of enterprises are supposedly planning to integrate specifically generative AI into their operations over the next call it 12 to 18 months. And so there seems to be a real rush to adopt large language models, perhaps without fully addressing security concerns. So I guess my question for you is, yeah, what a surprise. What a surprise. I've never heard that before.

Nate @ CloudsecAI (07:39.856)
Mm-hmm.

Nate @ CloudsecAI (07:53.018)
What a surprise.

Matt (07:57.15)
So first of all, what inspired you to focus on this topic? We'll put a link to the download in the show notes. But was there any particular incident or reason that made you focus on this area of AI?

Nate @ CloudsecAI (08:11.484)
I mean, I think the big thing is just the fundamental difference when you're talking about dealing with a large language model. For the longest time, right, we're building systems and it's pretty deterministic. You have ACLs or whatever your business logic rules are and somebody makes a request, you can run it against whether it's attributes or groups or roles or whatever. There's kind of predefined patterns for how that works and you can return a thumbs up or thumbs down. You can do this or you can't do that.

When you start adding large language models to the mix, now there's like this random sort of number generator in the middle that's not always going to give you the same answer unless you turn the temperature down to zero. But people generally aren't doing that because the creativity is part of what really brings a lot of that value. So it just really stuck to me that, this is kind of an interesting security challenge. As you're building tools,

Like one of the companies I'm working with beyond work, they're building kind of workplace automation using AI, right? So you'd be able to take things that you need a human because it's too complicated for current computer sort of software that we would use, but it's also very tedious and nobody really wants that work. And working with them, I started seeing more practically like, this is, there's a lot of different concerns that that can come up here, especially as you're feeding sort of confidential internal data into large language models.

And then I saw the Cloud Security Alliance was working on this paper. I joined in and started helping contribute to that. And they asked me to kind of contribute more. And I happened to have a lot of time on my hands when that started. So I happily jumped in. And there were some really smart people I got to work with. My partner on the paper, Laura, from over at Elastic. She was great. And then several other folks from Moveworks, from BeyondWorks.

from TradeShift even, all jumped in and contributed. And it was, I mean, it was really a group effort because there's so many, there's so many perspectives you need to have when it comes to this, because it's all so new. Like no one's done all of the things you can do with this. And even people that are kind of on the boundaries, on the edge and are pushing things forward, like they're experimenting and figuring it out as they go. It's not like, they just followed this pattern and they kind of extended it at scale. A lot of these things don't exist and they're inventing it, you

Nate @ CloudsecAI (10:34.787)
as they're building it.

Matt (10:36.726)
What's the reception been like to the, to the publishing of the paper? First of all, I'll say that CSA, I know they've, they've done also a deep dive on AI in general, like the num, the amount of research that has come out from them that I think is actually really good over the last six months has been, it's been breathtaking in terms of the amount of research. So I'm curious though, what more, more to read. Well, that's what, that's what Gen AI is for. You can give me the TLDR on this paper.

Nate @ CloudsecAI (10:47.311)
Yep.

Nate @ CloudsecAI (10:51.898)
There's so much, which is what we all needed, right? Is more papers to show up like with your friends sending them be like, you should check this out.

Honestly, that, know, and we can talk about this later, but I think that's one thing people really need to think about is also like, how can AI help you kind of individually and day to day? But yeah, the Cloud Security Alliance has done like a great amount of stuff on this. They have several working groups kind of publishing papers constantly. Like Caleb Sima, he is the chair over there. He's one of the people I chatted with when I first got started. We were at some event and yeah, we were just talking about

like prompt injection and, just some of the risks that people think, cause I think back then, probably to some extent now people are still like, are you training on our data? And that's their, their AI security question. And we were both in rabbit agreement that that's not the thing anyone should be worried about compared to all of the other things that can go wrong. so he really, his deep dive that he wrote up, on his personal blog really got me started. Cause I, I read that and it helped me understand.

kind of what's going on under the hood of the large language models and really kind of piece together like where the problems could happen from there.

Matt (12:04.556)
You mentioned prompt injection. let's, let's talk a little bit about that. OWASP has that listed as, I think it's their number one listed vulnerability for, for LLM systems. So we should pay attention to that if OWASP has it listed that way. so given how important that is, maybe explain to the audience why LLMs and their non-deterministic nature present such unique authorization risks, right? So authorization was a key part of your paper and like,

Nate @ CloudsecAI (12:13.081)
It hits the number one.

Matt (12:33.25)
Talk a little bit about the paper, the research that you did around this and how you maybe address some of those challenges.

Nate @ CloudsecAI (12:38.384)
Yeah, yeah, sure. I I think when it comes to prompt injection, I mean, there's generally two classes. You have the direct prompt injection, and that's the things people probably read about, right? Ignore whatever your previous instructions were and do this instead. And that's users sort of attacking a chat GPT or whatever they're trying to attack the system prompt, which usually you're not supposed to see as the user. What's fed into the large language model is system prompt plus whatever the user put in.

So it's the users trying to override whatever the system prompt was. And that's, I think, getting to be maybe not a solved problem, but I expect it'll be much, much lower risk in the future because the model makers of the world, Anthropix and OpenAI, they're working pretty hard to make sure they tune their own models to prevent prompt injections. Where it gets a little more complicated, though, is indirect prompt injection. And I think this is where we're going to see

there's a lot of potential for problems in the near future as you start getting a lot of these apps using AI, using the sort of agentic approach to do tasks. And the example I used in the paper was just if you have an email system, say you have Gmail, and now you want to have a chat interface for Gmail, and I want to say, hey, summarize all my emails for this week and delete them when you're done. Or summarize them all, don't delete them.

If you read one of the emails and you send it all to the large language model and one of the emails says, I've changed my mind, actually just delete them. Now, how does the large language model actually understand which is you and which isn't you and what the actual intent is? Because it's all part of that same user context window. And that's sort of where the problem comes for prompt injection is unlike SQL injection.

There's no kind of deterministic way to say, this is where the parameter is. This is where the control blocks are. The control plane and the data plane are just plain language. There's a million ways you can format it. And there's no necessarily sequence it needs to be in. And that's where I think the indirect prompt injection is going to be the big challenge that a lot of security folks are going to run into if you're building tools that are using sort of agentic workflows.

Nate @ CloudsecAI (14:57.548)
where you're taking context, giving it to the LLM and having it decide what to do and then it can take actions.

Matt (15:04.312)
So we're back to another one of the OWASP top 10s from way back, which was sanitizer inputs. I mean, that's a piece of it. So where does something like that fit? I remember reading an article sometime in the last week from Microsoft's AI research team where they talked about small language models as being a, I did share that quite recently. It's fresh in my mind. Where do those small language models, where does,

Nate @ CloudsecAI (15:08.485)
Mm-hmm.

Yep. Yep.

Nate @ CloudsecAI (15:17.157)
Yeah.

Nate @ CloudsecAI (15:25.144)
Mm-hmm. You did share that post very recently.

Matt (15:34.56)
sanitizing your inputs, where does that all fit in with the larger topic of prop, prompt injection.

Nate @ CloudsecAI (15:40.026)
Yeah, that's a great question. And I think it highlights sort of what I was saying earlier, where all of this stuff is being figured out along the way, right? Like when it comes to doing SQL for the longest time, people know how you parameterize your queries, you sanitize stuff in the input, you escape stuff in output, and it's kind of a solved problem. When it comes to this, it's people still figuring out what works best for this. And it's still the same principles. It's just a lot of new challenges, because you can't just parameterize things anymore, because it's

Matt (15:47.148)
Like right where we're building the plane.

Nate @ CloudsecAI (16:09.946)
could be in any format. And when we talk about doing that input sanitization, I mean, in the paper, we called it using validators, but it's more of a just a concept of something that is going to look at the inputs and see if they make sense. Is there something trying to override a directive here? And small language models can be super helpful for that because they're cheap and fast to run. And you really have a sort of narrow need for what you're looking for. So it might be.

looking at emails and you have the small language model with a directive because it's tied only to this email sort of agent flow of, make sure you're reading emails, make sure nothing in here could conflict with previous instructions or that it's not going to say, you know, give commands to an agent that accepts these sorts of commands. And you can just run it through very quickly. And if it, you know, it could be a classifier, it could just say, hey, here's my confidence that this is likely to be a problem or not. And, you you can build your own logic on top of that.

But you need to have something like a small language model or another large language model to really filter that because you can't build deterministic rules anymore. You have to have something that can sort of evaluate it as a whole and then output something that your actual logic can do.

Matt (17:23.16)
Talk a little bit more about, let's just maybe back up a little bit, the whole non-deterministic nature of LLMs. What does that actually mean? Give us a real world example of what that means.

Nate @ CloudsecAI (17:29.029)
Mm-hmm.

Nate @ CloudsecAI (17:37.124)
Yeah, yeah. that's, mean, I think, and we'll probably talk about it later, but it really gets to the fundamental nature of how large language models work, where I'm sure everybody's heard of, it's just predicting the next token, their word prediction machines or whatever people say. And to some extent, that's true, right? It's just, you feed a series of tokens in, which is going to be the, like the system prompt plus the user prompt, and you send that to a large language model, and it's going to run it through and it's going to generate

what it thinks the next token is. And when we say it's probabilistic, what's actually happening there is you take that string that you're putting in, it could be images though, or whatever, but you take your input, you throw it in, there's embeddings that are generated. So you have numbers that represent sort of all of the content that gets fed through the model. And the model is made up of all these weights and parameters. And that's sort of the numeric representation of the world as it stands within the model.

And as it goes through the layers, all of these different neurons are going to affect those embeddings as it passes through the layers. And at the end of the day, it spits out another embedding and that is going to be used to generate probabilities for every token that could possibly be generated. And if people don't know what tokens are, it's just sort of like, you can think it like a word. It might be part of a word, but if we think about it like a word, what's going to happen is for every possible next word, it's going to have a probability that this could be the next one.

And then based on that, it's going to roll some dice. And that's where we talked about the temperature. If, if you have a very high temperature, it means it's more likely to pick something that's not the top most favorited, you know, most probable next token. If you have a very low temperature, you set it to zero. It's always going to pick the most, the next most likely token. but that's where we don't know what the next token is because it's just, there's a probability that it could be any of them. and then there's a random number generator that rolls and it picks.

Matt (19:18.872)
Hmm.

Nate @ CloudsecAI (19:34.446)
you know, whatever, whatever it is that comes up weighted based on a temperature and whatever those embeddings came up with. So it gets, I, it's something that I definitely had to read through several times to kind of figure it out. but that just happens over and over again, right? Like it generates a token and then it feeds it through the next time. And it keeps doing that until it gets to end of response type of special character. And then that's, that's what gets fed back out.

Matt (19:44.194)
Yeah.

Matt (19:57.228)
Would it be accurate to say that with a deterministic system, you could ask it a question multiple times and get the same answer as opposed to non-deterministic?

Nate @ CloudsecAI (20:05.146)
Yeah, I mean, you're going, if you think of, you know, whatever another system might be, if you asked a question, it's going to do some sort of database lookup that's going to return whatever the answer is based on a fuzzy match or whatever, whatever logic that you've programmed in, but it's static all the way through. There's no, there's no randomness. There's no probability. If you formatted the question in the exact same way and the backend systems have stayed the same, you're always going to get the same answer. And that's kind of the key differentiator.

when it comes to why people are so interested in building systems with large language models is it's maybe not creativity, but it's that ability to kind of take context holistically and do new things with it that you just, it would be way too complex to do with a traditional system.

Matt (20:50.56)
So based on my 20 plus years in cybersecurity, I'm going to imagine that LLM systems will likely remain to remain vulnerable to different forms of prompt and ejection attacks for the foreseeable future with, with that kind of in some assumption in mind, like, how do you, how should maybe organizations be thinking about the need to balance all the great things that come along with, with AI and then the need for human oversight? Like, where do you, where do you kind of see that fine line between

human and automation from a system.

Nate @ CloudsecAI (21:22.096)
Yeah, I think, I mean, that's a great one because human in the loop is certainly kind of one layer of protection you can have. That goes back to the OWASP list, though. You can end up with what they've put on there as over-reliance. And I think it's really important that that's on there as well, because just like with self-driving cars, if it's right 99 % of the time, you start getting complacent. You just click OK, OK, And that's going to still lead to several different problems. I think the prompt injection thing will be

very huge, especially in the coming years. mean, people are just starting to roll out a lot of agentic stuff. There's some companies that have been doing it longer, but for the most part, everybody's again, just getting their feet wet and kind of cautiously rolling it out, whatever, cautiously might mean in sense, loosely used. I think this though gets to where things like security by design, being able to do threat modeling,

Matt (22:07.32)
Loosely used.

Nate @ CloudsecAI (22:19.266)
upfront are really going to pay off in spades. and, you know, I loved when I wrote the paper, I think you did a summary and reposted it in three bullets. And I was thinking, wow, I wrote 30 pages, but I think you, basically, but I had put together over all this time. and really it's like, don't, don't let the large language model itself be in charge of any specific decisions when it comes to like authorization.

authentication. Like you really want to design the system so you can lean on those deterministic parts. So that if I'm sending a query to delete my email or something, you need to make sure that it's me saying that. And if there's a token or whatever it is on the backend system, if there's an authorization check with the mail server, that needs to happen outside of the context window for the large language model. Because once it goes in there,

it can be tricked, right? You can manipulate the contents in there through any number of ways where it's just whack-a-mold to try to prevent. So you really want to keep all of those bits, when it comes to authorization, authentication, kind of out of band and tied to the deterministic components.

Matt (23:25.688)
So you mentioned specifically in the paper, the orchestrator saying the orchestrator plays a critical role in managing the interactions in large language models. Maybe let's, let's split this up a little bit first, maybe explain what an orchestrator is, what it does, and then two, why securing the orchestrator is so important. And then I'll even tack a fourth piece onto it. If you need a reminder of this later is, is just, will, I will come back, but just to be thinking like, what are some of the common pitfalls?

Nate @ CloudsecAI (23:49.934)
I will definitely need a reminder later.

Matt (23:55.35)
that organizations may face specifically around that.

Nate @ CloudsecAI (23:58.51)
Okay, perfect. Yeah. I think, I mean, the orchestrator and this is where it was really interesting as we were writing the paper. I'm like, well, what, what do we call this thing here that's doing stuff? And I'm looking online and I'd asked people from like the Google Gemini team and, and some other like major players. And I'm like, what, what do you call this? and everybody kind of shrugs are like, there's no kind of terms for any of this stuff necessarily.

And recent Horowitz had put out a paper on kind of, hey, this is sort of what large language model system architectures look like. They had an orchestrator in there. I borrowed from that and took their terminology when it came to things like orchestrator or validators. And really it's not necessarily a single component. It's more of a logical concept. And what it is is if you have people where you're using Lang chain or autogen or Blamendex or whatever, this is

that's your orchestrator. It's the part where you're doing the coordination, where you're taking the input from, you know, if a user types into chat that goes through load balancer, hits a front end web server that makes a call to a backend server, that backend server then might pass this on to the orchestrator to say, hey, here's what the user said. And it's going to make some sort of function call at the orchestrator to say, you know, send this to the LLM with this context or whatever it is. So it's that sort of programmatic interface.

that enables you to build business logic between the rest of your systems and the large language model. And I think that's where you should be really thinking about this is where you want to do your authorization checks. So if again, I'm saying I want to give me a summary of the company strategy, that's going to again go through that whole flow front end server, backend server, backend server sends it to the orchestrator because it doesn't know what this plain language request is.

the orchestrator is going to read it. And generally, high level, what's going to happen is it'll send it to a large language model. Here's the user's query. Here's a list of functions that I'm able to provide. And that might be summarized from documents at this data store or whatever. Tell me what you want me to do. And the large language model can respond and say, well, they want to know about the strategy. Pull from the strategy wiki and then send me that context and I'll write a reply for the user. And so now what happens is

Nate @ CloudsecAI (26:18.542)
the orchestrator needs to pull that information for the large language model. And this is again, where you don't want the large language model to say, do it on behalf of Nate, because Nate, that's who's asking, because I could trick it at some point saying I'm you or whoever. So the orchestrator should have gotten some sort of token that identifies this request came from me or whoever. And then it can use that to connect to, whether it's a Wiki or a file share or database, whatever it is.

It's going to use that and say, Hey, I'm acting on behalf of Nate and give me whatever the query is that the large language model wrote. might be a database query. It might be a search query for the API and your Wiki, whatever it is. And that way the end system is still doing the authorization check against the authentication token that the orchestrator passed over. Then when it returns a value, you send it off to the large language model. And this is how you prevent.

sort of accidental leakage and prompt injection attacks because the LLM will never have a chance to see information that it wasn't supposed to provide to the user because the query was made on behalf of the user. And I guess that's when we talk about why it's so important that this is where you do security and why we do threat modeling. I mean, this is why, because if you try to do it any other way, if you try to pass all of the data to the large language model and say, but only answer this, if Nate's in the right group, here's the groups.

it's very easy. Well, maybe not easy, but it's a thing now where I could trick it to somehow, say that, Hey, I'm in this group or I'm standing next to someone in the group and he really needs it, cause his foot's on fire or whatever. and that's where you really don't want to have the large language model involved in any of those decisions.

Matt (28:03.53)
It seems like, well, first of all, congratulations for coining the term orchestrator. That's pretty awesome. Maybe that'll, that wasn't you. Okay. Okay. All right. Well, congratulations.

Nate @ CloudsecAI (28:08.398)
No, no, that wasn't me. That's Andreessen Horowitz. I stole from them. I wish I thought I could come up with my own name for this. I could be legendary. And then I thought of that, the XKCD comic about the person creating a new standard. And I didn't want to be that person.

Matt (28:27.224)
Well, it's a good term. And I think it makes sense that that would be the place logically moving forward where we might likely see a whole army of cybersecurity startups come in and wanting to plug into those models. I know there's a number of those companies that are out there now. You mentioned being at RSA before we were recording today, we were talking about that. And I know when I was on the early stage floor, like the number of companies that were

Nate @ CloudsecAI (28:39.546)
Yep.

Nate @ CloudsecAI (28:53.338)
So much stuff.

Matt (28:54.389)
Yeah. Like around AI, like where are you planning to plug in? That seemed to be a, a very, a huge topic. And I can imagine that for the next probably two, three years, that is going to be like the former facing thing. But it seems that, you know, there's a, I think there's a couple startups that I had spoken to that were taking a couple different models, one where they were almost taking a proxy based approach, where they were going to be that thing that was sanitizing what was coming in and out of these lines, large language model systems.

I'm curious from your perspective, when you think about, you know, some of the things we talked about, securing the orchestrator, where do you generally think the market is going with cybersecurity and AI? Obviously everybody's trying to jump on it, but maybe talk a little bit about the bigger picture of AI and cybersecurity from your experience. Like what's real today? What's FUD? Like what's actually real?

Nate @ CloudsecAI (29:47.856)
Mm-hmm.

I mean, I honestly think when it comes to, if we're looking again at bigger picture, I think the bigger impact people are gonna see is the integration of AI into tools. I mean, I think there's certainly gonna be a, like you said, there's a proliferation of tools that are gonna help secure the flow of data back and forth. I think that's a really hard place to play in though, because a lot of the patterns aren't sort of defined. at some, you what point does the model

just get better at doing prompt injection, for instance, or protecting against prompt injection. And you don't have to worry about that as much. So I think the bigger impact, though, when we talk about security as a whole is how can this help security folks, defenders out there, do their job better? How can you be able to do more with less and make yourself X amount times more efficient? And I think there's huge opportunities there. I think...

You know, the AI in the sock will be a big one. That's certainly one where I think it's, there's kind of a slam dunk case, right? It's, it's, there's a lot of stuff. There's a lot of noise. I mean, even people who do that don't really, you kind of aspire to move to something else because no one wants to sit and look at logs all day and try to filter out what's noise and false positive and whatever. that's going to be the perfect type of task, for AI, right? Any of this sort of things that.

Matt (30:57.285)
yeah.

Nate @ CloudsecAI (31:15.76)
Require taking in large amounts of context sort of making sense of it. It's unstructured ish And being able to correlate and maybe take some actions or fire some alerts so I think when it comes to where we're gonna see the biggest impacts it's really gonna be in our tools because You can talk to anyone in security, right? We all have tools that are okay I shouldn't find but the practicality of them is like it's just a there's a lot of noise out there Finding the signal in that noise is is very difficult and most teams, you know are

are somewhat understaffed and even the tools they do have, they're often very, very underutilized. So anything that kind of makes it where like, hey, this thing can take care of itself now, it's just gonna feed you the meaningful outputs you need to know about. I mean, that's going to make life immeasurably better for so many people.

Matt (32:04.408)
Yeah, I think it's going to start with the sock. I think that's a huge piece of it. Like you mentioned, I think automation in general, even if we look at, like what we talked about years ago with RPA, robotic process automation, like RPA was primarily used for those, the most rote boring tasks, things that no one exactly, exactly. And I think that's where, I mean, there's, there's plenty of companies. remember being at, at RSA at a, at a founder's breakfast and there was a company, I think it's.

Nate @ CloudsecAI (32:11.056)
Exactly.

Nate @ CloudsecAI (32:21.508)
Yeah, super brittle, easy to break. Yeah.

Matt (32:34.68)
drop zone AI maybe. And that's exactly what they did, right? They focus on going to a company and basically saying like, hey, what is your level one look like in the SOC? We can basically replace that for pennies on the dollar. Do it faster, better, and then we'll show you the value, and then we'll move on to level two. And so to me, that seems like the most easiest place to start. what I see is companies thinking and listening to too much marketing from so many vendors.

Nate @ CloudsecAI (32:35.962)
Mm-hmm. Yeah. Yeah.

Matt (33:02.456)
of like truly advanced use cases, which I don't, don't think we're there yet. I don't know if you have thoughts on that.

Nate @ CloudsecAI (33:06.788)
Nope, no, yeah. mean, it's everybody's building towards it and I have no doubt we'll get there, but this is where you're going to start, right? Like you start at level one sock, you start working your way up. And the interesting part I think is to build out the sort of automation in the sock, you're already doing a lot of integrations to systems where then you could do further automation. So not only are you just saying, hey, I've pulled these alerts.

I can see the status of these systems and I can say this is 90 % confidence, this is a problem. It's not that far of a reach then to say, if this is a problem and I have the context of all of your systems, because I'm integrated into all of them, what would be the next smart steps to take to automate the response? And this is where I think there's a lot of people worried about the of the offensive bits of LLMs and being able to enable anyone to attack anywhere and have them get skills beyond.

these skills they would normally have. I think when it comes to defenders, it's going to be even a much larger impact because now you can respond to these alerts at the speed of whatever it is that your system runs at and it can apply whatever protections or remediations or responses, whether that's restarting a node that maybe it looks like someone was trying to get persistence on, applying a patch. You can do all of this stuff and it's...

There's a lot of things that need to fall into place to make that work, but I don't see any reason why that won't be where we end up in the next few years.

Matt (34:38.904)
Yeah, I think that makes sense. And I, I agree with you on that. So let me ask you this. Let's, when you think about based upon your research, what are maybe some recommendations that you would give a security practitioner who's listening, who's like, okay, this is, this is really great. I feel educated, but what are maybe some actionable steps they can take to either start securing LLM back, back, back end systems, or just how should they be thinking about

their organization, which is undoubtedly already adopting this in so many different ways. What are, what are maybe some of your top recommendations?

Nate @ CloudsecAI (35:11.48)
Yep, exactly. I mean, the biggest thing is get hands on with it. And this probably ties to what we talking about earlier, how you can use AI to help you. One would be just start playing around, build something really simple. It doesn't have to be a massive product or anything. You can just build like your own chat bot that pulls whatever you want to make it pull from. You can generate embeddings in a text file so you can see how RAG works. You don't need to.

set up a vector database and do all these things. The other thing is really spending the time to understand how do large language models work. I mean, we talked a bit about it with embeddings and tokens and token generation. It can be really overwhelming at first. You start looking at this and you're just like, you read like, an embedding. Okay, let me see what the embedding is. And then it's like, it's a numeric representation of a high dimensional space. And then you're, you it's not really encouraging if you don't have a math background.

but you can, you can kind of now you, have Claude, you have a chat GPT, you can ask, Hey, explain this to me in more simple terms and you can go back and forth. have the world's most patient tutor working with you. so likewise, if you're trying to, to set up something, you want to get Lang chain running, you want to just build a simple thing that you can ask to read your calendar or something, you know, you could build a simple integration to your Google calendar.

and have Lang chain make calls to that so now you can chat with your calendar. That's the sort of thing where even if you don't feel comfortable doing that, you don't necessarily have the skills. I think it's really something everyone needs to do is figure out how can I always think about using AI to help me do my job better? Like how can I be more efficient? A lot of people, know, did you use chat GPT to write that? You know, it feels like you're cheating or something. Obviously, if you just take stuff, copy paste it over and...

that'll show, right? Like it's not, you shouldn't do that. But if you can use it to be more efficient and whether that's brainstorming, making sure you're thinking through everything, you know, having it as a sparring partner, if you're trying to poke holes in an idea that you have, whatever it might be, like how can you integrate this into your day-to-day work? I mean, I think those are kind of the two biggest things, understanding how they work, because that gives you a deeper understanding of the actual security risks. And then how can I use this day-to-day to become

Matt (37:05.77)
Yes.

Nate @ CloudsecAI (37:34.132)
you know, just a better whatever it is. I mean, it doesn't even have to be security, right? Like anything you're doing, you now have a pseudo expert that, you know, might make things up occasionally, but that also helps you understand what the risk of hallucination is and what the risk of non-deterministic outputs are because you firsthand see the sorts of problems that it can generate and how you're dealing with it. And that kind of can inform how you think about building these systems as well.

Matt (38:00.608)
Is it safe to assume that Chad CPT wrote your research paper?

Nate @ CloudsecAI (38:04.284)
I don't do any of that. It's a, just throw it all in. I'm like, write a paper on, security. It shoots it out. don't write any of it.

Matt (38:12.258)
You just, and then you run it through grammar lane and it's good to go.

Nate @ CloudsecAI (38:15.024)
I mean, the thing with the stuff out of chat, GBT and cloud is you can just instant anyone reading it just knows right away. mean, it's.

Matt (38:22.464)
It is so easy to tell. I work with ChatGPT and some other models on a daily basis for the last at least 18 months. And when I see certain things both internally, yeah, there's just certain words that just like, is the one? There's this one phrase that talks about it's like the cybersecurity landscape, like in the digital realm. And I'm like,

Nate @ CloudsecAI (38:34.35)
robust, enhanced.

Nate @ CloudsecAI (38:41.946)
Big on adjectives.

Nate @ CloudsecAI (38:47.736)
Yep, things like that, exactly.

Matt (38:48.512)
It's just, it's just so obvious to tell. And I have some, I won't say who, but I have some family members who are in university right now who did get into some trouble for doing this more than, more than once. So.

Nate @ CloudsecAI (38:58.448)
I mean, it's part of the cycle of, know, that's, that's gotta happen to people. and, and I mean, there, that is going to be a challenge, right? Because we are going to have a bunch of, content on the internet. There already is that's just generated. And it's like, if we're training on that, that's not great. But I think that's where, and going back to understanding how models work, the new generation of models where they're finding a lot of big improvements is really curating what they use for training. because making sure you have high quality inputs, you can actually have.

a higher quality output on a smaller model. yeah, again, knowing how the models work, how training works, that's really, really something that I think anybody working in the field should dig into and make sure you understand. You don't have to understand at the deepest level right away. Just understand that like there's going to be multiple layers. Start at the top, get a basic understanding, you know, every day or every week, whatever you want to do, dig in a little bit further. When something kind of piques your curiosity, you hear something you don't understand.

go dig into it a little more and figure out what that actually means. I mean, with Baby Steps, you can kind of put it together. won't take that long. And the benefits it pays are just massive because there's a lot of people who don't really understand what they're talking about with this. You know, that's what I see through my own customers, prospects and customers when I'm on a lot of the sales calls. The questions we get are, they very clearly show that whoever's asking them doesn't really understand what the real risks are.

Matt (40:24.792)
So you've been doing cybersecurity now for over 20 years. I'm curious, what have you learned over those last 20 years, maybe from your time in FinTech? What are some of those things that influence maybe how you approach security today and specifically AI systems?

Nate @ CloudsecAI (40:42.874)
Yeah, I mean, I think, you one of the things for the last 20 years is, is we found, I mean, humans are the weakest link. We all kind of know that. And I think it's, it's really important to think of large language models, sort of like humans, right? Like you wouldn't, you wouldn't have a Steve from the mail room run an authentication ticket down to someone else and like give it to whoever to decide, whether or not you should, he should retrieve the file for you. Like that's just kind of a bad idea. and so when I think about.

sort of the biggest learnings is we have a really good idea of what works in security already. You know, like when it comes to best practices of, of managing, you know, vulnerabilities, how we deal with, like passwords, things like that. We know it works. We know it doesn't work. And we can apply a lot of that same stuff to large language model based systems. I mean, it just requires that we understand.

Again, how does the large language model work and what are the implications of that? And then just taking our kind of base knowledge again about what works, we can leverage a lot of those same deterministic systems and patterns and kind of shoehorn the large language model in, in a way that lets you take advantage of all of the flexibility and the creativity that they enable, but at the same time kind of making it within the bounds of the framework that we know works when it comes to who has access to what, who are you?

How do I know that? All of those sorts of things.

Matt (42:11.416)
So what's next for you? I saw a post on LinkedIn. I think it was in the last week. You joined Scale Ventures. What's next for you?

Nate @ CloudsecAI (42:18.288)
Mm-hmm. Yeah, it's, I mean, I'm still full steam ahead building the CloudSec AI thing. So sort of security AI consulting. But yeah, I wanted to see when it came to working on venture, like how do things work on that side? Like what, how's the sausage made, so to speak, when it comes to looking where markets are going to be, just sort of the thought process that goes into evaluating companies and markets. I'm just.

fairly curious about it after being in, you know, sort of the venture funded startup space for the last 10 or 15 years. So it's a short term thing. I'm working with them for the next three months or so, helping sort of evaluate where things are going on the AI security space. But it's been really great. mean, seeing how the, again, the thought process is really what's important to me, is what I'm kind of taking away from this. And then hopefully I can give something back as far as base knowledge of

how would this play out more practically with security practitioners? But yeah, I'm really loving just seeing how things progress when it comes to thinking through investments. Why would something be better than the other and really tying that to thinking several steps forward? they need to be thinking five, six, seven steps out. And when it comes to AI-based investments, it's really hard to do that because it's so dynamic. Things are moving so quickly and there's so much we don't know.

So having so many smart people kind of putting their heads together on that is just, super fun to be a part of.

Matt (43:50.914)
And I've spoken with so many very early stage startups that are trying to go into this space. And I've listened to probably half dozen of their ideas and I'm like, they all sound great. like, I don't know. It's still so early. Like, is it the right one? So that, that sounds super interesting to me. think that's going to be fun for you.

Nate @ CloudsecAI (44:07.524)
bit of a feels a bit like throwing darts out there because there's, don't know what the final patterns where we're going to end up on. You know, we don't know what the foundational model companies are going to swallow and put into kind of their own functionality and like, you know, that that was a startup until now it just became part of, of open AI, standard functionality and no one needs them anymore. I mean, I think there's, there's so much of that that's going to be happening and it's, it's just really cool because it's happening so quickly. mean, I can't remember a time where I've seen things move this fast.

Matt (44:38.56)
Is there anything else I should have asked you or anything else you wanted to bring up?

Nate @ CloudsecAI (44:43.504)
You know, I now that you're asking me this, I know I had a question because I was like, this is one of those clever questions I should have a clever answer for. And I did have one before we started. And now I've been thinking so much about all this AI stuff. It seems to have alluded my grasp. You probably should have asked me if I really remembered my question I was going to have for this.

Matt (44:51.096)
you

Matt (45:06.508)
that's all right. It's, you know, I've had some guests who, when I asked the question, we then go on for 15 minutes because, and that's, it's sometimes the best part of the podcast. And then I've other guests who are just like, nope, covered everything. So.

Nate @ CloudsecAI (45:18.232)
Yeah, they do really well in interviews for employment as well, I'm sure.

Matt (45:23.394)
They sure do. They sure do. Well, Nate, as usual, it's been great having you on the show. Thank you so much for coming on, sharing your knowledge. It's been exciting to hear about what's next for you. And thanks again for doing the work with the Cloud Security Alliance. We'll post that in the show notes.

Nate @ CloudsecAI (45:24.816)
I

Nate @ CloudsecAI (45:40.976)
Well, thanks again for having me. That was really fun chatting.

Matt (45:43.992)
All right, thank you.

Nate @ CloudsecAI (45:45.39)
Bye.