Cloud Security Today

30 years in cybersecurity

December 20, 2023 · Matthew Chiodi · Season 3, Episode 12

Episode Summary

On this episode, InfoSec veteran Aaron Turner joins the show to talk about everything from Cloud to AI. Over the past three decades, Aaron has served as Security Strategist at Microsoft, Co-Founder and CEO of RFinity, Co-Founder and CEO of Terreo, VP of Security Products R&D at Verizon, Founder and CEO of Hotshot Technologies, Founder and CEO of Siriux, Faculty Member at IANS, Board Member at HighSide, President and Board Member of IntegriCell, and most recently as CISO at a large infrastructure player.

Today, Aaron talks about the critical decisions that led to his success, the findings in his IANS research, and the importance of physical vs logical separation in home networks. What are the things that are lacking in current AI services? Hear about the security applications of behavioral AI, Aaron’s approach as he gets back into industry, and what it takes for Aaron to remain sharp.


Timestamp Segments

· [02:49] Getting started.
· [10:53] Aaron’s keys to success.
· [16:40] Aaron’s IANS research.
· [20:42] Physical vs logical separation.
· [24:19] Top mistakes that customers make.
· [26:56] Real-world AI applications.
· [32:13] Thinking about AI and risk.
· [36:15] What’s missing in the current AI services?
· [40:46] Getting back into the industry.
· [45:22] How does Aaron stay sharp?


Notable Quotes

· “Get deep in something.”
· “Make sure you put yourself in situations where people expect you to be sharp.”


Relevant Links

LinkedIn: Aaron Turner.


Secure applications from code to cloud.
Prisma Cloud, the most complete cloud-native application protection platform (CNAPP).

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

[00:00] Intro: This is the Cloud Security Today podcast, where leaders learn how to get Cloud security done. And now, your host, Matt Chiodi.


[00:15] Matt Chiodi: Today's guest has been in cybersecurity for over 30 years. Now, usually, maybe, not always, when you find someone who's been in an industry that long, a lot of times, they're getting lazy, they're falling back, they're not staying up on their game, but that is not the case with our guest today - Aaron Turner. For those of you who don't know, Aaron is an amazing individual. He is someone I've known for many years. I have a tremendous amount of respect for him. What I love about Aaron is that, despite the fact that he has been in cyber for so many years, he is always hungry to learn more, and also share that knowledge with others. The discussion today is fun, because we jumped all over the place. We covered Cloud security to AI. You name it, we covered it.

As I always say, if you love the podcast, give us a five-star review wherever you listen to your podcast. It really helps others find us.

Thanks so much. Enjoy the show.

Aaron, thanks for joining us.


[01:23] Aaron Turner: Well, good to talk with you, Matt.


[01:24] Matt: So, I want to jump right in here. First of all, I want to tell a quick story, because I don't know if you remember this, and I didn't tell you I was going to say this. So, I first met you, I don't know how many years ago it was, but if you remember, we were at some kind of party at RSA. It was actually at somebody's house, and the way I got there, my boss at the time was like, “Hey, there's this party. It's focused on identity and access management. I want you to go,” and I said, “I don't know anybody.” He's like, “just go.” So, I show up, this house is full, everybody's drinking wine, I really don't drink, and it was loud, and I'm leaning up against the wall, and I look to my right, and there's this big guy standing next to me. Sure enough, that was you. So, that's how we actually met, and I just thought that was a funny story. Do you remember that?


[02:07] Aaron: Yeah, I do. I can't remember whose house it was, but it was a huge group of identity folks, folks from all over, Oracle, Microsoft, Google, lots of folks in identity, and I also do not partake of libations at these parties. So, I'm often standing off to the side. So, yeah, I think we had a good conversation, and then I think, the next time we really got to chat was Atlanta. We were at some event, and then you invited me to a couple of cool dinners, and then we know each other through the IANS community as well. So, I think it's been, gosh, a decade. It's been a long time.


[02:46] Matt: It's been a while. Well, thanks for coming on. Hey, you've been in cybersecurity for longer than some of our listeners have been alive, unfortunately, or fortunately. So, let's just talk through how you got started, and the reason I ask is that if I look academically at you, you have a BA in Spanish Linguistics. So, just tell us, how did you get your start?


[03:08] Aaron: I did a two-year service mission in Mexico, and afterwards, loved everything about Spanish. I just said, “I want to be a Spanish professor,” and I love sports, so I was like, “maybe I can be a Spanish professor and a basketball coach or something.” I was 21 years old and starting my life, and I set up this server. I've always been a lover of technology, and I've been dabbling in network news transport protocol, NNTP, Usenet, that sort of stuff. So, I set up my own Usenet server to allow people to collaborate and talk about dialectical differences in Spanish. So, “why do people in Mexico use this word and people in Argentina use this word?” and in 1994, this was a huge deal. I begged a 500-megabyte hard drive from a friend at Novell. That hard drive was worth $15,000. So, it was more than a car. So, I take this 500-megabyte hard drive, put it into this Usenet server, and now we've got all these people from around the world chatting about dialectical differences in Spanish, and it was a big deal at the time. No one else had really done anything like that in the linguistics area, so the head of the Spanish department asked me to put up a screen. It was a scrolling screen of all of those people chatting and stuff, and he could brag about, “hey, look at all these people, 1000s of people, talking about Spanish.”

I did my undergrad at BYU. BYU has a very strict moral code. They call it the Honor Code. So, this was going great for a couple of weeks. Well, one Monday morning, I got a call from the administrative assistant of the dean of the Spanish department, said, “Aaron, you've got a problem. There's something wrong with your system. Get up here right now.” I was like, “Ah, it's eight o'clock in the morning.” Okay. I go up, get there, she's turned off the monitor. I turn on the monitor and there's graphic pornography scrolling on the screen, and let's just say that graphic pornography and BYU, that's not part of their brand. That's not on brand for them. So, I'm like, “oh, man, what's going on here?” Well, the Honor Code office gets involved and they have a very simplistic view of this. It's like “your server, your porn, you're out of here,” and you've got to get out, so I'm like, “Well, I'd really like to finish my degree. I've got time invested here. Getting kicked out of college isn't exactly on my roadmap,” so I actually worked with Cisco to develop some of the first versions of NetFlow monitoring. So, “how can you tell what's actually happening between switches and firewalls?” and that sort of thing. Helped them develop some of the first logging capability between switches and routers, and things like that. There weren't firewalls yet. This was before firewalls existed, really, and I basically was able to put together a case to prove my innocence.

Now, even once I did that, I had to keep defending myself because these guys kept coming back, and it ended up being a couple of dorks from Cal Berkeley who thought it would be funny to put their porn collection on a religious university server, and that became the hand-to-hand combat that I was doing with these guys to keep my servers safe. Now, I regret the fact that I did not meet my friend Marcus Ranum until about five or six years later, but Marcus holds the patent on the first firewall, and his Network Flight Recorder system was a great thing, and eventually, I learned how to harden the system and stop it, and then I moved on to other things, but I essentially had to do InfoSec because I wanted to do technology. I wanted to use technology for its benefit, but I had to protect myself in this crazy new world that was the internet, and I ended up writing a little booklet on how to conduct an internet investigation, and a guy at the FBI got a hold of that manual, and got in touch, said, “Hey, would you help us think about how we do internal investigations?”

So, the '95 timeframe was when I started to work with law enforcement on “how do you think about capturing information?”, and I eventually went to law school, because with the Spanish professor thing, I realized that wasn't a good career path from a monetary earnings perspective. So, I go to law school, I get involved in this crazy case, ancillary to Enron, and I end up dropping out of law school as a result of that experience. So, I was doing white hat work on the side as a result of all the skills that I put together, so I end up doing that white hat work, and doing pen tests, and that sort of thing, and then my lucky break happens in 1998, when Microsoft was looking for someone who spoke Spanish and knew about the internet, so I joined Microsoft Latin America in 1998, and I was the only white guy in that whole team. So, it was the opposite of most people's professional experience. I spoke Spanish and Portuguese every day at work, and it was an awesome way to use my linguistic skills. Worked in some crazy places, Venezuela, and that sort of thing, and at the time, Microsoft wasn't doing security full-time, so I was much more focused on connecting stuff. So, I helped build the largest ISP in Venezuela, the largest ISP in Colombia, and then security was an afterthought until, in 2001, there was a diplomatic incident between a Chinese fighter jet and a US Navy spy plane flying off the coast of China. They end up touching wings, the Chinese jet crashes, the US spy plane lands on a Chinese military base, and this starts a massive diplomatic incident that results in the Code Red incident, which was one of the first nation-state-written malware attacks. It was a virus that targeted Microsoft software. If you had an unpatched version of Internet Information Server, it would take over the server and say, “Hey, code red, Chinese rule, whatever.” At that time, I was responsible for all the internet servers in Latin America. 
So, every Microsoft server that was in Latin America, that was essentially my thing. I had to support it.

We had 2,000 customers that called in, in one day, and our support process was, “first, give me your license code,” so, you had to read the 16 digits off of the back over the bad telephone lines from Latin America, “then, give me an email address and I'm going to email you the security hotfix to fix this.” So, not scalable at all. By that time, I had won a couple of awards at Microsoft due to some of the crazy stuff that I'd done. So, Bill Gates had recognized me for one project that I worked on. I had this cachet within Microsoft, where I could do interesting things, so I went to the Microsoft product team, the Windows Server product team, and said “we've got to do something,” so I helped to envision the first version of Windows Update, and that started a whole career inside of Microsoft of basically doing security startup stuff. So, there it is, in under five minutes: how I went from a Spanish linguistics guy to helping the FBI, to working at Microsoft, to helping solve nation-state hacks.

So, it's just this process of being in the right place at the right time, but I worked my butt off. I'm an autodidact. I've never taken a day of computer science training in my life, so all the stuff I've learned has been through my own research, trial and error, and I didn't sleep much in those first few years because I was just always learning. I'd have to go head-to-head with Microsoft guys who went to MIT to learn how to write C code, and I was going head-to-head with these guys about doing buffer overflows and stuff like that. I can remember, for two weeks, I didn't sleep because I was reverse engineering C compilers, and that was my life for a while, and luckily, I had the aptitude. I had a loving spouse who put up with me, so those are the beginnings.


[10:48] Matt: I love it. I love it. So, you touched on a couple of these things already, but your career path, you mentioned, you did stints at Microsoft, Idaho National Labs, you started a few companies, sold a few, most recently became the portfolio CISO for a large private equity firm. You did touch on some of these things already, but maybe, what were one or two events, or maybe even decisions that you made early on in your career that you think set you up for success?


[11:16] Aaron: When I talked to people about success in cybersecurity, there's a lot of folks right now that are coming through and getting these cybersecurity certificates or bootcamps, and that sort of thing, and I think I tell them is that, “okay, those are great teaser things to get you aware of what's going on, but the key to my success was, when I got involved in technology, I went super, super deep in that tech.” For example, when I was dealing with the internet stuff, I was one of two people within Microsoft who knew how to build a complete ISP with the complete product. Literally walk in, bare racks, bare metal, and within two days, I could build a complete ISP, from OS to DNS, to routing, and every instead, and that took time, no one else, and the reason why I did that is because no one else was taking responsibility. What I saw in my career is, there are a lot of people who are willing to wander and get into their little niche and be happy there, but there are very few people who look at holistic systems. So, I think the key to my success was being able to dedicate the time to dig into entire systems and understand the system to the nth degree, all the way from, like I C decompilers, I was looking at firmware, I was looking at drivers, there was no stone unturned when I went to try to understand something, and as a result of that depth of knowledge, when someone wanted to go head-to-head and challenge me about a cybersecurity principle that I was trying to go after, I could go to them and say “no, this is the way this memory is handled with this network port. You’ve got to do it this way, or it's going to blow up.”

I think, a lot of times, people who go into cybersecurity, especially the newer folks that I'm seeing now, wander in, and they maybe get an analyst job, and they get set behind the blinky-lights thing, looking at alerts and that sort of thing, and very rarely do they have the opportunity to get deep. So, what I tell most people is, “get deep in something, because the domain of cybersecurity is flexible. You can apply cybersecurity to any technology class,” but over the course of my career, the thing that I was proudest of, I guess, one of the things I was blessed with, is that I was able to completely retool every two years. So, every two years, I would completely retool my specialty. For example, I was one of the first people at Microsoft to discover an SMTP buffer overflow in Exchange. So, for two years of my life, it was email, email, email, email. Well, I saw the dawn of databases around the ’99/2000 timeframe. So, I went super, super deep. I was lucky to work with a guy named David Litchfield. Super smart guy who was a consultant outside of Microsoft but was just voracious in the way that he discovered database vulnerabilities, and I learned a ton from watching him and his brother, and his father, and then after databases, I went to mobile. I was lucky to work on the Motorola Q project, one of the first Windows smartphones. I went super, super deep on mobile, and then I went super deep on embedded systems, and that's when I went to go work for the government, because it was that convergence of mobile and embedded systems that attracted me to INL, because they had the national wireless testbed, they had the SCADA testbed, they had the SCADA CERT, and all that stuff. So, that's what attracted me, because I knew I was going to work in a place where I could go much deeper than I could at Microsoft.

It was a very difficult transition, working for the government vs working at Microsoft. It's two completely different cultures, but I made the commitment to do two years of public service, and I'm really glad that I did. We did things like the Aurora generator test, where we actually showed that you could blow stuff up with cyber-attacks. No one believed us when we went and pitched the idea to do the Aurora test. We got laughed out of the Pentagon. There was a two-star general who said, “if I ever want to go kinetic, I'll bring in artillery.” “Well, someday cyber is going to be bad. It can do bad things.” And we showed that that could happen.

So, if I were to tell people how to have a successful career: be willing to retool, move with the industry when it comes to specialties, and spend the extra time outside of work doing that investigation, having a lab, or something you're doing on your own, because if you just do cybersecurity within the domain where you're given the opportunity, you will be frustrated. I see a lot of frustration when people are not happy with their careers in cybersecurity, because they feel like they're just watching blinky lights all the time.


[15:38] Matt: Now, that cycle that you have, it sounds like it's every two to three years, something like that. Is that something you're intentional about or is that just your attention span?


[15:49] Aaron: I'm someone who is always looking at the demand curve, and I'm someone who does not do well with the same thing every day. I have to be doing something new to scratch the itch within me. So, I think it was a combination of my watching the market and my personality of being sort of ADD, or whatever you'd call it, that just has to be doing something new all the time. There have been times it's been terrifying. There have been times I've almost lost my job because I've taken risks, because I leaned in a little too far before I was ready, but most of the time, I was rewarded for my ability to take risks and to lean in and go, “hey, no one else is looking at this. Okay, I know it could be really bad, but I'm going to give it a shot.”


[16:38] Matt: So, this is interesting. You and I are both on faculty with IANS Research, and I was looking at some of the work that you've done recently, and you're quite busy with IANS. I was looking at all the stuff that you've been doing. For those that are listening, if you check the show notes, I'll put a link to IANS. You can check them out. Great organization, but just this month, we're recording this in October of ‘23, you did a session with IANS called the Top 10 Cybersecurity Misconfigurations, where you essentially covered a recent collaboration between the NSA and CISA, the Cybersecurity and Infrastructure Security Agency. What caught my attention was actually the research that you cited from Fortinet. So, could you walk us through what you found, and what was the impact from a cloud security perspective?


[17:28] Aaron: So, the top 10 misconfigurations are easy stuff, like don't use default configurations. Make sure you've got separation of duties, make sure you do network segmentation, keep your stuff patched. The top 10 is not rocket science. It's stuff that people should be doing, but the angle that I took off the Fortinet research was, Fortinet had discovered a series of zero-day remote software compromises against home networking equipment. So, let's take a look at the LastPass incident that happened a while ago. That was a situation where a patient attacker was able to identify a privileged user who had access to the LastPass stuff, had access, specifically, to the AWS key management stack, where their S3 bucket keys were kept. The attacker was patient enough to leverage a vulnerability in the home network of the individual, lie in wait until the browser session opened up, steal the browser session token, play it against the Key Management Service, steal the keys, and now they could decrypt all the stuff they wanted. The root cause of that problem was a home networking vulnerability.

So, what I brought up in the Tech Talk, the tech briefing that I did for IANS a couple of weeks ago, was: how many organizations prescribe to their privileged users, let's say a privileged AWS user, a privileged Azure user, whatever, that they should be running on a segmented network at home? Because they should be. How many prescribe that they should be doing 48-hour patch cycles on their home network? They should be. So, it's that whole aspect of, in a zero-trust world, you really have to go zero-trust to the nth degree. You can't assume that all my Zscaler control is going to help me in a home network. It didn't help, because in several cases, we've seen where the bad guys get a hook on the home router, they then turn the home router into what looks like a Zscaler captive portal, and guess what Zscaler does in a captive portal? It lowers all the defenses. So, all you have to do is get that hook on the home router, trick Zscaler into thinking you're a captive portal, and now you have command and control, exfil, whatever you want to do on that endpoint.
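One generic mitigation for the captive-portal downgrade described above is certificate pinning: never treat a portal as trusted unless its certificate matches a known-good fingerprint. Here is a minimal Python sketch of that idea; it is not a description of how Zscaler is actually implemented, and the certificate bytes below are placeholders, not real certificates.

```python
import hashlib

# Pinned SHA-256 fingerprints of portals we are willing to trust.
# (Placeholder value: a real deployment would pin the DER-encoded
# certificate of the legitimate gateway.)
PINNED_PORTAL_FINGERPRINTS = {
    hashlib.sha256(b"known-good-portal-cert").hexdigest(),
}

def portal_trusted(cert_bytes: bytes) -> bool:
    """Only lower defenses for a portal whose cert fingerprint is pinned."""
    return hashlib.sha256(cert_bytes).hexdigest() in PINNED_PORTAL_FINGERPRINTS

print(portal_trusted(b"known-good-portal-cert"))   # True
print(portal_trusted(b"rogue-home-router-cert"))   # False
```

A compromised home router can imitate a portal's look and behavior, but it cannot present a pinned certificate and complete the handshake without the corresponding private key.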

So, we've got to be better about thinking holistically. For example, I practice what I preach. Here at my home, I have six logically separated networks and one physically separated wireless and wired network. I'm talking to you on the one physically separated network. When I'm doing important work, that's what I'm working on. I've got different slices of IoT networks that I run. I've got an IoT network for the stuff that I absolutely despise, like Sonos, the stuff that's super, super leaky, that doesn't have great integrity. So, I've got an entire IoT network just for that. I've got an IoT network just for video streaming, because that has different performance requirements. So, I think we need to make sure, as technologists, that we apply segmentation, patching, all that stuff, to the home networks that we rely on to do important work.
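Aaron's home layout amounts to a default-deny segmentation policy: every device is pinned to one segment, and cross-segment traffic is denied unless explicitly allowed. A minimal Python sketch of that policy model; the device and segment names are invented for illustration, not taken from the episode.

```python
# Cross-segment flows that are explicitly permitted; everything else is denied.
ALLOWED_FLOWS = {
    ("trusted", "iot_streaming"),   # e.g. casting video from a laptop
}

def assign_segments():
    """Map devices to segments (hypothetical names for illustration)."""
    return {
        "work-laptop": "trusted",        # the physically separated network
        "sonos-speaker": "iot_leaky",    # low-integrity, leaky IoT gear
        "smart-tv": "iot_streaming",     # high-bandwidth streaming IoT
    }

def flow_allowed(src_device, dst_device, segments):
    """Default-deny: only same-segment or explicitly allowed flows pass."""
    src, dst = segments[src_device], segments[dst_device]
    return src == dst or (src, dst) in ALLOWED_FLOWS

segments = assign_segments()
print(flow_allowed("work-laptop", "smart-tv", segments))       # True
print(flow_allowed("work-laptop", "sonos-speaker", segments))  # False
```

The value of the model is that a compromise of a leaky IoT device stays confined to its own segment instead of reaching the network where important work happens.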


[20:42] Matt: One thing I'll double-click on there: I do some of that at home, where I've got an IoT network, and then an everything-else network. How important is physical versus logical separation?


[20:53] Aaron: It's all about how much you spend on the gear. So, the more costly your infrastructure is, the greater the likelihood that logical separation is going to work for you. The lower-cost the stuff is, the less likely logical segmentation is going to work. So, I don't spend a ton of money on my gear. I really like the Netgear Orbi class, and they've got both their personal and their small business stuff. Each one of those will host three logical networks, and I have one that's physically separated on top of that, just a dedicated one, but I think, take a look at some of those more advanced wireless routers, more advanced home systems that have the capability to do automatic reboot. That's a really key thing: not just checking for updates, but applying them and restarting the device upon issuance of an update. The personal version of Orbi doesn't do that, but the Home Office version does, where you can say, “2am, get updates and reboot,” or whatever. That's really critical, I think, because there have been four remotely exploitable vulnerabilities in Netgear Orbi gear over the last year that I was protected from by the automatic-reboot system, but that I had to patch manually on my consumer version. So, spending that extra money for the automatic reboot, I think that can be useful for certain people. Now, within the IANS community, as I've spread this around and socialized it with some IANS customers, we've actually talked about getting towards a managed home network for certain individuals, like CEOs, board members, CFOs, super-high-value human assets, and some IANS customers are starting to do that. They're working with the likes of Cradlepoint and Meraki, thinking about treating the CEO's home network like a branch office, and remotely managing and monitoring it, that sort of thing, because I think that's going to become more and more important.


[23:42] Matt: When I was back at Palo Alto Networks, almost two years ago now, they had a brand called Okyo Garde, and the whole idea was exactly what you're just talking about: help customers extend. They make all these investments in Palo Alto Networks in the enterprise, on campus, but then they go home to this garbage consumer-grade, lowest-common-denominator equipment from Verizon, or whoever their telco is, so give them something that's enterprise-grade. Well, interestingly enough, I just looked a while ago, and I think they shut it down. So, maybe they were too far ahead of the curve when they did it. It's interesting. I know we've mentioned IANS a couple of times, but one of the questions that I would get often as an IANS advisor, so to speak, was customers asking, “how do I secure my 365 tenant?” This was a question I don't get quite as much anymore, but, you know, going back 18 to 24 months, it was super common. I know we're jumping all over the place here. This is what I love about talking to you: you've been in so many different areas that you've got a lot of wisdom in these areas. What are maybe the top three mistakes that you often see customers make? Or maybe, if we think about it another way, what do customers often assume about 365 that is maybe wrong and gets them into trouble?


[25:05] Aaron: First, they don't understand the identity attack surface. Choose an identity provider and go with it. Don't do this mixed-up stuff. If we take a look at what happened with MGM, part of the problem was lack of identity hygiene, and the same holds true regardless of which identity provider. I was just on a call with Wolf Goerlich, another IANS faculty member, today, where we were talking about the importance of having a really clean identity supply chain. So, first, choose a provider and stick with it. Don't do this co-mingling stuff. That's number one. Number two, the Microsoft 365 defaults are set up to sell you more Microsoft stuff. They're not set up to protect you. So, don't use the defaults. It's weird that people are like, “Well, it's in Microsoft's best interest to protect me.” Well, it's in their best interest to protect you as long as you're paying full fee, and they want you to buy the E5 license and all that stuff, and very few do. An E5 license is double what an E3 license is, so most people come in the door on E3, and they don't understand that they're not getting the full protections. CISA has published some really great guides on how to harden the Microsoft 365 platform. It's eight different guides you can go through, almost 300 pages of step-by-step instructions, but you've got to do it. You've got to harden that platform. And then the last thing is, you've got endpoint detection and response, you have EDR. You've got to get CDR, cloud detection and response, and that was the project that I worked on and sold to Vectra: how to get behavioral AI to monitor the complexity in that platform and be able to actually hold Microsoft accountable. Who's watching the watchers? That's what the Vectra project was over the two years before I left there.
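The "don't trust the defaults" advice boils down to auditing a tenant's effective settings against a hardened baseline, which is the pattern CISA's step-by-step guides walk through. A toy Python sketch of that pattern; the setting names here are illustrative placeholders, not real Microsoft 365 configuration keys.

```python
# Hardened baseline in the spirit of CISA's M365 guides (keys are invented).
HARDENED_BASELINE = {
    "legacy_auth_enabled": False,
    "mfa_required_for_admins": True,
    "default_sharing_external": False,
}

def audit_tenant(settings):
    """Return each setting that deviates from the hardened baseline."""
    return {
        key: {"expected": want, "actual": settings.get(key)}
        for key, want in HARDENED_BASELINE.items()
        if settings.get(key) != want
    }

# A tenant left on permissive defaults typically fails several checks:
default_tenant = {"legacy_auth_enabled": True, "mfa_required_for_admins": False}
gaps = audit_tenant(default_tenant)
print(sorted(gaps))
```

The point of the structure is that the audit is repeatable: rerun it after every change, and the gap report tells you exactly what drifted from the baseline.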


[26:55] Matt: AI is a topic that everyone's talking about, and you just mentioned that. You mentioned it, obviously, in the context of within a security product. What are you seeing in terms of real applications? Maybe even talk about what you did at Vectra, because so often what I see is just AI window washing on things. You dig in, it's really not any different. Everyone just wants to throw an AI term on there. What's actually real with AI today, in terms of cybersecurity?


[27:25] Aaron: AI means so many things to so many different people. So, you'll notice that I use the adjective “behavioral AI.” Within the Vectra context, it was, “how do you establish baselines by looking at data that's either so voluminous that a human would get bored, or couldn't do it themselves, or so fast that you'd need so many people?” I think, first, you need to think about “what is the security problem I'm trying to solve?” If I take a look at where innovation is happening right now, I think what Microsoft is doing around Security Copilot within their Sentinel ecosystem is really interesting, because they're the only ones that I've seen that are looking at artificial intelligence as staff augmentation. So, the thinking is: how do you take a really smart person and make them more effective? How do you take a small group of good analysts and amplify their output? A lot of the folks on the vendor side of things just slap AI on things for marketing purposes. So, I would say, whenever you see AI slapped on something, ask the question, “which particular discipline of AI are you talking about? Are you talking about natural language processing? Are you talking about visualization?” There are all these different branches of AI. So, have the conversation with the vendor or the partner who slaps AI on it, to say, “well, what's going on here?”

The way that I try to simplify this, for people who want to learn more about it, is that AI, in its best sense, is essentially automation based upon machine learning and a lot of data, and once you go beyond that simple representation, I generally perceive that I'm wandering into snake-oil territory where someone's trying to sell me a bill of goods. Key questions I'll ask are, “tell me about how your data scientists go about creating models and training them.” Here's a term that I use with folks to really see what they're doing: “tell me about how thermal noise impacts your AI model,” and if you ask that question to somebody and they're like, “thermal noise? What's thermal noise?” Well, in large AI systems, when the GPUs heat up, when the hardware that's being used to run the AI is at different temperatures, you can have different outputs. So, for example, even inside of ChatGPT, part of the non-determinism is that, based upon the thermal loads on the GPUs that are running the GPT model, you can have different outputs. You can have different results. So, I think you develop very piercing, sniper-shot questions to ask people who say, “Oh, we do AI.” “Okay, well, tell me about this,” and all of a sudden, it deconstructs the problem, because they have to lift the veil: are they really doing AI, or are they just slapping on a sticker?
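Separately from the hardware thermal effects Aaron describes, one well-documented knob behind output variation in language models is sampling temperature: the model's logits are divided by a temperature before the softmax, which sharpens or flattens the distribution the next token is drawn from. A self-contained Python sketch:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature => flatter."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
low = softmax(logits, temperature=0.2)   # nearly deterministic
high = softmax(logits, temperature=5.0)  # close to uniform

print(max(low) > 0.98)   # True: top token dominates at low temperature
print(max(high) < 0.45)  # True: distribution flattens at high temperature
```

At low temperature the model almost always picks the top token; at high temperature the choices approach uniform, which is one reason the same prompt can yield different completions run to run.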

So, if I were to tell people how to use AI today, one of the most valuable ways to use what's widely available as branded AI, like ChatGPT or Bard, is as tabletop Dungeon Masters. So, you create a scenario, say, “I am a 3000-person organization that's trying to create a business continuity plan. Give me a scenario for how to test my backup and recovery.” Then, you use it as a dungeon master to say, “hey, here's what the situation is. What are you going to do now?” You engage, and you create an interactive gaming scenario with the GPT model as, essentially, the answer-giver. Because it's random, it's going to help you think about things you haven't thought about before. It's going to take other people's input into the system, and doing that is a very low-risk situation from a data-leakage perspective, because you're just talking about process and what's happening. It's not like you're entering PII into the system, saying, “please analyze this user data for me.” So, that's what I've helped a lot of what I'll call cybersecurity AI beginners look at: using these large language models as scenario drivers, tabletop-exercise creators, that sort of thing.
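The dungeon-master pattern needs no sensitive data at all: the prompt describes only the exercise, never real systems or PII. A minimal Python sketch of building such a prompt; the wording and the helper name are my own illustration, not from the episode.

```python
def build_tabletop_prompt(org_size, scenario):
    """Assemble a tabletop-exercise prompt containing no sensitive data."""
    return (
        f"You are running a tabletop incident-response exercise for a "
        f"{org_size}-person organization. Present the scenario one stage at "
        f"a time, and after each of my answers, escalate realistically. "
        f"Scenario: {scenario}"
    )

prompt = build_tabletop_prompt(
    3000,
    "our backup infrastructure is hit by ransomware during a restore test",
)
print(prompt)
```

The prompt string would then be sent to whichever chat model you use; since it contains only generic process details, the data-leakage risk Aaron mentions stays low.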


[32:01] Matt: I had Caleb Sima on in the past to talk specifically about AI, and it was a general discussion, but we've been talking about this a little bit already. If you were advising a Fortune 100 company, or, for that matter, any large company, on AI and risk, how would you encourage them to start thinking about it? Because I know the default often is, “Hey, we're just going to block it,” and that was the same approach that was taken with Cloud, if you go back five, seven years ago. What's not being talked about?


[32:35] Aaron: First, just like with any complex topic, there is no one right answer. So, you need to create a risk matrix. If you're working on designing policies, procedures, and guidance for artificial intelligence at a large company, the first thing you’ve got to do is create a shades-of-grey chart around risk. For example, the way that I like to do it is around the CIA triad. So, keep it simple. Confidentiality, integrity, availability. Let's talk about confidentiality. Let's say that you've got a team of marketing people who are trying to write a new advertising campaign for an existing product. The product is already in the market. No downside of exposing that. I think that's a really good use. That's a really safe use. You're using it to create human-readable text about something. The example I give is, I hiked across Spain for a month this year. I did the Camino de Santiago across northern Spain, and I was away from my wife for a month, and I missed her, so I actually used ChatGPT to write a Shakespearean love sonnet about how much Aaron loves Holly and how much he misses her on this trip, and it came out in nice, flowing iambic pentameter, with Shakespearean flowery language and all this stuff. I couldn’t have done that. I can do a lot of things. I can't write. But it impressed my wife.


[34:05] Matt: And now, she hears this. You just blew your cover, my friend. Sorry.


[34:08] Aaron: I told her. She knows. I'm a geek. So, using it to do creative things with human language is a great use. Those are safe. Now, let's talk about the other end of the spectrum. Dangerous. Where's the dangerous way to use AI? Especially with these open models that are available, depending on how you license them. Let's focus on availability. The SLAs around this stuff are not great. What's the availability of ChatGPT? What's the availability of Google Bard? What's the availability of these things? I was involved in an IANS customer conversation where someone had built a natural language processor to sit in front of their knowledge base. The information was public, wasn't confidential, but the natural language processing component relied on the large language model. Well, because they were using a free version of the ChatGPT API, it went down one day and broke the whole thing, and they didn't have the service level to support it. And when it comes to integrity, there's all the hallucination that happens, so the thing I would caution is that, if you're going to use this stuff, make sure you have an expert review the output. Jake Williams, and who was the other IANS faculty I was working with on this? Mick Douglas.

Mick Douglas tried to optimize a whole bunch of code that he'd written in C# to bypass EDRs. It was 800 lines of code for an EDR bypass, and he took it to ChatGPT and said, “Hey, please optimize this code,” and it came out with 120 lines of code. It's like, “Oh, frickin’ awesome.” Well, ChatGPT had invented a whole new class in C# that doesn't exist. You can't just make up framework classes; Microsoft gets to create those, not you. So, I think it actually slowed the project down when they did that. You've got to look at it through the lens of “what's the validity of this output?” and run that through the CIA: confidentiality, integrity, and availability. That helps create the matrix.
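The shades-of-grey CIA matrix Aaron describes can be sketched as a simple scoring exercise. This illustrative Python snippet (the use cases and scores are hypothetical, not prescriptive) ranks proposed AI uses by summed confidentiality, integrity, and availability risk:

```python
# Score each proposed AI use case from 1 (low risk) to 3 (high risk)
# on each leg of the CIA triad, then rank by total. The entries below
# echo the episode's examples but the numbers are illustrative only.

USE_CASES = {
    # name: (confidentiality, integrity, availability)
    "marketing copy for a public product": (1, 1, 1),
    "LLM front-end on a public knowledge base": (1, 2, 3),
    "optimize security-critical source code": (2, 3, 1),
    "summarize confidential incident reports": (3, 2, 1),
}

def total_risk(scores: tuple) -> int:
    """Sum the three CIA scores into a single ranking key."""
    return sum(scores)

for name, cia in sorted(USE_CASES.items(), key=lambda kv: total_risk(kv[1])):
    c, i, a = cia
    print(f"total={total_risk(cia)}  C={c} I={i} A={a}  {name}")
```

Even a toy table like this makes the point: the marketing use case sits at the safe end, while anything touching confidential data or depending on an un-SLA'd API climbs the scale.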


[36:14] Matt: So, when you look at the big three cloud service providers, AWS, Azure, Google, they are offering various forms of AI services. Now, the platforms themselves, not necessarily the AI services, all have certs. They have attestations: ISO, SOC 2, you name it, they have it. What's missing, specifically around the AI services?


[36:39] Aaron: I was lucky enough to get invited to Microsoft's campus a month ago, and I got to interact with their AI teams and ask some very specific, probing questions. “Hey, tell me about the logical data protection models for your Azure GPT stuff. You market it by saying your data will never be used to train models, your data will never be leaked, or whatever. Okay, so just show me what the logical controls are that stop that,” and I got a lot of “um, well.” So, I said, “Look, this is not a hard question. You guys did this before. I'm not trying to be unrealistic or too demanding.” When Microsoft went to the market and said, “We want you to put PCI workloads, payment card workloads, inside of Azure,” they had Coalfire do this awesome audit to say, “Here are the logical controls, and this is why you can put your PCI data here,” and Microsoft didn't have to expose their source code, didn't have to expose the intellectual property. They just exposed it to Coalfire, and Coalfire said, “Yeah, these are commercially viable controls.”

I just want to see a Coalfire report about the logical controls for the GPT models. I don't want you to expose your secret sauce. I just want a viable third party who has looked at the controls and agrees with their adequacy. Unfortunately, these things are moving so quickly that the marketing people are far out in front of the technical people right now. The marketing folks are out there trying to make a difference, trying to say, “We do this better.” AWS with their CodeWhisperer, or Google with their Bard stuff. Everyone's out there just marketing the daylights out of this, and the security stuff, as is normal, as we've seen time and time again, is getting dragged along for the ride until we, as consumers, make them prove that the controls work. I think that's where we need to work together as a community, to raise our voices and say, “Hey, where's the third-party auditor report on these controls?” I just want to see the audit report. That's all I want to see. So, I think that's where we need to join voices and create that market demand.


[38:45] Matt: It took a while. I'm thinking back, years ago, even with AWS, when they first launched, it took a while before they started to get their certifications. Now, you go look at their security center, any one of them, and there's nobody that has more certifications and attestations: PCI, HIPAA, etc. But it took time for them to get there. So, from my perspective, I think we are still so early in this game that they're not likely to have those attestations for, probably, nine months, a year. Maybe longer than that.


[39:18] Aaron: The technology is moving so quickly. How do you build a data isolation model for a large language model? I've done a lot of stuff in my career. I've wandered across all sorts of domains, and the power of the large language model is the data sharing. That's why you get such interesting output. So, how do you take the potential of the mass sharing of data through a large language model and apply it to a private model? I've seen it drawn on whiteboards: how you use optimized indices and what they call RAG, retrieval-augmented generation, and other things like that to basically tie this stuff together. I still don't get it. I'm like, “But how does that optimized index interact with the model without leaking data back?” I hopefully don't come across as trying to be difficult. I just want to understand. From my perspective, when I think about applying data protections, that means you've got to have at least a data protection model that sits between those two things, and I don't know how the compute model works for that when you've got to combine a GPU, a massive platform that's doing all this stuff. So, I just want to understand, and I haven't seen any of the platform providers give any sort of guidance about how that actually works.
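For readers unfamiliar with the RAG pattern Aaron references, here is a toy Python sketch of the retrieval step: a private index of document embeddings is searched, and only the best match would be handed to the shared language model. The vectors and document names are hand-made illustrations; the isolation question Aaron raises concerns exactly what sits between this private index and the provider's model:

```python
# Toy retrieval step for RAG (retrieval-augmented generation).
# A private embedding index is searched by cosine similarity; only
# the top-matching document would be passed to the language model.
import math

INDEX = {
    "backup policy": [0.9, 0.1, 0.0],
    "vpn setup guide": [0.1, 0.8, 0.1],
    "incident runbook": [0.2, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec):
    """Return the document whose embedding best matches the query."""
    return max(INDEX, key=lambda doc: cosine(INDEX[doc], query_vec))

# A query embedding close to the "incident runbook" direction:
best = retrieve([0.1, 0.3, 0.95])
print(best)  # -> incident runbook
```

The retrieval itself can stay entirely on private infrastructure; the trust question is what guarantees exist once the retrieved text crosses into the hosted model for generation.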


[40:45] Matt: Well, it sounds like this is something that you may be dealing with hands-on pretty soon, because you recently made the jump from the vendor world back into industry. You took the global CISO role at a large PE firm. What's going to be your process for getting a handle on risk? Anybody can go and look at your LinkedIn profile and see what we're talking about. This is a massive global portfolio. Do you have a 90-day playbook you use? How are you thinking about this?


[41:14] Aaron: So, what's cool, and the reason why I took this job, is because it gets me back towards my law school days. My specialty when I was going to law school was M&A work. Mergers, acquisitions, that sort of thing. So, I wanted to get into a situation where I could take my knowledge of cybersecurity and apply it in a meaningful way to some of the acquired entities. Use cybersecurity as a competitive advantage in their field. So, you actually drive value into the portfolio companies with cybersecurity, because those entities get hardened, and attackers go off and do damage in other places. It's been fun to learn. I'm two weeks in. So, I'm still a newbie, but it's been fun to learn about their process around 100-day plans, business transformation, oversight versus direct responsibility, making sure people have the right resources. So, it's still a work in progress. I've got some ideas about how this is all going to come together, but the thing that I'm focusing on is, what is the minimum viable security plan? Regardless of industry, regardless of jurisdiction, wherever you're at on planet Earth, what's the minimum viable plan? And it's been interesting listening to people. A couple of the folks that I'm working with so far have referenced the Australian Essential Eight. So, okay, cool. Australian Essential Eight. Then you've got the more complex NIST stuff. NIST is the diametric opposite.


[43:04] Matt: I love that about the Australians. They boiled it way down.


[43:06] Aaron: “Here’s 4000 pages. Figure it out.” I think what the Australians nailed is the fact that people don't have unlimited resources. You’ve got to use finite resources to make a difference. My criticism of the Essential Eight is, I'm not sure it's ready for Cloud. It has some really good stuff when you own the software, and you own the network, and you have control. The moment that the shared responsibility model kicks in, I'm not sure how well the Essential Eight works as it stands today. So, maybe we'll have to get to a top ten instead of the Essential Eight, or whatever alliterative name you want to use. The NextGen Nine or whatever. But I think you can still keep it under 12 and make a difference. So, I think that's where I'm going to be spending a lot of time, basically promoting that, and a lot of the organizations we're working with own infrastructure, pipelines, power generation facilities, and stuff like that, and that's another thing that I spent a couple of years on: how do you protect infrastructure? Because that has a different flavor than enterprise IT, and now there's this conjoining of enterprise IT and Operational Technology, industrial networks, in ways that most people aren't prepared for. So, I think there's another slice to that that has to be done, and luckily, I still have friends at FERC and other places that are doing really cool thought leadership around it. How do you take a NERC CIP compliance situation? Which, gosh, look through that documentation. Good way to go to sleep.


[44:50] Matt: That'll put you to sleep. I was going to say that's great bedtime reading.


[44:54] Aaron: But we've got to have better prescriptive guidance there. I basically look at the next five years of my life as essentially going back to the IML days. This is scratching the itch of a lot of stuff that I uncovered when I was working at IML. That convergence of IT, OT, industrial, enterprise. Can we make a difference using automation? Can we make a difference using large-scale data analysis? What can we do?


[45:21] Matt: So, you've done a lot of things over the course of your career, from entrepreneurial work to consulting. Now, you're the portfolio CISO at a large PE firm. How do you stay sharp? What are the things that you listen to, that you read?


[45:39] Aaron: There was this guy named Rob Thomas, who started a firm called Team Cymru back in the day. He's a rabbi, and I really respected Rob. I first met him back in ‘99. So, my morning read is his Dragon news list. The Dragon news list has been around for 25 years now. I love to look through the Dragon news list and see what's there. Being IANS faculty, I am forced to read. In fact, the thing I love most about being IANS faculty is it forces me to stay sharp. I'll handle, on average, eight to ten in-depth security questions for IANS customers about Azure AD, or identity, or whatever, and for every hour of a phone call, I'm usually doing 60 to 90 minutes of research before I get on the call. So, I'm rereading, sharpening up, getting ready to do something. I would say, if I were to tell people how to get and stay sharp: make sure you put yourself in situations where people expect you to be sharp, because that secondary motivator is, “Okay, I’ve got to do this. I’ve got to stand up in front of these people and do this stuff.” Whether that's at a small scale, doing lunch-and-learns with your own team: “Hey, every week, we're going to have someone get up and do a presentation on the worst thing that can happen to us,” where you have to go and essentially do that investigation, and then present it. For people who don't like to do that, at least write things down. Maybe keep a security journal, where you're like, “Hey, that's what I saw.” So, it just depends. People are so different. I'm an extrovert. I love conflict. I love it when people say, “No, that's stupid. That's dumb.” I'm like, “Okay, let's talk about that. Let's push to the end.” So, I'm an extrovert who loves to learn through conflict and through that challenging learning environment. Not everyone does that. Maybe you're an introvert and you don't like conflict.
Well, then at that point, you’ve got to figure out what your learning style is so that you can go and acquire knowledge, and maybe that's through Reddit. I love what Reddit has done for cybersecurity. The online groups around hacking and cybersecurity on Reddit can be weird places, but at the same time, it's an anonymous forum where you can just put stuff out there if you want to ask a question about something. So, I think you need to find out what your style of learning is. People who do well in cybersecurity are lifelong learners. I'm not a spring chicken. I've been around a while, but I take it seriously to learn something new every single day.


[48:32] Matt: I think the key there, for me at least, in what you said, is putting yourself in a position where you are forced to be uncomfortable, and that might look different for somebody else, but find that. Don't just be comfortable in your role. I know that's tempting, especially if someone's been in their role for five, seven, eight years. You're used to doing what you're doing, but you have to ask yourself, “Am I still growing? Am I getting stagnant?” If you put yourself in that position, to be constantly uncomfortable and forced to stretch, that's a way to help yourself continue to grow and learn. It reminds me of a quote from, I think it's Ginni Rometty, who used to be the CEO at IBM. She said, “Growth and comfort do not coexist.”


[49:16] Aaron: And that's so true. I've taken some huge risks in my career, in my life, and I will say that the greatest key to my success is having a loving wife. What has made Aaron Turner great in cybersecurity is having a wife, who's not part of cybersecurity, who I can come home to and have this reset: “Okay, this is my home. This is what I'm doing.” Yet she was accepting of the fact that there were times when I bet our entire financial future on stuff. We could have ended up living in a trailer. Huge risks. So, I think it's about finding the right companion in life, whatever partner you want to have: married, boyfriend, girlfriend, whatever. This career is very, very challenging, and people who do not have a strong partner suffer. I look at the people that I've lost. I've suffered through the suicide of several folks in our community, and if I look at those individuals, the commonality among them is that they did not have a good life partner, someone who would support them when times got really bad.

I'm dealing with an incident right now. I got invited to do some investigations, just as an independent consultant, and it involves some really, really significant consequences for the company that's involved, and there's an individual working on this team who does not have a good life partner, and I worry about them. I worry about the stress. I worry about what the outcome of this thing is going to do to them, personally. So, there's my plug: if you want to be successful in this, find a good life partner, find someone who will wrap their arms around you, take you no matter what, and help you disconnect when the time is right. It's going to take a special person. I look at my wife. I was away from home more than 80% of the time for more than five years. Just crazy. She was essentially a single mom raising my kids. I count myself grateful that she put up with that. We're still together. It hasn't been easy, but I'm so grateful that I made that investment, because I think that's key to people's mental health.


[51:49] Matt: And this has been a fascinating conversation. Thank you so much for joining us.


[51:54] Aaron: Glad to do it. Always good to talk with good friends.


Thank you for joining us for today's episode. To find out more, please visit us at