Cloud Security Today
The Cloud Security Today podcast features expert commentary and personal stories on the “how” side of cloud security. This is not a news program but rather a podcast that focuses on the practical side of launching a cloud security program, implementing DevSecOps, and understanding the threats most impacting the cloud today.
Microsoft 365 incident response
Purav Desai is a Microsoft 365 incident responder at a large financial institution (name withheld to protect the innocent). He shares his journey and expertise in the field. He explains how his early exposure to Microsoft security solutions and their constant innovation led him to specialize in 365 security and incident response. He discusses the importance of mentors and influential figures in his career, highlighting the lessons he learned from them. He then dives into his popular project, Deciphering UAL (Unified Audit Logs), which aims to make sense of the complex logs in Microsoft 365.
Purav shares an incident response scenario involving a banking Trojan and how he used telemetry and logging to investigate and remediate the issue. He concludes by discussing effective threat detection methods in Microsoft 365, including threat hunting with KQL and leveraging Zero-Hour Auto-Purge (ZAP) to prevent the spread of attacks.
In our conversation, we dive into:
- How specializing in Microsoft 365 security and incident response can be a wise choice due to the constant innovation and market demand for Microsoft solutions.
- How having mentors and influential figures in your career can provide valuable guidance and inspire you to push yourself and try new things.
- How his personal project, Deciphering UAL (Unified Audit Logs), aims to make sense of the complex logs in Microsoft 365, providing insights for digital forensics and incident response.
- How proper licensing and logging configuration are crucial for effective incident response.
- How native tools like Purview Audit and eDiscovery provide valuable insights for forensic analysis.
Simplify cloud security with Prisma Cloud, the Code to Cloud platform powered by Precision AI.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Matt (00:00.942)
Welcome to the show.
Purav (00:03.084)
Thanks for having me Matt, hope you're well today.
Matt (00:06.222)
I'm always excited on podcast recording days, especially when I have great guests like you. So let's jump right into it.
Purav (00:13.644)
Sure, let's get started.
Matt (00:15.278)
So you are, you're known as a Microsoft 365 security specialist. And I'm just curious, there's not many folks that are specialists in this area. What, what inspired you to specialize in 365 security and incident response?
Purav (00:33.324)
Yeah, great question. I mean, I think it all started when I was a SOC analyst. So I was a SOC analyst back in 2019 for one of the largest fashion retailers in the world. And they were a close Microsoft design partner. So I got exposed to the whole security stack very early. We're talking Defender ATP, which is now Defender for Endpoint. We're talking Office 365 ATP, which is now Defender for Office. And just seeing the amount of telemetry we got across the solutions, it really kind of...
stood out to me, right? It was my first SOC analyst role and my first cyber role. So I didn't know what else to expect. So I was kind of just taking it on and embracing it. But as I started to get more confident in it, you know, I started to realize things like the Gartner Magic Quadrant. So for people that aren't familiar, the Gartner Magic Quadrant has...
two axes in its chart, right? So one is ability to execute and one is completeness of vision. And Microsoft have constantly been leaders in that space. So when I was picking my roles after the SOC analyst role, I was like, I want to go more for Microsoft security because they're showing that they're innovating in the industry and staying at the cutting edge, you know, in terms of the market demands and things as well. And to kind of further that, right, one thing I've noticed and some of your audience may be aware:
Microsoft have made a lot of strategic acquisitions, particularly of Israeli-based startups. I'll give you some names. Adallom. Adallom was an Israeli-based startup, which has now basically formed into Defender for Cloud Apps, or some people may fondly remember it as Microsoft Cloud App Security, which is a CASB. A CASB is a Cloud Access Security Broker. It's a way you monitor your sessions to SaaS apps. You can then enforce...
Matt (01:53.454)
Mm-hmm.
Matt (02:06.158)
yeah.
Purav (02:16.62)
controls, et cetera. Another one most recently is they acquired CyberX. And CyberX was an agentless OT solution, so operational technology, and that's now become Defender for IoT. So you can kind of see how they're strategically acquiring those companies and then building their portfolio. But I think ultimately the market demands did kind of go towards people going towards Microsoft. A lot of companies had that E5 license, but they just didn't realize security was within that.
And, you know, I was in various consultancies after my SOC analyst gig. And our job was to basically unlock that potential in those clients to say, hey, you have E5, you know, maybe you don't want to pay CrowdStrike. Maybe you don't want to pay Proofpoint or, you know, other email security kind of solutions, but you can actually go all in on Microsoft. And I think, you know, ultimately when I was a SOC analyst, I was a tier one. So I was dealing with alerts and, you know, triaging them a little bit, phishing emails, things like that.
But I never got the kind of expertise or understanding to really go deep, like forensically. And so I think coming to incident response now allows me to get back into that SOC analyst mindset, but a lot deeper, right? Really go into the logs and drill in. So yeah, kind of all that, you know, inspired me to specialize. And I think the defining moment, right, was probably the SOC analyst role. Because as I mentioned, after that role, I strategically picked all the different roles I chose.
Matt (03:19.086)
Hmm.
Purav (03:42.7)
If they didn't have Microsoft security, I was like, Adios, like, I'm not trying to learn Splunk and CrowdStrike from scratch, right? I want to bolster what I've learned. So yeah, that's, that's a bit about my journey and why I picked this specialty.
Matt (03:55.278)
Seems like a wise choice. So have there been any mentors or influential figures in your career who have significantly impacted your journey? And if there are, I'd love to hear maybe just any lessons that you've learned from them that you think would be relevant for our audience.
Purav (03:57.396)
Yeah, five years later. Sorry.
Purav (04:14.668)
Yeah, great question. I mean, so some of your audience may remember in 2017, we had the WannaCry hack, right? It crippled the NHS in the UK, but also worldwide other companies and industries. And I think the one that stood out to me from that was Marcus Hutchins, right? So for those unfamiliar, Marcus Hutchins was the one that found the kill switch to WannaCry version one, which basically had some code to say, if this domain exists, you know, shut down. And so, him being curious, I think maybe he was 16, 17 at the time.
Matt (04:41.038)
Remember.
Purav (04:44.716)
He was like, you know, well, what would happen if I registered this domain? So he did it and it killed the version one. Now, naturally the threat actors came back with version two, which didn't have that kind of a kill switch, right? But I think the key lesson from there was just don't be afraid to try new things. And I think that stood out to me going forward. It's like, if I have an idea, I'm going to try and push it as best I can. Naturally we'll run into blockers and hurdles, but you know, how far can we go with just pushing ourselves and believing in ourselves? So that was definitely one.
And then I think another one is, in my current role, actually one of the directors, they really pushed me to understand the logs, right? And I won't name them, but people that know them will know who I'm talking about when I say pivot, pivot, pivot, and live in the data. Right. So they really pushed me and encouraged me to understand what the fields in the logs mean. Like what are these IDs and what are these GUIDs? Which basically inspired the Deciphering UAL series, which I'm sure we'll come onto shortly.
But the key lesson from that is like, you know, live in the data and try and understand your scope and story from that because forensics wise, you need to be able to piece everything together. So I think those are the main two kind of mentors or influential figures in my career. Marcus Hutchins for sure, because he inspired me to actually get into cyber. When I saw that he found that kill switch initially, I was like, wow, this is so cool to actually understand a piece of malware and then decode it and, you know, kill it effectively.
And reverse engineering, which is basically what that is at a low level, is kind of a long-term aspiration of mine. It's a very specialized skill. You need to understand assembly code and things like this, which is very advanced. But on the side, yeah, I'm trying to keep up with that and learn as much as I can. And then, yeah, like I said, the credit to my director for pushing me to understand the logs, which ultimately led to the forensic series that you know and love. So, yeah.
Matt (06:42.254)
So let's just jump to it. You mentioned your deciphering UAL project and it's gained actually a lot of traction in the cybersecurity community. Tell us, first of all, maybe explain what UAL is. Maybe some of our audience aren't familiar with the unified audit logs. Tell us what's confusing about UAL, what led you to create the GitHub repo to decipher it, and then maybe talk about this all from a digital forensics and incident response perspective.
Purav (07:08.62)
Yeah, sure. So UAL is the unified audit log, right? It's basically a place where all your M365 kind of activities are logged. So think about sending an email, uploading a file, downloading a file, accessing a SharePoint site, adding someone to a Teams, removing someone from a role in Azure AD, all those kind of like activities would be logged in the UAL, right? And threat actors can...
can do malicious actions in your M365 tenant, right? It is a juicy target. So if they were to compromise one of your users and then perform some bad actions, how would you understand that? You'd have to look at the logs, right? And I think initially I was looking at kind of getting permissions to the unified audit log, right? Which at the time needed a permission in Exchange Online where you needed to be added to a role group with a particular role, that role being view -only audit logs.
Matt (07:39.982)
Hmm.
Purav (08:03.404)
And so when I did that for myself and I looked at the audit event for this, I was like, this makes no sense, because I'm adding to a role group, but that's just this like random string GUID, right? What does this mean? You know, nobody knows, right? And I was like, well, if I can't understand this, like how can others understand it? And, you know, what can I do to kind of learn this? So I reached out to a few people in the community, and there's various LinkedIn groups about Microsoft security, and no one really knew.
So I was like, okay, I'm going to try and take this on and see what I can do. And then very quickly I figured out that with Exchange PowerShell, all those GUIDs on the backend are stored, you know, statically somewhere within the tenant. And you can query that with PowerShell. And when you query that with PowerShell, you not only get the GUID, but you can get like the friendly name, you can get the description, et cetera. And slowly, slowly I started to piece things together to say, okay, well, if this is the GUID for the role group, you can use Exchange Online PowerShell with this particular permission
to basically decipher it and understand that event. And because I knew that I was struggling to find that content in the community, I was like, this could actually become a niche for me, right? And my specialty of sorts. Some people do know that I'm trying to go for the sort of MVP award, which is Microsoft Most Valuable Professional. And with that, you kind of need a niche or a specialty that you're contributing back to the community. And so I thought this forensics piece could be really good.
But in terms of talking about this from a deeper perspective, right? So what I've done so far in the series is there's eight parts, and each of those eight parts, apart from the first part, which just shows you how to get access to the audit log, the other seven parts walk through operations, right? And what is an operation? It's basically an activity that occurs in your tenant. So, you know, let's say Add role group member. From the name, you can kind of work out what it means. Like it's not rocket science. You're basically adding someone to that role group. But there could be some more mysterious operations like,
you know, Add service principal. And if you know what a service principal is, you'll kind of understand it, but what if you don't, right? So my kind of GitHub series breaks down those operations in terms of: what does the audit event look like if the operation was to occur? What fields are useful to you from a forensics perspective? And the fields that have like those GUIDs and IDs, how do you decipher them? Or at least try and decode them and understand them to a degree. So yeah, that's a bit about the Deciphering UAL series and...
Purav (10:27.404)
I've got a lot more planned and other operations, but so far I've focused on Exchange Online, Exchange Admin specifically, and a little bit of Teams and Defender as well in terms of like policy configurations.
Matt (10:40.238)
Now I'm going to take a little bit of a guess here. You talked a little bit about what kind of led you to creating this series. Without naming specific organizations, is there a memorable incident response scenario that you've handled in a 365 environment? And, you know, was that a part of you trying to scramble to understand things? Was any of that related to the Deciphering UAL? Tell us the story. People always love to hear breach stories, even if they're anonymous.
Purav (11:10.54)
Yeah, I can share an anonymous incident story. I can say the forensics piece didn't come into it too much, but maybe I'll think of a way to weave it in. But what happened was we initially got an alert on Defender for Endpoint, right, which is the Microsoft EDR solution, for suspicious PowerShell being run. So we looked at it and we realized very quickly it was Base64 encoded, right? So we decoded that, you know, just using like base64decode.org or something like that,
or CyberChef as well. And from that, you could kind of understand that there were some sleep commands. There was like HKCU, which is HK current user within the registry. But the way it was written was very unique. It wasn't just like HKLM or HKCU\Software\whatever the path is. It was like, quote, HK, quote, plus, quote, CU, plus. So they were kind of obfuscating that path.
Matt (11:54.254)
Hmm.
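For readers who want to follow along with the decoding step Purav describes, here is a minimal sketch. PowerShell's -EncodedCommand takes Base64 over UTF-16LE text, and once decoded, string concatenation like "HK" + "CU" stands out as registry-path obfuscation. The payload below is a harmless placeholder, not the actual Gootloader script.

```python
import base64
import re

# Placeholder standing in for the blob seen in the Defender for Endpoint alert;
# the real command line would look like: powershell.exe -EncodedCommand <blob>
encoded = base64.b64encode('Start-Sleep -Seconds 30; $k = "HK"+"CU"'.encode("utf-16-le")).decode()

# PowerShell's -EncodedCommand is Base64 over UTF-16LE, so decode accordingly
decoded = base64.b64decode(encoded).decode("utf-16-le")
print(decoded)

# Flag the quoted-string concatenation trick used to obfuscate registry hive paths
if re.search(r'"HK"\s*\+\s*"(CU|LM)"', decoded):
    print("Obfuscated registry hive reference detected")
```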
Purav (12:10.476)
Right. So we later found out this was Gootloader, but we didn't exactly know that at the time. So what we did is we reached out to our threat intel team for kind of understanding attribution, right? So we gave them the PowerShell, the decoded version, and they were able to attribute it to Gootloader, which is a banking Trojan. But this did occur in a previous engagement, just to clarify. And then basically once we knew it was Gootloader, we were like, okay, well, what is this actually doing?
So there's a great resource for DFIR, you know, people that are interested in DFIR, called thedfirreport.com. They do really good reports covering the whole life cycle. And when we looked at their Gootloader one, it mentioned a few things. For example, it mentioned that sleep command that I mentioned in the PowerShell. It also mentioned the HKCU. And what we learned was it was trying to plant a Cobalt Strike beacon, right? Which then basically is going to reach out to a C2 and say, hey, I've got a foothold here, you know, take over,
do more malicious commands, et cetera. So when we realized all that, we basically told the client, like, you know, you're vulnerable to this. You know, this is an attack actively ongoing on one of the hosts that you have. But for those that have dealt with consultancy engagements, especially for EDR, you may not have like the full autonomy to isolate machines or run AV scans or things like this. So our job and our remit was very much to just inform the client
and advise them. So we quickly got on a call with them, and an email with more steps followed afterwards on how to isolate and how to do an AV scan. So they did all that. The user still had the machine though. So you can maybe see where this is going. So the user still had the machine, they isolate the machine, but when you isolate the machine, you can still get in using cached credentials,
because the machine doesn't talk back to then just like kick you out, right? It wasn't like a remote wipe over Intune or whatever. It was just an isolate machine. So the user logs in the next day, okay? And we get another alert, the same alert. So we're like, this is a bit weird. Like, why is this happening if the machine's been isolated and the AV scan was run, right? So it turns out that this was set off by a scheduled task. So this is how the malware was establishing persistence, right? And a scheduled task, for...
Matt (14:05.294)
Purav (14:31.372)
people that aren't aware, it's a way to basically run an action or a trigger on a specific schedule, and you can do that within the Windows Task Scheduler basically. So again, we didn't fully know that at the time. We just saw the second alert and we were a bit confused. So then we looked at the DFIR report again, and that mentioned that scheduled tasks could be used for persistence. Now Defender for Endpoint has this great thing called the investigation package. So you can download the investigation package from the host,
Matt (14:53.87)
Hmm.
Purav (15:00.62)
even if it's isolated, because the key thing to remember is when the machine's isolated, it's still talking back to Defender for Endpoint, right? And there is, in fact, a second level of isolation called a partial isolation, which will basically allow you to use Teams, Outlook, et cetera. But for this case, we told the client just to do a full isolation. So it's still talking back to Defender for Endpoint. So we were able to download that investigation package. And when we looked at the scheduled tasks, there was one scheduled task that stood out to us,
Matt (15:11.406)
Hmm.
Purav (15:27.724)
because the name of that scheduled task was the name of that user. So that was clearly the attacker trying to evade detection, right? And when we looked at the action associated to that scheduled task, it had that encoded PowerShell. So we're like, right, this is what's happening. The user's logging in whatever time, right? It's triggering that scheduled task, which is then triggering the encoded PowerShell and is then causing our alert. So, okay, cool. So we figured that out. So what next? Well, we wanted to know when that scheduled task was first created, because then that would show...
Matt (15:31.15)
Hmm.
Purav (15:56.62)
how long the machine had been compromised, right? What I didn't say at the start of the story is that this machine was only onboarded to Defender for Endpoint a day ago. So we only had a day's worth of logs, right? Now, thankfully the device was on CrowdStrike prior to that and the client had access to that. So it was almost like a migration piece, right? Consultancies will kind of migrate clients from one tool to another, like one vendor to another. So with CrowdStrike, they were able to find that the scheduled task was planted eight days ago.
Matt (16:04.878)
Wow.
Matt (16:26.414)
Hmm.
Purav (16:26.572)
And also going back to the DFIR report revealed that the initial access is from a JavaScript .jar file, right? Typically something called like payment remittance .jar, something like that. And they were able to find that file on that host eight days ago in a zip file, which was then downloaded by the user at that time, maybe through a phishing email or something like that, right? And then we were able to close the kind of case and hand it off to them to deal with. We told them, you know, swap the user out
immediately, like swap the machine out, you know, get them in the office or send them a new machine remotely. I think this was during COVID times. So all of that, right. But one thing that was quite frustrating for a client, for us trying to onboard them to Defender for Endpoint, is that we didn't have enough logs to piece it all together, right? Had the device been on Defender for Endpoint like two weeks ago, we would have had the full timeline. We would have known when the scheduled task was created, and maybe proactively Defender could have done some things to block it as well.
But all I can say is, you know, CrowdStrike was there. It clearly didn't flag any alerts to the client as far as we know, because they never mentioned it or anything. So Defender came in clutch there, but yeah, maybe you could consider that like a hybrid win, right? Because both solutions ultimately led to understanding the scope of the event. And then I believe after that, they were able to swap it out for the user and get the old machine rebuilt, thus containing and also remediating the incident.
But yeah, from a forensics perspective, I'm not sure how much Deciphering UAL could help there. I suppose the only thing you could do is the Defender for Endpoint actions. So running the AV scan and isolating the device, I think, are now logged in the UAL. So that could serve as validation to say, did they actually isolate the machine? Because you could tell someone to isolate the machine, but how do you actually verify that they've done that? So UAL could potentially help there. But other than that, it was just, yeah,
discovery of an alert that really CrowdStrike should have picked up and then we educated the client as best we could and they got it remediated. Yeah, that was my first kind of full incident response kind of scenario.
Matt (18:37.166)
I love that. I love that. So I'm curious, when you think about that, right, you kind of took us through it from the beginning all the way almost to the end. What were some of the key lessons that you took away from that event that you are now able to use in roles that came after it, and maybe even in some of the things you're doing today?
Purav (18:58.796)
Yeah, good question. I mean, I think the biggest one is: make sure that you have the telemetry that you need. So for example, if we knew that the device was just onboarded to Defender for Endpoint, maybe we could have also had some read-only access to CrowdStrike to actually kind of look into it and piece it together ourselves. One other thing that came after that was, because the client realized that, you know, we didn't have abilities to isolate machines, and we were 24/7,
so what would have happened if it happened at 3 a.m. and we can't contact the client? So they gave us a bit more autonomy to isolate, you know, certain devices after that, as a result of that, basically. So in the roles following, you know, when I dealt with Defender for Endpoint, it was almost like, we want to establish at least some devices we can isolate, because otherwise it's just going to be an email and you may or may not see it. And even if we call you and it's out of hours, you know, do you respond? Are you near your laptop to then do it? You know, kind of,
trust us with that, but naturally don't give us the keys to the kingdom in terms of isolation. But, you know, how do we work on maybe 20 machines we can isolate and then kind of build up over time? Or maybe we have some logic app to kind of automate that, which then kicks off some approval workflow that you just need to hit yes on your phone and then it will go ahead and isolate the machine. So those two things. But yeah, I mean, having the telemetry and the logging, I think is critical as well. So.
at any future engagements that I dealt with, it was like, you know, are the logs going to a SIEM? Is a SOC actually dealing with it? You know, what is your escalation process from tier one to tier two to tier three? And what does that look like? Because you don't want an alert to go to SOC level one, they don't know what to do with it, so they just panic, and then, you know, it just sits there and gets worse. So have those operational workflows designed and have the tier two and tier three people trained on how to deal with them. I think those were the key lessons here.
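Where a client does grant that limited isolation autonomy, the approval-gated automation Purav describes typically ends in a call like the one sketched below, against the Defender for Endpoint machine-actions API. This is a minimal sketch, assuming you already hold an Azure AD app token with the Machine.Isolate permission and know the device ID; the example values are placeholders.

```python
import requests

def isolate_machine(token: str, machine_id: str, comment: str) -> dict:
    """Request isolation of a device via the Defender for Endpoint API."""
    url = f"https://api.securitycenter.microsoft.com/api/machines/{machine_id}/isolate"
    body = {
        "Comment": comment,        # audit trail for why the device was isolated
        "IsolationType": "Full",   # "Selective" is the partial isolation that keeps Teams/Outlook working
    }
    resp = requests.post(url, json=body,
                         headers={"Authorization": f"Bearer {token}"}, timeout=30)
    resp.raise_for_status()
    return resp.json()             # a machine action object you can poll for status

# Example call once an approver has said yes on their phone:
# isolate_machine(token, "<machine-id>", "Gootloader alert - approved by client out of hours")
```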
Matt (20:57.006)
So what threat detection methods do you find most effective in 365?
Purav (21:04.812)
Threat detection methods. So I would say proactively, right, if you have, for example, threat intel about a hash or a particular TTP or a behavior, you can threat hunt for that, right, using KQL, which is Kusto Query Language, and advanced hunting. So I'll give you an example from one that I saw quite a few engagements ago with PuTTY, right? So for those unaware of PuTTY, right, it's a way to...
remote into another server, typically a Linux server, I think port 22 over SSH. But the thing with PuTTY is, if you're not careful, and maybe this is a default behavior, but you can actually configure this in the settings, is it can transmit your username and password in clear text. And we could see that with KQL, because I think there's a DeviceProcessEvents table and you can then get the process command line, and the process command line would show something like putty.exe
-u, you know, whatever the username is, -p and whatever the password is. So we were able to threat hunt for that because I think it was linked to a, I can't remember what threat actor or what profile it was, but some sort of threat intel that mentioned that PuTTY can be misused in this way. So we got curious to see, well, how is that working in that client environment? And we were able to find some passwords that were actually that user's password. And, you know, we told the client and they got that remediated. And I think they then enforced a way to
encode or obfuscate the passwords. But by default, PuTTY does send that in clear text. So that's one way to threat hunt, right? Another one could be, like, if an attacker gets into your environment, they're going to look for low-hanging fruit. So stuff like password.docx, password.txt, password.xlsx on the desktop. And with Kusto Query Language, you can hunt for that because there's a DeviceFileEvents table. So you could say DeviceFileEvents where the file path
contains Desktop, or you can map the whole path, like C:\Users\<name>\Desktop. And then you could say where the file name contains password. And then it will kind of list out all those potential files that you may want to feed back to those users or to your CISO or whoever it is to say, hey, we've found these. They could be dummy documents; you don't know if there are actually passwords there. But if an attacker gets on a machine and they see passwords.txt,
Purav (23:27.532)
you can bet your bottom dollar they're going to click on it. So those are two, I think, examples of that one threat detection method. I think in terms of a way to prevent attacks from spreading more, I think ZAP is a really good one. So ZAP is zero-hour auto-purge. So the way that works is, let's say a user receives a phishing email and they report it. So there's two methods when you report a phishing email from Outlook
natively. So let's say you don't have any third party kind of email security, you're just using Defender for Office. When you report it in Outlook, it can either go to a kind of SOC mailbox for review and then can go to Microsoft or the users can report it to Microsoft directly. Now, naturally you'd prefer the first option because you know, you never know, there could be company proprietary data there and then, you know, if a user reports that to Microsoft directly, it's bypassed all your controls and your vetting and just gone straight to Microsoft. So.
I would say, you know, set up that first option. But what I saw in a previous one was a user reported it to us. We looked at it, we reported it to Microsoft. And because of ZAP, right, the zero-hour auto-purge, it started purging those emails from multiple mailboxes, not just that user's. So what that allows you to do is then proactively prevent those other users from clicking on that email, right? Because if they're on lunch break or they're off that day, et cetera,
you know, they may not see that email. And then when they log in the next day, it's been purged from their mailbox completely anyway. So that completely neutralizes that attack vector. But yeah, when that user reported it to us, we were a bit dubious about it. I can't remember the exact details because it was quite a while ago, but we thought that, hey, this is worth feeding back to Microsoft. Like, why did this even come through in the first place? And Microsoft, within a few hours, were able to say, yes, this is suspicious.
And then we could see that because we had ZAP enabled, it was starting to purge through the various mailboxes. So that definitely helped to kind of prevent a major incident. So those two, I think: like, you know, proactively, you could have KQL queries to kind of threat hunt for certain behavior that you're looking for. And then having ZAP configured in Exchange Online can definitely start helping with reducing phishing emails.
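For reference, the password-file hunt Purav walks through can be expressed roughly as the KQL below, here wrapped in a sketch that runs it through the Microsoft Graph advanced hunting endpoint. It assumes an app token with the ThreatHunting.Read.All permission, and the projected columns are illustrative rather than prescriptive.

```python
import requests

# Rough KQL version of the hunt described above: files named like password.* sitting on desktops
KQL = """
DeviceFileEvents
| where FolderPath contains "Desktop"
| where FileName contains "password"
| project Timestamp, DeviceName, FolderPath, FileName, InitiatingProcessAccountName
"""

def run_hunt(token: str) -> list:
    resp = requests.post(
        "https://graph.microsoft.com/v1.0/security/runHuntingQuery",
        json={"Query": KQL},
        headers={"Authorization": f"Bearer {token}"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

# A similar query over DeviceProcessEvents, filtering ProcessCommandLine for putty.exe,
# would cover the cleartext-credential example from the same answer.
```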
Matt (25:43.822)
So I love that those are two really practical things that people can start doing. So you talked about threat hunting, which is the proactive side of it, right? You're trying to find something hopefully before it happens, you're looking for indicators. Now on the other side of it is, okay, there's been an incident. Now you have to do incident response. So from your experience, what are maybe some best practices for incident response in 365 that you would recommend?
Purav (26:10.38)
Yeah. So firstly, I would say ensure you have the right licensing and the logging configured, right? So, to keep it brief, if you have an E3 license, you're only getting 90 days' worth of data, right? So if you have an incident and you need logs from longer than that, because maybe you have some suspicion that this could have actually started previously, you're kind of crap out of luck, basically, right? Because you only have 90 days of logs. So it's worth investing in an E5 license to get more logging.
But an E5 license can be quite expensive if you have a large footprint, right? So there is an E5 Compliance add-on license, which isn't as expensive, but that will give you a minimum of a year's worth of log data. And you can actually configure up to 10 years, right? Depending on your risk appetite, and maybe you have some regulatory requirements that you need to meet that say keep it for five years or seven years, but you can configure it up to 10 years. And then in terms of like the actual logging configuration, right? So make sure
those logs are going to a SIEM. There's an Office 365 Management API, which will then send the unified audit log data specifically to your SIEM. So, you know, make sure you're doing that, because maybe the SOC analysts in your team won't actually have access to the native audit portal, which is called Purview Audit, but they would have access to a SIEM. So the logs should be in the SIEM, because that's where the alerts would go, and that would allow them to threat hunt themselves or, you know, query the UAL themselves through the logs in the SIEM
to better understand the activity. And then the other one is like, don't leave the default configuration, right? A lot of companies, maybe they're quite intimidated by Microsoft setup and tooling and configurations, but spend some time looking into it. Like one thing I've seen in a previous organization is like Defender for Cloud Apps, right? We touched on this earlier. So Defender for Cloud Apps is your CASB solution looking for SaaS apps. If you think about it, Microsoft 365 is a SaaS app. So you can get...
Defender for Cloud Apps connected to Microsoft 365. In fact, it is connected by default, but I use the term connected loosely. The reason for that is because it's not fully connected. The way it works is it's connected in terms of Azure AD sign-ins, I believe, but there's four other things you can configure. One of them is Office 365 activities. If you don't have Office 365 activities configured, your UAL is not going into Defender for Cloud Apps.
Purav (28:32.652)
And, you know, I'm sure maybe you have this question at the top of your head: well, why would I want the UAL in Defender for Cloud Apps if I can consume it through the SIEM? It's because, as we kind of mentioned earlier, the UAL itself has those GUIDs and IDs, but Defender for Cloud Apps will do a good job of correlating all that data in a visually representable way. So the good example that I like to give is if you have a file, right? So I upload a file today, I modify it tomorrow, I rename it the day after,
I share it with you, Matt, let's say the day after that, right? And two weeks later, I want to know, like, what was the life cycle of the file, right? Defender for Cloud Apps will do a good job of giving you that kind of life cycle timeline view in an easy-to-understand, human-readable kind of language, right? Purav uploaded the file, Purav renamed the file, Purav modified the file, Purav shared this with Matt. All of them have timestamps. So it's basically using the same UAL data, because I think that's where people get confused. Like, how is it able to do this when the UAL can't?
Matt (29:06.478)
Hmm.
Purav (29:30.06)
It's using the same UAL data, but it's kind of enriching it on the backend, because it knows what that particular file GUID is, and it can then see the operations with that same file GUID and piece them together. So those are two things that I would say best-practice wise: like, make sure you've got your licensing and your logging configured, and don't just leave the default config. Like, if you're watching now and you think you don't have the benefit of Defender for Cloud Apps ingesting Office 365, check. And if it's not, you know, get that enabled, and you know, if that requires change management and,
you know, policies and things to go through in your organization, go through it, because it's going to be empowering you from an incident response perspective. You don't want to be in there just looking at the UAL. Be in there with a stronger view of Defender for Cloud Apps, because it's much easier to correlate the data than going through the GUIDs and IDs yourself, even with my Deciphering UAL series. So.
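The Office 365 Management Activity API mentioned above is the usual way to pull UAL data toward a SIEM: you enable a subscription per content type, then page through content blobs. A minimal sketch, assuming an app token issued for the https://manage.office.com resource; tenant ID and token acquisition are left out, and the Audit.General feed is just one of several content types.

```python
import requests

BASE = "https://manage.office.com/api/v1.0/{tenant}/activity/feed"

def pull_audit_general(token: str, tenant_id: str) -> list:
    headers = {"Authorization": f"Bearer {token}"}
    base = BASE.format(tenant=tenant_id)

    # One-time: enable the Audit.General feed (other feeds cover Exchange, SharePoint, Azure AD);
    # this call simply errors harmlessly if the subscription already exists
    requests.post(f"{base}/subscriptions/start?contentType=Audit.General",
                  headers=headers, timeout=30)

    # List available content blobs, then fetch each blob of UAL events
    events = []
    listing = requests.get(f"{base}/subscriptions/content?contentType=Audit.General",
                           headers=headers, timeout=30)
    listing.raise_for_status()
    for blob in listing.json():
        content = requests.get(blob["contentUri"], headers=headers, timeout=30)
        content.raise_for_status()
        events.extend(content.json())  # each item is a UAL record ready to forward to the SIEM
    return events
```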
Matt (30:21.902)
One of the things that I often hear from some of the clients that I work with is that how they do incident response in the cloud and on-premise is different. And sometimes it's even a different team that may do it. But I've worked with a lot of clients who are trying to bring that into one practice. So I'm curious, specific to 365, you know, DFIR type of stuff, how would you recommend going about integrating the traditional on-prem program
with something like that? And maybe think in terms of not, you know, what should be done, but like, how do you actually go about and do that?
Purav (31:02.796)
Yeah, it's a good question. I mean, so if we think to the incident response life cycle, right, I use the SANS one because that's the main one that I use, which is PICERL. So that's preparation, identification, containment, eradication, recovery, and lessons learned, right? So let's step through them kind of one by one and see how they could apply to cloud. So preparation, right. So ideally you want to make sure your controls are in place. So what I mentioned, you know, getting Defender for Cloud Apps configured with Office 365, but also enabling ZAP,
enabling your logging, all those kind of things will help you to prepare for an incident. So that's kind of the preparation phase. Identification: again, make sure that the logs are going to a SIEM or however your SOC is consuming them, right? For less mature organizations, it could just be an email or a Jira ticket that they deal with. But ideally, everything should be going to a SIEM and your SOC team should be dealing with it there. So that's kind of the identification: like, have the detections in place so that when suspicious behavior
or known behavior that you think is bad occurs, those will be detected and sort of seen by the people that need to see them. From a containment perspective, you can still isolate the machines like I mentioned about Defender for Endpoint, right? That is a cloud capability because your machines are talking back to the Defender for Endpoint cloud portal through the signals, you can then isolate them. But I think it can go a bit further in terms of like, if we talk about general cloud resources. So let's say you're talking about like a virtual machine in Azure.
How do you contain that? Well, you just make sure it doesn't have any network configuration, like, to talk outbound, or nothing can talk to it inbound as well. So you'd modify what's called the network security groups, which is almost like a firewall in a way, right? You define, like, your allow and deny policies. So you'd have to kind of overwrite the NSG to basically say deny all inbound and outbound, and thus you've contained it, right?
Matt (32:32.238)
Right.
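The NSG-based containment just described looks roughly like this in code: push high-priority deny-all rules onto the virtual machine's network security group. A sketch using the Azure Python SDK; the subscription, resource group, and NSG names are placeholders, and in practice you would snapshot the existing rules first so containment can be reversed cleanly.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

def quarantine_nsg(subscription_id: str, resource_group: str, nsg_name: str) -> None:
    client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)
    for direction, priority in (("Inbound", 100), ("Outbound", 101)):
        client.security_rules.begin_create_or_update(
            resource_group, nsg_name, f"deny-all-{direction.lower()}",
            {
                "protocol": "*",
                "source_address_prefix": "*",
                "destination_address_prefix": "*",
                "source_port_range": "*",
                "destination_port_range": "*",
                "access": "Deny",
                "direction": direction,
                "priority": priority,  # low number = evaluated first, so it overrides existing allow rules
            },
        ).result()

# quarantine_nsg("<subscription-id>", "rg-compromised", "nsg-vm01")
```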
Purav (32:58.348)
But also you could think about it in terms of containing it from, like, a user perspective. Like, so if a user gets compromised, what resources do they have access to? And then how do you lock down those resources from being contacted by that user? Because by the time the threat actor does, you know, whoami and things like this to figure out their permissions, maybe they discover that, oh, I've got a valid role to reach this virtual machine. But if you kick them out of IAM on the virtual machine,
they won't actually be able to communicate with it. They may not even be able to see it in the Azure portal if they're not having the right IAM permissions to it. So from a containment perspective, I think that's two things you can do in cloud, like isolate the resources, endpoints, but also cloud resources by restricting the network communications and also preventing users from communicating to those resources in the first place. So that's containment, right? Eradication, well, as kind of low,
low-tech as this sounds, right? Sometimes it is best to just rebuild the machine. Like, so if that's a virtual machine, just, you know, if you have a gold image or a snapshot, restore that snapshot. Or ideally, you know, just provision a new disk and put the image on it. Because sometimes even if you investigate forensically and you know exactly what the attacker did, you still don't know what else the attacker could have planted, right? And, you know, you may not be capable enough to really understand
each and every kind of registry key, each and every kind of DLL, each and every kind of API hooking that could have been done, like all these advanced techniques. So when in doubt, the best way to eradicate is, yeah, just rebuild the machine and start all over. And then like remediation wise, I think if you can, you know, often the main remediation, for example, if a user has a phishing email in cloud, right, and they've clicked on it and they've entered their credentials, reset their credentials, but also kind of kick them out of sessions, right.
because then if the attacker was logged in as them, they would then be forced to re-authenticate. They'd try the same password that they thought worked, but they wouldn't be able to re-authenticate. If you don't kick them out of sessions, their session would still persist until it expires, right? So they could still be seeing data and taking screenshots and exfiltrating it themselves on their side. So you want to kind of reduce that blast radius. So reset the password immediately and also kick them out of sessions. And Defender for Cloud Apps and...
Purav (35:22.412)
Entra ID do a good job of kind of facilitating those. And then lessons learned, right? So similar to, like, on-prem, if you discover, like, let's say you discover through an incident that, oh, we didn't have Defender for Cloud Apps ingesting Office 365, put some plan in place to get that remediated, you know, make that business case, you know, feed it back to your key stakeholders and higher-ups, and try and get that configured. And if there's still no appetite because of the organization that you're in, try and see if you can get a risk raised.
So next time an incident like that occurs, the business is aware that, hey, we do have these limitations. So I think those kind of examples of the PICERL can be applicable to cloud. But I think the main thing to remember is it's not just a lift and shift. On-prem, you may be collecting actual disks, doing more forensics, looking at memory, things like that. You can do that in the cloud, but you can only do that in the cloud for infrastructure-as-a-service services, like virtual machines. You can't collect memory from a user.
We can't collect memory from a storage account, at least not yet, maybe with AI eventually. But, you know, so it's important to think that on -prem incident response is more about kind of endpoints, servers, infrastructure. So to bring that to cloud, you can only really fully apply that to IaaS, but you need to apply different approaches, like what I mentioned about containing, you know, resources through their network security groups and things like that, to basically apply the same principles in cloud.
So yeah, hopefully that was useful.
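And the "kick them out of sessions" step above, as a sketch: after the password reset, revoke the user's refresh tokens through Microsoft Graph so any stolen session has to re-authenticate. It assumes a token with a permission that allows revoking sessions (for example User.RevokeSessions.All), and the user identifier is a placeholder.

```python
import requests

def revoke_sessions(token: str, user_id: str) -> bool:
    """Invalidate refresh tokens and session cookies for a compromised user via Microsoft Graph."""
    resp = requests.post(
        f"https://graph.microsoft.com/v1.0/users/{user_id}/revokeSignInSessions",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.status_code in (200, 204)  # success means existing sessions must re-authenticate

# revoke_sessions(token, "compromised.user@contoso.com")  # reset the password first, then revoke
```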
Matt (36:54.606)
It is, it is. So no discussion on security would be complete without talking about tools. So let's talk a little bit around forensic tools. I think you've mentioned some of these along the way, but what are some of the tools that you have found most effective in 365? And if you want to comment on, you know, obviously Microsoft has some native tools that are available on the platform. How do they compare to some of the third party tools that you've used?
Purav (37:20.556)
Yeah. So I mean, well, Purview Audit is the main kind of native tool. So that helps you to retrieve the logs in kind of a GUI form, right? So there are some filters you can apply, like the date range, the operation, et cetera, but you can't kind of query that database. So if you don't know what you're looking for and you kind of just want to do a broad search, it's actually better to do that in your SIEM, because you could say, you know, whatever the table name or the index name is, you know, for your username, and then start to kind of understand what
activities are logged. But if you know what you're looking for and you can work within those filters, then Purview Audit is great. Another tool that's a forensics tool, it doesn't directly relate to auditing, but I think can still be helpful, is Purview eDiscovery. So Purview eDiscovery will actually help you find the content across the various workloads. So for example, for Exchange, that could actually help you find the email, because if a user sent a suspicious email with an attachment and you need to get that email to preserve it,
and also understand what's in that email, what's in the attachment, eDiscovery will help you. It can also help you pull files from people's OneDrives, users' OneDrives, and SharePoint sites. So if you have a malicious file on the SharePoint site, or if you have a document where passwords are being shared, for example, and you need to collect that, you can do that through eDiscovery. And then the final one is like Teams chats, right? So people can harass people or like...
whatever, like profanity, et cetera, in Teams chats, which could lead to an investigation. If you need to get that evidence, you'd have to pull the Teams chats using eDiscovery. So that's a bit on the native tools, right? And I think the final benefit of native tools, before I come onto third party, is that they're kind of what we call in-place tools. So you can get the data or even the content without actually moving the data. Whereas sometimes third-party tools require an export. So one example is, like, with Exchange Online, when I'm
Matt (39:08.526)
Hmm.
Purav (39:14.764)
collecting the emails using eDiscovery, I'm not moving it from wherever that mailbox is stored on Microsoft's backend server. I'm just retrieving a copy of it to then have in my forensics bucket or whatever evidence preservation method that I use. Whereas some third -party tools would actually export it out from Microsoft Exchange Online into their tool. The risk with that is then you can mess with the metadata because then the modified date and
created date, et cetera, could be completely different, and then you've lost your chain of custody. So I think those in-place tools are definitely great for that. But third-party tools can sometimes offer new perspectives on the way you consume native logs. So one I'm gonna shout out is from Invictus Incident Response, which is the Microsoft Extractor Suite. I haven't used that fully myself yet. Just, yeah, I'm planning to do that soon.
But from what I've seen, they do a good job of, like, correlating the various logs and acquiring them in a very efficient format. So with Purview Audit or with PowerShell natively, you have to kind of specify a lot of fields and run multiple searches and things like this. Whereas they can just acquire the key kind of operations and activities for a particular user en masse. So that will then free up your time, right? And it's already doing some job of kind of correlating those events
and presenting you with a high value output off the bat. But what I will say is some organizations may be a bit risk averse to third party tools. So start with those native tools. Maybe explore those third party tools in your own demo environment if you have such access or spin up a trial subscription. Get the value add for those third party tools. And then similar to the Defender for Cloud Apps conversation, build that business case and see if it can be operationalized in your organization.
But yeah, I would say start with the native tools because they're trusted, they're Microsoft first party, et cetera. But I have noticed some Microsoft blogs and some Microsoft Learn pages even recommending third party tools, like for example, Hawk. So Hawk is a way to forensically acquire things like forwarding rules, any sort of high level exfiltration kind of attempts by users.
Purav (41:38.828)
specifically forwarding rules, but it can also get tenant-level settings, which could lead to external sharing and things like that. So yeah, in some Microsoft documentation, you may even see Hawk mentioned, but they do disclaim it to say, this is a third-party tool, we can't guarantee the success of it, et cetera, et cetera. So play with it as you wish. But yeah, for an organization, some organizations may not entertain third-party tools. You may need admin rights to even install them, which one could argue
Local users shouldn't really have admin rights, right? So maybe you have some admin in your company to provision it for you, you get access to it and then you can kind of query it on an ad hoc basis. But yeah, start with the native tools and then, you know, play around with those third party tools. And I think the people that disregard third party tools because they're like, well surely the best enrichment would be in the native tools. Ironically, no. Ironically, I think the people that actually build those third party tools themselves are very much experts in the game.
and they have a much better understanding of how to correlate and what actually adds value for the customers. I'm not saying Microsoft don't, but I think the third parties that deal with multiple clients and multiple customers, they just have a much more holistic view than maybe Microsoft has access to. In an ideal world, maybe some of the third-party tools could become native tools, but yeah, maybe then that would lose kind of market share for the third party. So.
Matt (43:03.982)
It's interesting you say that because I think that's common, right? When you think about the Microsoft ecosystem, they have partners, right? They have ISVs that are part of their security program. And even though Microsoft has, like, some really good native tools, you know, like you said, the reason these third-party tools exist is because there are gaps, right? Or there are tools that Microsoft may have that are okay, but someone has been in your type of shoes, been in a similar role,
and they're like, hey, I had to develop something in order to fill this gap. And they're like, wait, if it's something that I need, this might mean that there are other 365 customers that need it. So yeah, I think that's pretty common. And I'm curious, do you maintain a list of tools anywhere, whether it's in GitHub, on a website, anywhere? I'm just curious. I know I didn't ask you this beforehand, but...
Purav (43:44.684)
press.
Purav (43:57.932)
No, not really. I mean, I suppose the main tools I use are probably Purview Audit and eDiscovery. So the native ones, right? But I'm trying to play around with Graph. So Graph is another way to get access to the audit logs. There's a Graph API. There's a bit of configuration required there.
Like, you need to create an app registration, and that registration is basically an application in your tenant, which is then given permissions and kind of acts on behalf of your tenant to then get the logs and things you need, right? So you authenticate based on your user or based on a certificate, et cetera, which the app registration then has those permissions and can then query Graph. But no, I don't have a list, because I'm not kind of a developer or, like, a tools person as such. But yeah, I mean, Purview Audit
and Purview eDiscovery I've definitely dived into. So I've done various talks and contributions like this podcast to talk about it. But no, I don't have a repository of tools because I'm just using the native tools. I haven't built my own tools or anything like that. No. But yeah, for people curious about the non-native tools, I think the Microsoft Extractor Suite by Invictus is definitely worth a look. So yeah, that's all I can say about it.
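The Graph route described above looks roughly like this: an app registration with an application permission such as AuditLog.Read.All, a client-credentials token from MSAL, then a call to the directory audit logs. The tenant, client ID, and secret below are placeholders, and this particular endpoint covers the Entra ID (Azure AD) audit log rather than the full UAL.

```python
import msal
import requests

TENANT_ID = "<tenant-id>"          # placeholders for the app registration described above
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<client-secret>"  # a certificate works here too, as mentioned

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?$top=50",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    timeout=30,
)
resp.raise_for_status()
for record in resp.json()["value"]:
    print(record["activityDateTime"], record["activityDisplayName"])
```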
Matt (45:12.238)
So you're deep in the weeds with 365, pretty much everything around the Microsoft ecosystem. How do you stay sharp? What's your routine look like? Because things are constantly changing.
Purav (45:24.268)
Yeah, it's true. I mean, especially now with AI, right, different levels of attacks and deepfakes and all sorts. So I think, yeah, because as we know, like, cyber does constantly change, like, attackers are constantly getting better. I think the main way I keep up is just checking, well, the main source is news websites. So sites like BleepingComputer,
you know, The Hacker News, these are really good sites for deep diving into kind of attacks and the news when those happen. But I think, like, The DFIR Report, right? Because now I'm in incident response, like, it's great to see the whole life cycle and how they detail them
from, like, an incident response perspective, because it gives me new ideas of, like, these are new initial access vectors. I wonder, can I threat hunt for these TTPs, and things like that. I think podcasts also really help, right? So Darknet Diaries, some of your audience may be familiar with. I really like the stories that they tell
and the kind of historical elements to those as well. And, you know, Cloud Security Today, this podcast, Matt, right? This has become a new favorite of mine as well. So this definitely helps me keep up. This definitely helps me stay up to date and also offers a refreshing perspective. Like I remember, I think a few months ago, you had the security as a process episode, which really resonated with me, the whole process mining kind of concept. So...
Matt (46:29.646)
Thanks for the plug.
Purav (46:45.131)
Yeah, for anyone interested, you can go watch that. I'm not going to talk about that here, just to kind of get that additional reach. But another last one, right, podcast-wise, is the SANS Internet Stormcast, Storm Center cast, something like that. SANS Internet Stormcast, yeah, something like that. It's by a guy called Johannes. And it's like a quick five minutes in the morning usually, right? And when I was in vulnerability management, it was essential for me, because they would break down the key vulnerabilities of the day or of the week,
especially during Patch Tuesday and things like this because it then gave me a quick view on how to present my threat bulletin to clients, you know, a few hours later, basically. But I think, you know, ultimately after the news websites and the podcasts, the biggest place is LinkedIn because like I'm really active on LinkedIn. To be honest, some people think I'm like a bot on LinkedIn because they'll send me a message and I reply instantly. I'm not. I'm not a bot. I am real. I just I'm usually quite active on LinkedIn.
But I think the reason why I say LinkedIn is because a lot of people in the community share useful content, right? And it's really helpful to be aware of what else is out there. Like, so you've rightly said, Matt, that my focus is M365. I do still keep an eye out on AWS, right, and other players to kind of know holistically what's going on in this industry. And I think LinkedIn is great for that because people will share all sorts on LinkedIn. And just taking a quick two, three minutes, whatever, to read a post or...
learn about it, do some Googling on the side, watching a short video, really helps you bolster your knowledge in cybersecurity. So if I ever do deal with something in AWS, I could be like, yeah, I remember reading that somewhere or watching that somewhere. And maybe I can find that, if I get through all the things that I've saved, and eventually find it again to then rewatch it and, you know, approach it with a bit more of a deep insight. So yeah, I think news websites, podcasts, and LinkedIn together is what helps me to stay sharp
in a changing industry.
Matt (48:44.43)
So this has been a pretty far ranging interview, but is there any other question I should have asked you?
Purav (48:51.404)
Yeah, I think the main question that you should have asked, potentially, is like: how do I get started with forensics in M365, right? And to that, what I'd say, and I maybe alluded to this earlier, is look at your own activity in the logs, right? So one I haven't covered on Deciphering UAL, and hopefully I will do, so this is planting that seed for the future, is an inbox rule, right? So as we mentioned, inbox rules could be used for exfiltration. They could be used for forwarding out of the organization.
But if you want to learn how those audit events look like from a forensics perspective, create your own inbox rule, right? So from a colleague with a subject of whatever it is, and then make that rule in your Outlook client. Look at the UAL. There's an operation called update inbox rules. And then look at the properties. Can you actually see the string that you specified? Can you see the user that you said if the email is from this person, right? And once you do a few of those with different varieties, you'll kind of get an idea.
Could you do something like, I only want an inbox rule where the email has an attachment, right? That is one of the options. So you could take that option in another inbox rule, you know, give it 20 minutes for the log to populate, look at the log and then say, okay, well, where is the attachment? Oh, there's a property called has attachment: true. Okay. That's how I then start to understand it, right. And for people that maybe got a bit lost with the inbox rules one, because it can be a bit advanced, another one you could do is just sending an email and classifying it with a sensitivity label.
So sensitivity labels in most organizations is a way to classify data because then it helps you know what's confidential, what's internal, what's potentially sensitive, what's PII, et cetera. So you can send an email to, again, a colleague, you can send it to yourself, it doesn't really matter, it doesn't need to go outside the organization because you may flag some policies there. So just send it to yourself or send it to a colleague with one type of label.
and look at the audit event, there's a sensitivity label ID. Okay, well, I wonder what would happen if I send another email with the same label, do I get the same ID? Yes, I do. Okay, cool. Well, that tells me that this ID is that label, right? Now what happens if I change the label? I get a different ID. Okay, cool. So, you know, and then you start to build up your own dictionary that way. And on both points, like the sensitivity label, and again, hopefully this will be covered in my series eventually, you can use PowerShell to decipher them, so you can get a static list of your sensitivity label GUIDs
Purav (51:08.556)
as well as the friendly names, right? And one thing that I've thought about, and I've mentioned, I think, previously in other kinds of contributions, is you could have that as a lookup table in your SIEM, because then when those alerts or detections, queries, et cetera, fire, of, like, users sending an email with that label, you could do a lookup to say, okay, well, what label have they actually classified this as? And, you know, for example, if you have a policy, not that I'm advising people on how to bypass things, but if you have a policy like, if confidential is
attempted to be sent outbound, block it, what's to stop a user from marking it internal and then trying to send it outbound, right? Even though it's confidential. So there are smart things you can do around that in terms of, like, automatically classifying based on content. So as you mature, you can look at those. But I'm getting into the weeds of something outside of your question. So I'll stop there. But I think the key thing is just conduct your own scenarios, right? Look at your logs and try and understand them. And that's the main way to get started. And that's definitely how I got started.
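A sketch of the lookup-table idea from this answer: keep a mapping from sensitivity label GUIDs to friendly names (exported once from your own tenant, via PowerShell as suggested) and use it to enrich UAL events before or inside the SIEM. The GUIDs and the field name here are placeholders for illustration only.

```python
# Hypothetical GUID-to-name map; in practice you'd export this from your own tenant
SENSITIVITY_LABELS = {
    "11111111-1111-1111-1111-111111111111": "Confidential",
    "22222222-2222-2222-2222-222222222222": "Internal",
}

def enrich_event(ual_event: dict) -> dict:
    """Add a human-readable label name to a UAL record that carries a sensitivity label ID."""
    label_id = ual_event.get("SensitivityLabelId")  # field name is illustrative
    ual_event["SensitivityLabelName"] = SENSITIVITY_LABELS.get(label_id, "Unknown label")
    return ual_event

print(enrich_event({"Operation": "Send",
                    "SensitivityLabelId": "11111111-1111-1111-1111-111111111111"}))
```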
Matt (51:44.814)
Mm-hmm.
Purav (52:08.46)
So, yeah.
Matt (52:09.998)
I love it. I love it. Well, Purav, it's been great. Thank you so much for coming on the show.
Purav (52:14.732)
No, thanks for having me, Matt. It's been really good, and really good questions, and I'm excited for your audience to discover my insights and hope they find them valuable.
Matt (52:24.654)
I'm sure they will. Thanks for coming on.
Purav (52:26.636)
Thank you.