Cloud Security Today

Keeping Governments Secure in the Cloud

July 13, 2021 Matthew Chiodi Season 1 Episode 5

Cloud security is essential for any business but particularly for government agencies. On today’s episode, we speak with an expert in the field, Ravi Raghava, who is Chief Cloud Strategist at General Dynamics Information Technology (GDIT). Ravi speaks about his personal experience with dozens of cloud deployments for civil agencies and shares best practices.


Acronyms mentioned in this episode:

  • ATO = Authority to Operate
  • POAM = Plan of Action and Milestones
  • CDM = Continuous Diagnostics and Mitigation
  • OCM = Organizational Change Management


“Over the next few years, we will see a lot of traction and we will see accelerated workload migration to the cloud. It's not just one cloud but multiple clouds, and multi-cloud is becoming the new norm.” — Ravi Raghava [0:04:55]

“We are very strong advocates of OCM, and we work with our government customers to have a well thought-through strategy, providing the right skills, the right training, right medium of training to people.” — Ravi Raghava [0:25:43]

“Having those security frameworks in place, testing infrastructure, having those security tools in place nicely help you automate the entire thing because automation is key.” — Ravi Raghava [0:31:20]

Links Mentioned in Today’s Episode:

Ravi Raghava on LinkedIn
Prisma Cloud

Secure applications from code to cloud.
Prisma Cloud, the most complete cloud-native application protection platform (CNAPP).


**Note: Transcript is automatically generated. Expect typos and crazy stuff that a poorly written ML algorithm thought was said but probably wasn't! :)**

Thanks for joining us for today's podcast. My name is Matt Chiodi, and today we have Ravi Raghava from GDIT on to talk about cloud security and cloud deployments from a federal perspective. We haven't talked about this topic before, so I'm really excited to have Ravi on to chat about it. Ravi, thanks for joining us today. I'd like to start things off by asking you to tell our audience a little bit about GDIT and what you guys do. Then maybe tell us about your role and how you're involved at GDIT.

[00:01:27] RR: Excellent. Thank you, Matt. Greetings to all our listeners. It is a pleasure to be on the podcast with you. My name is Ravi Raghava, Chief Cloud Strategist at General Dynamics Information Technology, GDIT. GDIT is a leading federal systems integrator, and I lead the Cloud Center of Excellence for GDIT’s Federal and Civilian Division, steering cloud business growth. I like working with the customers in shaping enterprise cloud and cyber strategy and helping drive cloud migration. GDIT is a known federal systems integrator with several decades of experience helping our federal government, state, and local governments. We do [inaudible 00:02:11] anything in IT, and we are a subsidiary of General Dynamics Corporation.

[00:02:20] MC: Excellent. So, one of the things that I have gotten really involved with over, I'd say, the last two years is the whole public sector, the federal space. I have been amazed at just the amount of progress that's happening on the federal side with moves to the cloud. I would love to hear from your perspective, when you think back over the past few years, what activities have you seen help agencies move to the cloud faster?

[00:02:50] RR: Great question, Matt. First, I believe the acquisition of cloud services has gotten better. Compared to a few years back, there are several lessons learned across agencies, and we certainly have more room for faster cloud adoption and acceleration of workload migration. Secondly, over the last 15 months, as a side effect of the pandemic, many agencies have adopted cloud services at a faster pace for increased resiliency and redundancy. To give an example, we helped an agency transition to fully remote operations within 24 hours during the early stages of the pandemic. That gives you an idea of what cloud services have to offer today.

The pandemic is another example. Given the scalability and computing power that the cloud offers, we were able to work with some of our health customers, health agencies, to provide the research and data analytics capabilities needed to uncover and unravel huge amounts of data. I see agencies accelerating their workloads to the cloud more and more every year. So we've come a long way.

Also, thanks goes to initiatives like the government's Cloud First and Cloud Smart initiatives; that needs to be appreciated. Agencies have leveraged such initiatives to adopt cloud and begin their cloud transformation journeys. There are some lessons learned for us, which we'll talk about during this podcast. But I do see that over the next few years, we will see a lot of traction and accelerated workload migration to the cloud. It's not just one cloud but multiple clouds, and multi-cloud is becoming the new norm. Agencies started with one cloud. Now, we see agencies slowly branching into multi-cloud environments and adopting cloud services like Microsoft 365 on the SaaS, software as a service, side as well.

[00:05:32] MC: You've been part of the cloud game, I was looking at your LinkedIn profile, going back to 2012, and it looks like you've helped several government agencies move 100-plus applications to the cloud. When a federal agency is planning a move to the cloud, what are some of the most important things from a security perspective they should be planning for upfront? I would ask you to think about this from a "how" perspective, how they should do it versus what they should be doing. So think about some practical things. What would you recommend based on your experience?

[00:06:10] RR: I love that question, Matt. First and foremost is having a vision for the cloud. Assuming we have a well-defined vision for the cloud, with well-defined goals and objectives for the agency, we need to have the agency perform a discovery and assessment of the current state. That is key to understanding the current portfolio and the gaps in the existing ecosystem of tools and processes, as well as any gaps from a skills perspective. I think those are key.

From a security perspective, first and foremost when we talk about cloud, we need the cloud ATO. Based on the sensitivity of the agency's data and the agency's requirements, we need to develop a system security plan; the sensitivity can be FedRAMP low, moderate, or high impact. That's going to be the fundamental thing, and this is where agencies typically spend most of their time in getting to the cloud. The ATO process takes considerable time and can take anywhere between a few weeks and a few months. In this whole process, we need to define the security boundary and the security controls: access control, auditing, configuration management, and so on.

We need to allocate time for this, and it does require extensive testing and validation of all these controls. An agency should expect a Plan of Action and Milestones, simply a POAM, to come out of that. We need a strategy to address those POAMs before the agency can start moving workloads into the cloud. So the ATO becomes a very fundamental thing when it comes to federal agencies.

[00:08:15] MC: You mentioned two acronyms that I think some of our listeners may not be familiar with. You mentioned ATO and POAMs. Could you explain to us just what those are and how do they relate to, say, a federal agency that might want to move to the cloud?

[00:08:31] RR: ATO is Authority to Operate or Authorization to Operate, and it is a requirement for an agency based on the data sensitivity and the requirements within the agency. We need to follow certain security controls. This is where I mentioned leveraging something like FedRAMP. FedRAMP, as you know, is the Federal Risk and Authorization Management Program, and it has defined controls using the NIST 800-53 controls plus additional federal baseline controls for the agency to implement. So FedRAMP defines, in an abstract way, what controls need to be in place. The agency, based on its requirements and its security posture, defines how those controls are implemented.

I'll give some examples too. For example, access control and auditing: how are you planning to audit the activities in the cloud? In AWS, we have something like CloudTrail. We want to monitor the behavior of the users, the API calls that are being made, and things like that. So it requires some effort. Once you have all of these controls and the documentation ready, a third-party assessment team goes through the documentation and validates the controls you have defined against your implementation, whether the implementation actually satisfies each control or not. If some controls are not satisfied, that leads to a Plan of Action and Milestones, or simply a POAM.
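To make the auditing step concrete, here's a minimal Python sketch of the kind of API-call review described above. The event shape is a simplified, illustrative stand-in for CloudTrail records, not the exact schema, and the sample data is fabricated.

```python
# Scan audit events and flag API calls that were denied.
# The dict layout loosely mirrors CloudTrail records (userIdentity,
# eventName, errorCode) but is simplified for illustration.

def flag_denied_calls(events):
    """Return (user, action) pairs for API calls that were denied."""
    denied_codes = {"AccessDenied", "UnauthorizedOperation"}
    return [
        (e["userIdentity"], e["eventName"])
        for e in events
        if e.get("errorCode") in denied_codes
    ]

sample_events = [
    {"userIdentity": "alice", "eventName": "DescribeInstances", "errorCode": None},
    {"userIdentity": "bob", "eventName": "DeleteBucket", "errorCode": "AccessDenied"},
    {"userIdentity": "eve", "eventName": "RunInstances", "errorCode": "UnauthorizedOperation"},
]

print(flag_denied_calls(sample_events))  # [('bob', 'DeleteBucket'), ('eve', 'RunInstances')]
```

In practice, an assessor would pull real events from the audit log, apply the same kind of filter, and trace each flagged call back to a control.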

[00:10:31] MC: Okay. So that's the POAM. That makes sense. I know that when I first came into this world, I had all these new acronyms dropped on me, so I appreciate you explaining that for our audience. You had mentioned that going through the ATO, the Authority to Operate process, can take weeks to months. Because a large part of that ATO process is security-related, are there things that you have seen in your experience that federal agencies can do to speed it up? How much is in their control versus just procedural things they need to go through from a FedRAMP perspective? Are there things they can actually do to speed up the ATO process?

[00:11:16] RR: Absolutely. There are two parts to the ATO process. First and foremost is the ATO for the actual cloud environment. Second, when applications move to the cloud, you also need an application ATO. How we can augment or speed up this whole process is essentially by automating the security documentation. If there are tools that we can leverage, that is the ideal thing. In fact, we have helped multiple customers get their ATOs in a much shorter timeframe. In such cases, we were able to leverage tools to get the common security controls in place, and we added the implementation details, tailoring them to the needs of the agency. That's one way to cut down the time it takes.

Once you have something like that, it becomes much easier for other workloads and applications that move to the cloud to get their respective ATOs, because once you have a base ATO, the applications can inherit those controls. Having a tool-based or automated way of documenting those security controls enhances the speed at which you can get an ATO and, subsequent to it, a continuous ATO. Let's talk about that one too.

Having real-time tracking of security controls is another good mechanism to have in place, which can help you significantly in eliminating or mitigating POAMs. For the other applications too, because we talk about accelerating workload migration to the cloud, the automation of the SSP combined with real-time tracking of security controls certainly augments it. From a cost perspective, it can drive down your costs significantly, combined with having well-established DevSecOps processes. DevSecOps or SecDevOps, however you want to call it, can certainly speed up the ATO process for the agencies.
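The real-time control tracking described above can be sketched as "controls as code": each control gets an automated check, and anything unsatisfied becomes a POAM item. The control IDs below come from real NIST 800-53 families, but the checks and system state are hypothetical.

```python
# Evaluate each security control against the current system state;
# any control whose check fails becomes a POAM entry.

def evaluate_controls(controls, system_state):
    """Run each control's check; unsatisfied controls become POAM items."""
    poams = []
    for control_id, check in controls.items():
        if not check(system_state):
            poams.append(control_id)
    return poams

# Hypothetical automated checks keyed by 800-53 control IDs.
controls = {
    "AC-2 (account management)": lambda s: not s["stale_accounts"],
    "AU-2 (audit events)": lambda s: s["audit_logging_enabled"],
    "CM-6 (config settings)": lambda s: s["baseline_applied"],
}

state = {
    "stale_accounts": ["old-svc-user"],   # an unused account still active
    "audit_logging_enabled": True,
    "baseline_applied": False,
}

print(evaluate_controls(controls, state))
# ['AC-2 (account management)', 'CM-6 (config settings)']
```

Re-running checks like these on a schedule, instead of once at authorization time, is what turns a point-in-time ATO into a continuous one.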

[00:14:04] MC: That's interesting. You bring up the continuous ATO. I had a phone call in the last two weeks with a non-civilian federal agency, we'll just put it that way. The whole idea of continuous ATO was of huge interest to them. How do they not go through this largely manual process, but instead make it automated so that it's not just a one-time check? A lot of times, if we look at the commercial side, certain standards like SOC 2 or ISO 27001 are often point-in-time checks, right? You meet all these different criteria.

Then, as we all know, any type of system usually has some type of scope creep, where it starts off in a good state and then over time tends to drift toward a more insecure state. So I'm curious, from your perspective and what you've seen around the whole notion of continuous ATO, what are some things that have really worked well for some of the agencies you've worked with?

[00:15:13] RR: Yeah, absolutely. For some of the agencies I've worked with very closely, I mentioned tools, so having tools and having those controls in place. When we talk about FedRAMP moderate, we are talking about 160-plus controls, so you can imagine how much work systems integrators and agencies have to deal with. Having a tool certainly helps. Having the appropriate content helps. I would also suggest establishing interagency collaboration on what has worked well. Best practices and lessons learned can be shared across agencies. If an agency is adopting cloud and beginning its cloud journey, it can have a good exchange of communication with peer agencies to get lessons learned and best practices from them.

From our perspective, we encourage agencies to have such collaboration. That way, the agency that's going to the cloud doesn't have to reinvent the wheel and can learn from others' experience. This is where companies like GDIT really shine, because we bring best practices and lessons learned to the table, and that mitigates some of the hurdles and bottlenecks right out of the gate.

[00:16:49] MC: Yeah, absolutely. We have firsthand experience. I say we; Palo Alto Networks went through the FedRAMP moderate process for a couple of our different cloud-based solutions. I know that, as you mentioned, the first time you go through it, it can be a little bit painful. It's funny, I saw an article that came out, I think earlier this week, that GSA, which runs the FedRAMP program, through its Technology Transformation Services arm, has made significant investments in automating some of those security authorization processes for cloud service providers. That was something I was very excited to see, as there are literally thousands of companies going through that FedRAMP authorization process. I know that will be welcome news for agencies as well, because the more solutions that go through it, the more options they have to securely operate as they move into the cloud.

One of the other things that goes hand in hand when I talk to various agencies is this whole desire to move toward DevOps. A lot of them, though, really have no idea how to get there. So maybe today they're operating in a very siloed, waterfall fashion. They want to get to DevOps. They know it's something that's been happening for years on the commercial side. Put on your consulting hat. If you were advising a civilian agency that is still running traditional monolithic applications but really wants to move toward microservices in the cloud, where should they start? What examples, best practices, or worst practices can you share?

[00:18:38] RR: Absolutely. That is a great question, Matt. The first and foremost thing I would suggest is to start off with an assessment or rationalization of their applications and their current state. That is very important to identify and understand what the agency has and what the disposition for those applications is, whether they are cloud-suitable or not. So determine that first. I'll give you an instance.

GDIT has something called the Move to Cloud framework, which is a phased approach to helping agencies get to the cloud. The first phase is essentially to discover and assess. This is where we help agencies go through an application rationalization exercise, determine the suitability of those applications, and get a complete understanding of the current state. This can be done in two ways. One is to perform an automated assessment using industry tools. The second is to have stakeholder discussions: meet with business stakeholders, meet with the operations teams like the SecOps teams or the [inaudible 00:19:59] teams, to understand how they operate, what processes they have in place, and things like that. That gives a great preview of the current state.

Conducting a value stream mapping exercise like that will help identify application dependencies or barriers. As part of the assessment, identify what applications and application components can be decoupled. This goes back to the DevOps and microservices question that you asked. Once we identify the dependencies and the application components that can be decoupled, then we can talk about how those can be converted into microservices and appropriately design the target state architecture. This is something that I love to do.

This exercise is also very helpful in prioritizing, developing, and deploying microservices in the cloud. The decoupling of these services has several benefits, because the agency and the agency customers have to think, "Hey, what's in it for me? Why should I do that?" When you deploy decoupled services, you have the option of deploying them independently, compared to monolithic applications. Monolithic applications have a benefit too: they're easy to develop and deploy.

But with microservices, when you decouple those and everything is API-driven, you get the benefits of scalability. When we talk about scalability, we are looking at scaling individual tiers of your architecture independently. That way, you also reap cost efficiencies, because you don't necessarily have to scale up multiple servers; you can compartmentalize your web tier versus your app tier versus your database tier. It offers significant cost benefits as well. Scalability is big, and so are the cost benefits. These are fundamental to the microservices architecture.
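A back-of-the-envelope illustration of the cost argument: if a traffic spike hits only the web tier, a monolith has to replicate everything, while decoupled tiers scale only where the load is. All prices and replica counts below are made up for the example.

```python
# Compare scaling a monolith as a unit versus scaling tiers independently.

def monolith_cost(replicas, cost_per_replica):
    # A monolith scales as a unit: every replica carries web + app + db logic.
    return replicas * cost_per_replica

def tiered_cost(tier_replicas, tier_costs):
    # Decoupled tiers scale independently: pay only for the hot tier.
    return sum(tier_replicas[t] * tier_costs[t] for t in tier_replicas)

# A traffic spike hits only the web tier.
print(monolith_cost(replicas=10, cost_per_replica=3.0))  # 30.0
print(tiered_cost({"web": 10, "app": 2, "db": 1},
                  {"web": 1.0, "app": 1.0, "db": 1.0}))  # 13.0
```

The numbers are arbitrary, but the shape of the saving is the point: only the web tier was replicated ten times, while the app and database tiers stayed small.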

I will also add a few data points when it comes to microservices, no matter how attractive they sound. We have to look at it from the agency perspective, because it requires a well-thought-out strategy. You need a well-defined roadmap and a catalog of offerings for your customers to leverage, so that you empower and enable those customers. You need a solid foundation, the right processes, and standard operating procedures to support that ecosystem of microservices. So when agencies are looking to adopt microservices, these are some of the things they need to consider.


[00:23:07] MC: Prisma Cloud secures infrastructure, applications, data, and entitlements across the world's largest clouds, all from a single unified solution. With a combination of cloud service provider APIs and a unified agent framework, users gain unmatched visibility and protection. For our federal customers, Prisma Cloud is now FedRAMP moderate. To find out more, go to 


[00:23:36] MC: That makes sense. It's interesting. I think oftentimes we assume the public sector is just so different from what we see on the private side of the house. I work with customers across the board, predominantly on the commercial side, but it's interesting that they are all after the same thing: being able to release software faster and better serve their own markets. One of the things I often get a question about is the people side of moving to DevOps. In people, process, and technology, there are obviously challenges on the process and technology parts, but the people side seems to be an area where many organizations struggle, just moving mentalities.

Again, you've been working with different federal agencies going back to 2012, so almost a decade. What have you seen on the people side that can really help a federal agency move closer to that vision of DevOps? What's worked well?

[00:24:49] RR: Great question, Matt, again. One thing is what we call organizational change management, or simply OCM. Why is that important? An enterprise invests in its people and staff, and we need to educate and train them so that they are familiar with the tools and services that the cloud service providers offer. Having a strategy for OCM essentially bridges the delta, the skillset gap, I would say. We have often seen agencies with a good OCM strategy providing training options.

In fact, we do that from GDIT as well. We are very strong advocates of OCM, and we work with our government customers to have a well thought-through strategy, providing the right skills, the right training, and the right medium of training to people. We work with our customers to educate them through road shows, webinar series, and so on, and encourage them to leverage those trainings. We impart training through different mechanisms: virtual sessions, recorded seminars, webinars, and all that.

That bridges the gap. Agencies are certainly working on that, accelerating their strategy on the OCM side and encouraging their staff to get trained and certified on platforms like AWS, Azure, and Google. We are seeing that pattern, and from our perspective, we are providing the appropriate support to augment the OCM capabilities within enterprises.

[00:26:53] MC: That makes sense. Almost always, obviously, I'm looking at this through the lens of security. So as an organization, whether an agency or a commercial entity, is thinking about that next iteration of DevOps, what almost always comes up is DevSecOps. Or, as you said, sometimes in federal I see SecDevOps. I know many agencies want to go there; a lot of them haven't even gotten to DevOps. But let's talk about that next iteration.

So maybe a federal agency is on its way with DevOps. They're working toward it. They've done the OCM. What's actually happening in the field, from your perspective, around DevSecOps? What recommendations could you provide around getting there? What have you seen? What's worked well?

[00:27:42] RR: Absolutely. We are seeing various levels of maturity when it comes to DevOps, DevSecOps, or SecDevOps, as some like to call it. Some agencies are mature in their DevSecOps adoption, with a well-defined tool chain and standard processes supported by standard operating procedures. Those agencies are tech-savvy enough, with the appropriate support structure, that they are able to innovate and accelerate their workloads.

Other agencies struggle a bit with DevSecOps adoption. Such agencies need a strategy in place. When we talk about strategy, we need to define the tool chain that will work well for the requirements, especially the security tools: tools like Checkmarx, JFrog, Prisma Cloud, AppScan, and things like that. This is fundamental, because some agencies have already made investments in a tool chain they use for their on-premises workloads and infrastructure. When they get to the cloud, they often want to use the same tool chain, because there is an investment from a people perspective too. We talked about tools, people, and process: people, process, and technology. It becomes much easier for them to have a cloud-agnostic pipeline, or a pipeline they are accustomed to using.

There are benefits to this approach too. They bring their DevSecOps pipeline to the cloud, a pipeline that has a Git repository and tools like Rundeck or Artifactory, plus, from a security perspective, some of the tools I mentioned like Prisma Cloud, JFrog Xray, and AppScan. They are also able to adopt cloud-native services. You may have a requirement for supporting multiple clouds or a hybrid environment with multiple data centers. I don't want to leave that out, because data centers alongside the cloud CSP platforms is the norm we are seeing, and that's going to be there for the foreseeable future, though cloud adoption is definitely increasing.

So if you are invested in a DevSecOps tool chain, you want that consistent process across data centers and clouds, and you want to break down the silos. You don't want, "Hey, I have this cloud. I have my data center. I have a second cloud," each with its own siloed processes; that is where agencies tend to increase their costs, because investments in those areas can shift and grow. On the other hand, there are benefits to having a very cloud-native pipeline, because you can take advantage of the pay-as-you-go model. You don't have the overhead of maintaining and managing that infrastructure, and it triggers only when you make changes. Those services get triggered only when a commit is made and a build is made. Then the deployment happens, and you have all the controls.

Having those security frameworks in place, testing infrastructure, and having those security tools in place nicely help you automate the entire thing, because automation is key. When we talk about cloud, we want to build environments in a consistent fashion. We want to build our application stack in a consistent fashion. We want to build the infrastructure using infrastructure as code. The DevSecOps pipelines and processes that we talked about only add to that vision of automating your environment. Cloud native's biggest advantage, as I mentioned, is leveraging the pay-as-you-go model, which brings down your capital investments, so you're just talking about operational costs.
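One way to picture the guardrails-plus-infrastructure-as-code idea is a pre-deployment check that scans resource definitions and fails the build on a policy violation. The resource format below is a simplified stand-in for illustration, not a real IaC schema.

```python
# Guardrail check run against infrastructure-as-code before deployment:
# reject any security-group rule that allows ingress from the whole internet.

def find_open_ingress(resources):
    """Return names of security groups that allow ingress from 0.0.0.0/0."""
    violations = []
    for r in resources:
        if r["type"] != "security_group":
            continue
        for rule in r["ingress"]:
            if rule["cidr"] == "0.0.0.0/0":
                violations.append(r["name"])
                break
    return violations

# Hypothetical stack definition.
stack = [
    {"type": "security_group", "name": "web-sg",
     "ingress": [{"port": 443, "cidr": "0.0.0.0/0"}]},
    {"type": "security_group", "name": "db-sg",
     "ingress": [{"port": 5432, "cidr": "10.0.0.0/16"}]},
]

print(find_open_ingress(stack))  # ['web-sg']
```

Wired into the pipeline, a check like this runs on every commit and build, which is what makes the control continuously enforced rather than a point-in-time audit.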

[00:32:10] MC: Right. Yeah. I think that's interesting, because when I talk to various agencies, they've either done some DevOps or are now moving toward DevSecOps. What's interesting to me is that the ones that are actually doing this are already starting to see the benefits, because it is, in my view, almost impossible to get a continuous ATO if you don't have this type of automation in place. If you're not moving toward microservices and containerized architecture using things like Docker or Kubernetes, it is much more difficult to do a continuous ATO with a traditional monolithic application. Are you seeing that as well?

[00:33:01] RR: I completely agree with you. You're spot on there. We are definitely seeing that, and a few things. When people started talking about containers, Docker was definitely the go-to technology [inaudible 00:33:17]. We still see that Docker is very popular among various agencies and among developers. From an orchestration standpoint, I've seen agencies use tools like OpenShift, and I've also seen agencies use Kubernetes-based technologies and services, for example Amazon's EKS and Google Anthos. From a cost perspective, there are certainly advantages to leveraging cloud-native services like that.

Of course, for an on-premises and hybrid cloud environment, or a hybrid environment so to speak, having an agnostic tool chain and processes also makes sense. When we talk about agencies, what's best for them essentially depends on their environment, their current workloads, and the business needs.

[00:34:23] MC: That makes sense. So we've mentioned a lot of different federal standards from a security perspective. We talked about FedRAMP. We've talked about some of the processes they need to go through with the Authority to Operate, the ATO. One of the programs I spent some time with, probably over a year ago now, was the CDM program, the Continuous Diagnostics and Mitigation program. For those who aren't familiar with it, the focus of the CDM program is to help reduce agencies' threat surfaces, increase visibility into cybersecurity posture, and help streamline some of the reporting around FISMA, the Federal Information Security Modernization Act. Ravi, from your perspective, where does CDM come into play with cloud security?

[00:35:12] RR: Great question. You touched on some of the key points about that program. The CDM program essentially provides an approach to enhancing the cybersecurity posture of government agencies. From the agency's perspective, you need to have the right set of cybersecurity tools and integration services like [inaudible 00:35:37], trusted Internet connections, for our customers. We have seen agencies go from, I think, 1.0 to 2.0 and now 3.0, which is significant when we talk about CDM. So having the right approach to the integration services is key.

Implementation of a zero trust model is another aspect agencies need to have in place: visibility into the endpoints, visibility into access, real-time visibility into the threat surface that you mentioned. Those are key, and we need the appropriate tools that can provide the right level of visibility into the threat surface. We need tools that provide the appropriate reporting mechanisms and dashboards, tools that can help agencies be proactive in warding off attacks and reducing the threat surface. That is going to be fundamental. Having a [inaudible 00:36:53] cybersecurity response capability is also going to be key.

[00:36:59] MC: That makes sense. I think it's interesting how all these things come back full circle. We started off earlier in the show talking about the move to DevOps and DevSecOps and how that can really help with the ATO process, especially the continuous ATO. It seems like there is a real, tangible benefit for agencies to move toward DevSecOps and security automation in the cloud, because it sounds like it would also help them achieve some of the goals of the CDM program. Is that accurate?

[00:37:34] RR: That is very much accurate. When we talk about DevSecOps, let's also talk about some of the fundamental security elements from a security posture perspective. There are a few things we need to consider. We need appropriate guardrails in our DevSecOps processes. We need the right baselined images: STIGs, for example. These are STIG templates, Security Technical Implementation Guides. Those templates essentially help you have a hardened virtual machine and provide you with the right level of security posture. Combine that with additional tools on those virtual machines so that you get detailed OS-level logging information, plus perimeter-level visibility like network-level flow logs and the transactions happening at the perimeter.

Having a web application firewall also helps, because it gives you insight into where the transactions are emanating from. If there are threats coming from certain regions or certain parts of the world, you can block those IPs or regions from attacking you. That reduces denial of service attacks, distributed denial of service attacks, and things like that. So we need to bake in a comprehensive security architecture with multiple layers, right from the host all the way to the endpoint, all the way to the edge. That nicely provides you with various levels of security.
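The region-blocking behavior a WAF provides can be reduced to a toy rule: drop requests whose source region is on a deny list. The IP-to-region table below is fabricated for the example; real WAFs consult GeoIP databases.

```python
# Toy sketch of WAF-style geo-blocking: allow a request unless its source
# IP maps to a denied region. Lookup table and region names are made up.

REGION_OF = {"203.0.113.7": "region-a", "198.51.100.9": "region-b"}
DENY_REGIONS = {"region-b"}

def allow_request(src_ip):
    """Allow unless the source IP maps to a denied region."""
    return REGION_OF.get(src_ip) not in DENY_REGIONS

print(allow_request("203.0.113.7"))   # True
print(allow_request("198.51.100.9"))  # False
```

Unknown IPs fall through to "allow" here; a stricter deployment might default to deny, which is the kind of policy decision the layered architecture above leaves to the edge.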

[00:39:34] MC: Ravi, I've really enjoyed having you on the program today. If listeners want to connect with you or learn more about what you guys are doing at GDIT, where should they go to find out more information?

[00:39:46] RR: Great discussion with you, Matt. I enjoyed the conversation. Our listeners can go to to learn more about GDIT's cloud, cyber, data, artificial intelligence, and machine learning capabilities. It has been a pleasure to chat with you, Matt.

[00:40:02] MC: Thanks so much, Ravi. Thanks, everyone, for listening.


[00:40:07] ANNOUNCER: Thank you for joining us for today's episode. To find out more, please visit us at