The Future of Cloud Security with VMware Carbon Black

Dec 9, 2020
Dan Mellinger
Senior Director of Corporate Communications

We welcome a special guest from VMware Carbon Black to discuss the state of cloud infrastructure and security, primarily through the lens of vulnerability management today, tomorrow, and far into the future.

Transcription

Dan Mellinger: Today on Security Science, the current and future states of cloud security. Thank you for joining as we discuss the state of cloud infrastructure and security, primarily through the lens of vulnerability management today, tomorrow, and far into the future. As always, I’m Dan Mellinger. And with me today, we’ve got Kenna Security’s head of science and data and chief prediction officer, Michael Roytman. What’s up, Michael?

Michael Roytman: Hey. I like those titles. It’s an upgrade.

Dan Mellinger: Yeah, I know. I had some fun coming up with those. We also have a special guest today. He has experience ranging from counterterrorism and physical security as a Marine to building security programs in almost every single industry imaginable. And I wasn’t joking; I was actually reading through this. Quite a few: retail, DoD, you name it, he’s done it. His friends actually call him Mike, but you can call him Rick McElroy. And he’s the head of security at VMware Carbon Black. How’s it going, Rick?

Rick McElroy: Oh, good, man. Dude, you totally have the podcast voice down. I’m impressed. It’s great.

Dan Mellinger: I’ve been working on it. It’s gotten a little bit better over time, but it’s mostly the microphones. I think that’s the-

Rick McElroy: I’m going to start listening more, it’s soothing. It’s good.

Dan Mellinger: Awesome. Just real quick, I did want to add a little caveat, because we like to be fully transparent on this podcast. Kenna Security works with VMware Carbon Black. We actually started working with VMware right before they announced they were buying Carbon Black in 2019. And we announced some cool new things we’ve been doing with them at this last VMworld, what, two months ago now, I think it was.

Rick McElroy: Yeah.

Dan Mellinger: Yeah. Feels like a lifetime ago at this point. But yeah, Kenna Security, we do some cool technical integration, some cool security scoring, and we’ll continue to do work with them in the future. So I just wanted to put that out there, so you know about our relationship. Outside of that, Rick, could you give us a little bit of background? You’ve been at Carbon Black for a while through the VMware acquisition, focused on security, you’ve been doing this for 20 years. What are you working on, and what’s your focus area lately?

Rick McElroy: Yeah, definitely. I have a weird story. I started my life as a Marine, got out and went to school. And then I was like, “Yeah, I love InfoSec. I’m going to start doing that, build programs.” Did a bunch of red teaming and vulnerability assessments. I think on the podcast of yours I was listening to, you guys were talking about SAINT, and I think there were some early Nessus discussions in there. And Retina, if you remember those folks. And inaudible scan along the way, and all of that good stuff. So, building programs, managing teams. Currently, my focus is really around prospects and customers, helping them mature what they’re already doing. I have a huge focus on automation and orchestration, because I think we still have that cyber skills gap, and it’s one of the ways to fill it. So I talk a lot about helping manage the burnout and stress of your teams, being more efficient, and then automating wherever we can.

Dan Mellinger: Yeah. That makes a ton of sense. And automation is a topic that is near and dear to our hearts here at Kenna Security. So excellent. Well, with that context in mind, let’s jump into the main topic, which is cloud infrastructure. Security for the cloud has been changing and evolving over time, specifically as it relates to vulnerability management. So I figured we’d start with Rick: what’s the current state of cloud security vulnerability management? How are teams and people thinking about this as they’re moving to, or starting to implement things in, the cloud?

Rick McElroy: Yeah. I mean, it varies, I’ll be honest. The largest organizations on the planet, I think, are probably doing the best. They’re able to actually drive metrics, and they have quality assurance teams, so it’s not always a bottleneck in InfoSec, and certainly not a bottleneck in IT. But we’re making changes to code, so you’ve got to put that through configuration management, change management, QA, all of that good stuff. And I think there are some opportunities there as well, to start looking at that pipeline from identification to remediation and helping it along. Generally speaking, in the cloud, I would say teams’ number one complaint is that they don’t have visibility and can’t necessarily even scan. It depends on the contract; it depends on the host. I think there are still discussions inside organizations about who’s responsible for something like a firmware patch: I bought the hardware, but it’s hosted in someone’s data center, so is it me, is it them? I think we’ve gotten a bit more mature on those questions, and certainly the largest players are pushing that ball forward, which is great, and beginning to be transparent about those processes, how they do them, and their time commitments on all of them.

Michael Roytman: I want to push sideways, not back, on the large players being the best at it. We did a research report with the Cyentia Institute looking at vulnerability patch rates; I think about 110 customers opted into it. And we found that the capacity for remediation stays at about 10% of your open vulnerabilities a month, regardless of the size of the organization. It was a mind-blowing chart, covering organizations with 1,000 vulnerabilities and organizations with a million or 10 million. The monthly close rate, measured in vulnerabilities, since sometimes one patch will close 10 vulnerabilities, stays at about 10%. Top performers do about 20% a month, capacity-wise. But the line of best fit is a straight-shot power law: as you get bigger, of course, your teams get better, but the problem gets bigger even faster. So capacity stays at about 10%.
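The capacity dynamic Michael describes can be sketched in a few lines. All numbers below are illustrative assumptions, not figures from the report; the point is that if monthly close rate stays near a fixed fraction of the open backlog, the steady-state debt scales with inflow, so bigger organizations carry proportionally bigger debt.

```python
# Sketch of the capacity finding discussed above: monthly close rate
# stays near a fixed fraction of open vulnerabilities regardless of
# org size. All numbers here are illustrative.

def simulate_debt(open_vulns, monthly_new, capacity=0.10, months=12):
    """Track the open vulnerability count when you can close about
    `capacity` of the backlog each month while new findings arrive."""
    history = [open_vulns]
    for _ in range(months):
        closed = open_vulns * capacity
        open_vulns = open_vulns - closed + monthly_new
        history.append(round(open_vulns))
    return history

small = simulate_debt(open_vulns=1_000, monthly_new=150)
large = simulate_debt(open_vulns=1_000_000, monthly_new=150_000)

# Both paths settle toward monthly_new / capacity: the steady-state
# backlog scales linearly with inflow, so a 1000x bigger inflow means
# a 1000x bigger debt even at the same 10% capacity.
print(small[-1], large[-1])
```

Raising `capacity` to 0.20 (the top performers Michael mentions) halves the steady-state backlog, which is why capacity, not headcount, is the interesting metric here.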

Rick McElroy: Was there any indication of the bottleneck on that? Why isn’t that larger?

Michael Roytman: Right. Well, we did some follow-up surveys to try to uncover what leads people to be top performers versus not. My gut tells me it’s a question of security risk versus technology risk. Some things are actual threats to an organization and need to be remediated, and those are the things security teams like to close out somewhat quickly. Some things are vulnerabilities that might even be within PCI scope, but they don’t really pose a risk: it would take an interaction of three or four vulnerabilities together, and you’d already have to be on the network. Some large banks we work with call those tech risk. That’s not even a security problem; we have to get to it eventually, but IT ops can get to it whenever they get to it. It might take three months, six months. It doesn’t escalate to a P3 or a P2 or a P1, where you would actually take security action. But my question to you is: if larger organizations have these systems figured out, and they’re not fully automated yet (that’s why we’re in the software business building these tools), what can smaller organizations, or ones that aren’t scanning all of their asset inventory yet, learn from those larger organizations, so they can leapfrog into a more automated world where their capacity might start out at 20%?

Rick McElroy: Yeah. I mean, number one is to think about the entire pipeline. It’s not enough to have one InfoSec person, or multiple InfoSec people, say, “Here’s the list of things.” That has to be discussed, because to your earlier point, just because it’s vulnerable doesn’t mean it’s exploitable, which is how I would view tech risk versus security risk. And then: what’s realistic in our environment? I think looking at the data sources you need to put that picture together is a good way to go. For a lot of smaller shops, simply put, they’ll probably run on someone’s cloud that handles it anyway, except for their endpoints. So for those types of shops, I say just turn on patching. It’s fine; Microsoft hasn’t broken anything in a while.
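The tech-risk versus security-risk split Rick and Michael draw can be sketched as a simple triage rule. The field names, thresholds, and CVE identifiers below are made up for illustration; they are not Kenna’s or Carbon Black’s actual scoring logic.

```python
# Illustrative triage of "security risk" vs "tech risk" along the
# lines discussed above. Field names and buckets are hypothetical.

def triage(vuln):
    """Return a priority bucket for one finding (a dict of attributes)."""
    exploited = vuln.get("active_exploitation", False)
    exploit_exists = vuln.get("public_exploit", False)
    reachable = vuln.get("network_reachable", False)

    if exploited and reachable:
        return "P1"   # security risk: act now
    if exploit_exists and reachable:
        return "P2"   # security risk: schedule this cycle
    return "P3"       # tech risk: IT ops handles it on the normal cadence

findings = [
    {"cve": "CVE-EXAMPLE-1", "active_exploitation": True,
     "public_exploit": True, "network_reachable": True},
    {"cve": "CVE-EXAMPLE-2", "public_exploit": True,
     "network_reachable": True},
    {"cve": "CVE-EXAMPLE-3", "public_exploit": True,
     "network_reachable": False},  # attacker would need local access
]

print([triage(f) for f in findings])  # → ['P1', 'P2', 'P3']
```

The third case is Rick’s “they have to come on-site and stick in a USB” scenario: a real vulnerability, but one that mitigating context demotes out of the emergency queue.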

Michael Roytman: I feel like you’re baiting me here. You’re saying except for their endpoints.

Rick McElroy: Right.

Michael Roytman: We should go there. We should go there now.

Rick McElroy: Yeah, I mean, generally speaking. Look at modern startups: they have no infrastructure. Their infrastructure is a laptop. Those are the assets the company owns. Everything else is a subscription model. And so-

Dan Mellinger: Yeah. A service from Google or VMware or whatnot, right?

Rick McElroy: Yeah. And in a lot of cases, if it’s a tech startup, there’s probably someone there actually doing the support, or maybe they have a friend providing IT support. But my general advice to those types of organizations is: turn on automatic patching wherever you can. Again, I’d rather troubleshoot a bad patch than a malicious actor on an endpoint. But as you start to mature... So let’s take this example, because I like it: a company is successful and starts to grow. Now they’ve got hundreds of people, and they’re probably running infrastructure in multiple cloud environments. They’ve probably hired an InfoSec person to get an assessment: “What do we have? What’s our footprint? What are the things we’re going to need to address immediately?” Maybe regulations are coming in at this point for whatever their business model is, which generally dictates that you do something like vulnerability assessment and management and then have a path for maturity. But if you start to think of all the areas, and I put myself in their shoes, having built programs like this, thinking of all the areas where it’s like, we’re down, I need humans. I need the humans to actually analyze the final data set: this group of systems poses this group of risks, and is this risk actually going to lead to an exploit? I do need some humans to come and look at that today. Call that a steering committee, collaborative meetings between information security and a certain business inaudible systems, all that good stuff, but-

Michael Roytman: Well, I think the key there is that it’s a committee of security and the business, probably IT as well. I think what a lot of organizations get wrong, and not to toot our own horn, but I think why the Kenna Security and VMware partnership is especially important, is that that final data set is usually defined as the complete list of your inaudible or Tenable outputs. And that is a whole bunch of noise that overwhelms most folks; a committee isn’t going to go through that. The final data set has to be something tractable and manageable, with intelligence already baked in, small enough to put people on, because like you said, you’re going to have to use a committee of people, and that’s your most expensive resource.

Rick McElroy: Well, and think about the time. Time’s the finite resource. I say this to teams all the time: what are you spending your time on? Do I want to know? I don’t want to know. I want an intelligent system that gives me that output, gives me the data I need, so I can sit down with a team and explain it to them clearly. They’re going to push back in a lot of cases, especially if it’s their own code that they’ve written. And then I think it’s working with IT to make sure I have resilient systems. It’s one of the cool things about Carbon Black coming over to VMware: starting to look at this problem and saying, “Well, look, if I have enough orchestration inside of a data center,” ransomware prevention, detection, and response looks a whole lot different. I would argue vulnerability management in that world starts to look different, because my risk of actually taking down a production system, if I’m doing it right, is very low. I have the ability to clone applications and move them to multiple data centers. So I think security actually learning from the IT teams on resiliency is super beneficial to us as defenders.

Dan Mellinger: That’s actually really, really interesting. Because normally we’re thinking about, to your point, throwing IT and security people in a room with a big list of things to deal with, and now you’re negotiating: “What can we do? Why should we do it? We don’t have enough time for this; this represents too much technology risk,” yada, yada, yada. And normally it’s security trying to dictate to IT: “This is what you guys need to do, and here’s why.” And you’re saying there’s actually an inverse relationship, like, “Hey, why don’t you take some of our lessons learned on creating more resilient systems, things that we can afford to turn off if there’s a security issue and spin something else up instead?” Is that-

Rick McElroy: Yeah, I’ve made-

Dan Mellinger: …what you’re getting to?

Rick McElroy: …a ton of mistakes along this journey. I didn’t get to this point without making a huge number of mistakes. I’ll give you an example. I started at a new organization, and we had no idea what we had. They went from 100 employees to like 10,000 in a ridiculous amount of time. It was just crazy. It was awesome growth, I got to be part of it, all that stuff. So imagine: you have decent relationships with IT, you go out to coffee with them and do the social engineering things that we do in security to try to get them to do some security work. And then you buy a vulnerability scanner, because you need the info; I’m a big believer in known state and ground truth. And now I’ve got a list of things. And I’m smart and my team is pretty smart, so it’s like, “Well, we’ve got to get these things out of our work bucket and into someone else’s work bucket. We did our job; we assessed everything in the environment.” “Okay, how can we automate it?” “Cool, let’s write some ServiceNow code that does some automation and instantly stuffs them into tickets for the IT team.” “Well, what was the overall impact of that?” Their metrics and dashboards were hosed. So you can imagine the political capital.

Dan Mellinger: Yeah.

Rick McElroy: Or I should say the political will that comes with that. Look, our intention was not... My intention was to make people aware, drive an outcome, have discussions around 30, 60, and 90 day patching cycles and exceptions, and all that good stuff. But I think there are some better methods than brute forcing 10 or 15,000 tickets into someone’s crosstalk-

Dan Mellinger: So you’re admitting that you have opened ServiceNow ticket floodgates on IT teams before?

Michael Roytman: How many-

Rick McElroy: Yeah, absolutely. And I’m still friends with them.

Michael Roytman: What’s the largest number of tickets that you’ve opened?

Dan Mellinger: I’m surprised. No, they appreciated the coffee. Yeah, no, that’s definitely interesting. I mean, going back to that footprinting thing: I know from our perspective, even looking at CVEs to prioritize, it’s not always intuitive, and sometimes you want a human in the mix to interpret the results. So what’s the current status of automation today when it comes to this kind of prioritization? Maybe this is better for you, Michael. How much of that can we do today from an automation standpoint without overwhelming IT teams?

Michael Roytman: So I think the better question is: where should the automation be? I’m hearing this story and thinking, “Well, you were doing the right thing. You were essentially creating a situational awareness report for the entire organization; it’s just that the situation looked pretty bad.” So I’m going to coin a term, and it’s going to be on the RSA show floor all of next year, if there is a show floor: intelligent awareness. Because there’s a strategic way to drive awareness where more things get done, even if you’re not presenting the entire landscape of 200 million vulnerabilities across your organization. So some of the automation is on that piece: we have 200,000 findings for these 10,000 assets, but we’re only going to present these 2,000, because we did some automation and intelligent analysis of the situation. It reminds me of when I helped out a friend with an Air Force startup that did radar data analysis. If you look at the radar around a US city, there are thousands of planes flying in the sky every day. 99% of them are normal air traffic that nobody should be worried about. Once in a while, data starts to emerge indicating that maybe this one is taking the wrong trajectory. Maybe it’s a hobby drone in the wrong place. Maybe this plane’s approach is a little off. So there are these little indicators, intelligent data, that can tell you, “Hey, you don’t need to look at every plane in the sky; you can look at these 100, and then let’s bring those to some people who are actually going to act on them.” The automation comes from machine learning algorithms analyzing the trajectories of airplanes. It’s a physics problem. The same thing exists in security: there’s a hard, scientific component to what’s the probability this thing is going to get exploited. Do exploits exist? What’s the chance one gets written? Is the code already being exploited somewhere else? That problem really doesn’t need human input. It needs intelligence.
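The filtering step Michael describes, surfacing only the slice of findings worth human attention, can be sketched with a stand-in scoring function. The base rates and weights below are placeholders, not a real exploit-prediction model.

```python
# Sketch of "intelligent awareness": score 200,000 synthetic findings
# by an assumed exploitation probability and keep only the urgent ones.
# The scoring function is a placeholder for a real trained model.

import random

random.seed(7)

def exploit_probability(finding):
    """Placeholder score; a real system would combine exploit-code
    availability, observed in-the-wild exploitation, and more."""
    p = 0.02                          # base rate: few vulns ever matter
    if finding["public_exploit"]:
        p += 0.30
    if finding["exploited_in_wild"]:
        p += 0.60
    return min(p, 1.0)

findings = [
    {"id": i,
     "public_exploit": random.random() < 0.05,
     "exploited_in_wild": random.random() < 0.02}
    for i in range(200_000)
]

urgent = [f for f in findings if exploit_probability(f) >= 0.5]
print(f"{len(urgent)} of {len(findings)} findings need a human")
```

Like the radar example, almost everything scores as normal traffic; only the small in-the-wild-exploitation slice crosses the threshold and reaches the committee.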

Rick McElroy: Well, what about the chain of: do I have mitigating controls upstream?

Michael Roytman: Right.

Rick McElroy: Which is where the humans come in to go, “Well, wait a second, we’ll move this from a high or a critical down to a medium, because I have upstream defenses against remote code execution.” Okay, so let’s assume they have to come on-site and stick in a USB, stick a... Awesome. As far as I’m concerned, for an enterprise security program, that’s a win. So I do think you’re right. Generally speaking, and it’s interesting, I’ve been having some other automation discussions, but it’s plateaued out there inside of InfoSec. We’re taking some bidirectional actions with some tools, and we’re automating tons of data analysis. But generally speaking, inside of teams, what they’re doing is automating the workflows.

Michael Roytman: Yeah, yeah. I was just thinking that as you were saying it: you can automate cutting tickets all you like, but did that really increase your capacity, or did you just cut more tickets?

Rick McElroy: Yeah. And then, honestly, you’re just moving the noise into someone else’s bucket. Because now some overwhelmed vSphere admin is like, “Well, wait a second, these are all critical?” And then you get into that panic mode. I think it’s more appropriate to have a discussion up front around: what is the management of this going to look like? What are the standards we’re going to operate under? What are the goals we’re going to target? And then try to get some wins along the way to prove it. So maybe take some easier environments to prove out the model before you go to the production application that makes all the money in the company. IT will have more confidence in what you’re trying to do, and I think your team will too. And then you won’t lose credibility on that day when you walk in and go, “Well, wait a second, EternalBlue. This is real. BlueKeep. This is real.” Those are probably things you might have to move on. And this is what’s awesome about Kenna: they give you the ammo to be more accurate about your approach to IT when you have to go in and say, “Yo, we’ve got to do emergency patching, and it’s today.”

Dan Mellinger: Yeah. Well, I think with Kenna, one of the things we found with IT teams is that when they’re all working off the same data set, it doesn’t matter who finds it. BlueKeep is a good example: “Hey, can I flag this? It popped up.” And then the overloaded vSphere admin goes and looks in his Carbon Black interface and says, “Oh, wow, yeah. Okay, we need to patch this. I can see that. I understand; we’re all working off the same data set here. I can go do that.” So now it’s more of a nudge, if anything, or IT and admins are empowered to do it on their own.

Michael Roytman: I think it’s a natural shift in the philosophy of how we build security products, too. 20 years ago, 10 years ago, we were really concerned with building the right sensors so we could capture the right data. And I think a lot of organizations are still stuck in that mindset of how can I detect more? How can I show more to the security practitioner? But that’s not the problem security practitioners are facing. They’re facing the problem of how do we show IT ops only and precisely that which is actually risk? How do we cry wolf less? That’s a signal-to-noise problem, not a data collection problem. And of course we should keep getting better at data collection; we just need to understand where it fits into the life cycle of vulnerability management.

Dan Mellinger: What’s interesting, too, is I think part of the catharsis teams can reach is like what Rick said: “We automated this, and we accidentally cut way too many tickets.” But having security understand: “Hey, yeah, we know. We’ve been in data overload. I’m sorry, my bad, but we have this new process. It’s based off of this, this, and this; we can all come to the same conclusions together. Let’s work together on this.” That’s really powerful. I think that’s cool. And we’re actually starting to move now, I think, into this near term of removing humans from the remediation loop. That’s where we’re trying to get near term, right? I think.

Michael Roytman: Okay. So Rick, a question for you, then. Where can we remove humans from the remediation loop? You’ve run programs before, and you see how your customers are using your current products. Is there a segment of the infrastructure where folks are automating the patching successfully? Is there a specific type of organization that’s doing it well? Is there a specific type of, I don’t know, technology or infrastructure that you just can’t do that on, because it’s too critical?

Rick McElroy: Yeah. I’ve seen a high amount of success in teams that have a DevOps mentality. And I know we’re on a security podcast, so we’re probably going to hear about it for saying DevOps. But the reality is-

Dan Mellinger: Hey, shift left. Shift left.

Rick McElroy: Yeah, look, I deal with reality, and it’s a reality you have to deal with. When you start to look at some of these tools that are out there: we’re already doing smoke testing. Is my application up? Okay, that’s a very rudimentary way of maintaining availability, but how much more granular can I go with that? Awesome. Well, if I have Python coders, and I have some DevOps folks who can start to break the application apart and test the individual pieces, then I say, why not? Because all the orchestration tools are there from an IT perspective, whether you’re using SCCM to deliver it, whether you’re using an MSI push, whatever you’re using to do that. That’s all there. And so then I go, “Well, look, I can orchestrate data centers moving based on the failure of a network connection.” Okay, well, if I can do that, then I can clone systems. How many of these systems can I clone into VMworld? Infinity times.

Dan Mellinger: Yeah.

Rick McElroy: So if I can clone a production server, and I can make that look like, now I have a dev environment, I have a QA environment, the proper things you should do anyway to do testing, well, can I start to swap those? Meaning, I’m going to walk my patch through my test environment, I’m going to walk it through QA, I’m going to walk it into non-prod. And then can I just swap non-prod into prod if it’s working? I say the tools exist to do that. Now granted, you’re going to see a lot of that come from very large shops; they have the engineers to do it. So then I say, okay, commercially, what do we need to do to make this available and democratize it for everyone? Well, we vendors have to get together, and I think we need a little bit of an evolution in what we’re doing with APIs today. A little bit. And I think that would help. But yeah, there are tons of opportunities to test your code along the way. I’m not going to plug any specific vendors, but dynamic testing exists, there are some great quality assurance tools, and a lot of this has been built into the IT management tools.

Michael Roytman: Well, I think what you’re talking about is that the tooling has the capability to support any kind of workflow you might have. In one workflow, and I actually see this all the time at our customers, patch verification is the bottleneck in automating remediation. We might have identified 100 vulnerabilities with exploits across your network, and you want to auto-patch them. The thing that’s stopping large customers from doing that is another process, which is largely manual: testing that patch, verifying it, and then deploying it. You just walked us through an excellent use case for how we can automate that. I think a lot of vendors have built the tooling to allow folks to build their processes. But now is the time when we have to walk backwards and say, hey, we know that 90% of organizations have a patch verification process. Can we automate 80% of that patch verification process for them? Certainly on, let’s say, Windows systems.
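The clone, patch, test, promote flow Rick walked through could be orchestrated roughly like this. Every function here is a stub standing in for real tooling (vSphere-style cloning APIs, SCCM or MSI delivery, smoke-test suites); the names and patch identifiers are hypothetical.

```python
# Rough orchestration of the clone -> patch -> smoke test -> promote
# flow. Each step function is a stub; in practice these would call
# cloning, patch-deployment, and test tooling.

def clone_environment(prod):
    """Stand-in for cloning a production system into staging."""
    return {"name": f"{prod['name']}-staging", "patched": prod["patched"][:]}

def apply_patch(env, patch):
    """Stand-in for SCCM / MSI-push style patch delivery."""
    env["patched"].append(patch)

def smoke_test(env):
    """Stand-in for real checks: is the app up, do key endpoints respond?"""
    return True

def promote(env, prod):
    """Swap the verified clone's state into production."""
    prod["patched"] = env["patched"][:]

def rollout(prod, patches):
    """Walk each patch through a clone and promote only if tests pass."""
    applied = []
    for patch in patches:
        staging = clone_environment(prod)
        apply_patch(staging, patch)
        if smoke_test(staging):
            promote(staging, prod)
            applied.append(patch)
        # else: leave prod untouched and flag the patch for a human
    return applied

prod = {"name": "billing-api", "patched": []}
print(rollout(prod, ["patch-2024-001", "patch-2024-002"]))
```

The design point is the one Michael makes: production is only ever touched by a swap after verification succeeds, so the manual bottleneck reduces to reviewing the failures.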

Rick McElroy: Yeah. I think sometimes we’re the bottleneck. Everything’s on its own maturity arc. But I’ve been really thinking about why we’re, and I’m going to say stuck overall as an industry, in security, with automation. Some of it might be IT admins who go, “Oh, if I do that, I’m going to lose my gig.” But I can just say, whether it’s data center automation or security testing, I haven’t been laid off because I automated something. In fact, they just give you more work to do. So I hope, if there are people listening who run IT operations: don’t be fearful that you’re a patch manager today and you’re going to lose your job. I can almost guarantee the company needs you working on more important digital transformations than patch management.

Dan Mellinger: That’s interesting, because we were all chatting on this line before we hit record about what’s hard for businesses: scale and logistics. If you’re able to enhance scale and/or make logistics work more smoothly because you can automate things, effectively you’re going to have a valuable position in any organization. Because literally the hardest thing to do as a business is to grow beyond that.

Michael Roytman: Not to mention that we just talked about the capacity constraint. So if an organization’s capacity is really 10%, automation isn’t something that means you’re no longer needed; it’s quite the opposite. It means we need you to scale that 10X, on average, relative to what it was before, and automation is only one piece of that.

Dan Mellinger: Yeah, that’s super interesting, Rick. I haven’t heard anyone articulate it quite that way before.

Rick McElroy: This is a hard question. But do you feel like, if that was my team’s 10%, and I took that 10% and said, “I’m going to stop doing this work and spend that time on some automation stuff,” do you generally feel like that would yield more fruit as a path? Or should you look at maybe dedicating 1% of the time? For somebody approaching an automation-first mindset, it’s hard if you’re not doing a greenfield build, right?

Michael Roytman: Yeah.

Dan Mellinger: Yeah.

Michael Roytman: It’s much harder to renovate a house than to build a new one, for sure.

Rick McElroy: Yeah. My advice was always: if you did it five times and the results were the same, it’s probably a good candidate to automate. But that’s very rudimentary advice.

Michael Roytman: I want to be controversial and say, yeah, just stop patching and go automate, and you’ll get the results and it’ll pay dividends later. But that’s not the reality for a lot of organizations. And as we were talking, I was thinking: okay, so what’s the real breakdown? How much do you need to manage 20% of the risk? What’s the 80-20 rule here? Well, we know about 2 to 3% of vulnerabilities actually have active exploits. What if your current team’s capacity is 10%, and you shrunk that down to 5 or 6% for six months, and you used those six months to do an automation deployment that then increases capacity to 25%? If you look at the integral of that function, you’ve just increased your capacity by 2.5X.
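Michael’s back-of-envelope can be made concrete. All figures below are assumed for illustration: a team that stays at 10% capacity versus one that drops to 5% for six months to build automation and then runs at 25%.

```python
# Back-of-envelope for the trade-off discussed above. Units are
# "percent of backlog closed per month"; all figures are illustrative.

def cumulative_capacity(monthly_rates):
    """Running total of work done: the discrete 'integral' Michael means."""
    total, series = 0, []
    for rate in monthly_rates:
        total += rate
        series.append(total)
    return series

months = 24
status_quo = cumulative_capacity([10] * months)
invest = cumulative_capacity([5] * 6 + [25] * (months - 6))

# Find the month where the investment path pulls ahead for good.
breakeven = next(m for m in range(months) if invest[m] > status_quo[m])
print(breakeven + 1, status_quo[-1], invest[-1])  # → 9 240 480
```

Under these assumed numbers the investing team falls behind for eight months, breaks even in month nine, and finishes the two years having done twice the total work, with a go-forward rate 2.5X the old one.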

Dan Mellinger: Yeah. Well, I was also going to bring up, Michael, the data that you cited at the beginning: that 10% was a company’s ability to reduce vulnerability debt, to patch more things than came in during any given 30-day period. But that data set’s a little biased; it’s Kenna customers, so that’s who we were looking at. And when we broke it down, the biggest correlating factor was setting SLAs and using Kenna as your risk-based metric. So there’s an argument to be made that 10% is a baseline for the earliest types of programs, people, and systems, and that the 2.5X, the 25%, came from people who were more likely leveraging more automation, via Kenna and otherwise, and also setting processes around it. They were setting up stronger programs from the get-go and leveraging automation, and that was yielding the 2.5X increase in productivity.

Michael Roytman: I don’t want to say something bad about our product, but I don’t think it’s because our product is very mature on the automation scale yet.

Dan Mellinger: No. No, no.

Michael Roytman: The VMware Carbon Black partnership is probably our first foray into really doing that, and it’s early days. So I think what you’re seeing in that data set is that the folks who are leveraging risk-based vulnerability management and prioritization aren’t playing the vulnerability-debt game. They’re not playing the we-fixed-10,000-out-of-a-million game; they’re playing the we-know-that-only-a-small-subset-is-responsible-for-most-of-the-risk game. So almost by default, they have more time to devote to other automation activities, whether within Kenna or somewhere else. In the future, I hope we can automate some of that process for them. But today, I think we’re just giving them back time to spend on increasing capacity elsewhere.

Rick McElroy: Yeah, that’s a super good point. As an example, we have the number one application control product on the market, so we see this as well. Generally, I would say it’s applied to legacy operating systems and things there’s just no support for anymore. But we’ve actually seen tons of adoption where CISOs are like, “I need to buy myself some time.” It’s this idea of, “Hey, I can actually harden this thing, I can inaudible the attack surface out, and then I have time for a manufacturer to come up with a software update, because maybe it’s an ICS system, or time for my teams to rewrite some code.” So I always keep that as a handy tool in the toolkit: there are other mitigating controls that will buy you the time your team needs. So it doesn’t become as urgent when the rest of the world is getting hit by Heartbleed, and you can go, “Boss, we’re good. We had that covered, for these reasons.”

Dan Mellinger: Well, I mean, I think it’d be a good time, actually, let’s transition. What does five years look like within this scenario? Where are we going next in regards to SOAR systems or SIEM, automation, pulling people out of the loop as much as possible and building that scale into VM, cloud-based security, all that fun stuff?

Rick McElroy: I think I’m going to keep my optimistic hat on for this one.

Dan Mellinger: I love the optimistic hat, that’s a fun one.

Rick McElroy: I think we have enough data as an industry, to Mike’s earlier point, we have enough data to determine the bottlenecks. And so then I go, “The bottlenecks are ripe for automation.” So it’s like automating identification at this point, that’s table stakes, I’m sure you guys probably know that. You can’t not have some automation when it comes to detecting systems and vulnerabilities. But cool. Okay. How can we provide more meaning behind the CVEs? Because I think the data exists to start to put this together. And this is where I’m really interested in picking Mike’s brain. I’m not a data scientist, but I hang out with them sometimes. It’s this idea that I have these large data sets, because it’s either my data lake or someone else’s. Generally speaking, and I’m going to look at this with my Carbon Black hat, from an endpoint perspective, I have so much data on an endpoint already. I mean, I’ve got telemetry, I’ve got process data. So how can I use that data? On our side, we’re going to look at malicious and abnormal. That’s generally where we’re going to apply that. But I’m keenly interested in this idea of probability models. Meaning, if I’m a business owner, or if I own a car or a house, it doesn’t matter the thing. I just want to know, what’s the likelihood that someone bad is going to walk by with a crowbar? Because then I can make some intelligent decisions on that. I can choose to put in better locks. I can choose to park inside my garage instead of on the street. But if I don’t have the data to get that awareness, I don’t know where to shift that 10% of time when we know it’s precious. And then, oh, shit, this matters.

Michael Roytman: I remember a discussion I had with Ben Johnson, who’s one of the founders of Carbon Black.

Rick McElroy: Yeah, yeah, yeah. Yeah, Ben over at Obsidian now.

Michael Roytman: Right. It was, I want to say eight years ago, seven years ago? This was in an incubator in Chicago, when Kenna was maybe 15 people and Carbon Black was maybe 25, back in the day. And we were talking about, and this never came to fruition, I mean, it did now, six years later: what would it look like if a customer had access to everything that Carbon Black was, at that point, providing on-premise and in the cloud, and then piped it back into Kenna? What could we do with that data? So what you’re saying is absolutely right. If I look at the state of a system, I know that there are technical details about that system. But there are also time-based inferences we can make about that system based on past probabilities, based on that system and on systems that have looked like it and have gotten breached in the past. So today, most IT ops software defines a system as its current state, not its future state. And I think that can change. But the real value of that is to walk backwards and say, “Carbon Black has found an intrusion has occurred. Now you’ve got to go do something about it.” The thing that most analysts would do is then look at that system and say, “Based on these states in the past, there were 400 different ways this malware could have gotten in. I’ve got to figure out the right one, I’m doing an investigation now.” And this is the most expensive time on the detection and response side. It’s not the SOC analyst time, it’s the investigation time. What if instead, you could support those decisions for those investigators by saying, “Hey, there are 400 different things that have happened over the past six months on this machine that could have been the thing. But based on our intelligence data, and based on our probabilistic model, we think the highest probability is these five Adobe Reader vulnerabilities.”

Dan Mellinger: Start here.

Michael Roytman: Yeah. If you could save them that week of initial investigation time, that’s a near-term goal. I don’t think that’s five years. I think that’s two. But that’s marrying the data that we have on the vulnerability management side with the data that you have about a system’s current state, and essentially building a model for decision support, not on remediation, but on investigation. So it’s flipping the script.
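The decision-support idea Michael describes — ranking the hundreds of candidate events on a machine by estimated exploitation probability so the investigator knows where to start — could be sketched roughly like this. All CVE IDs, probabilities, and function names here are hypothetical, not from any actual Kenna or Carbon Black model:

```python
# Hypothetical sketch: given the vulnerabilities observed on a compromised
# host and intel-derived exploitation probabilities, return candidates in
# highest-probability-first order so the investigator starts there.
# CVE IDs and probabilities below are invented for illustration.

def rank_candidates(observed_cves, exploit_probs, prior=0.01):
    """Sort observed CVEs by estimated exploitation probability, descending.

    CVEs with no intel get a small default prior rather than zero.
    """
    scored = [(cve, exploit_probs.get(cve, prior)) for cve in observed_cves]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# 400 events might be observed; intel says only a few are likely vectors.
observed = ["CVE-2020-1001", "CVE-2020-1002", "CVE-2020-1003"]
intel = {"CVE-2020-1002": 0.62, "CVE-2020-1003": 0.08}
ranked = rank_candidates(observed, intel)
# ranked[0] is the highest-probability starting point for the investigation
```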

Dan Mellinger: Do you still use Adobe Reader, real quick?

Michael Roytman: People still get hacked on Adobe Reader. On a daily basis.

Rick McElroy: All day. Yeah, all day, it’s amazing.

Dan Mellinger: No. It’s funny, we’re doing this little project for 10 years of vulnerabilities. So I’m looking at all the top-rated CVEs that we’ve ever rated. And I’m looking at 2010 and 2011. And it’s all Office, Windows Server, Adobe Flash, Adobe Reader. That’s all it is. And lately, not so much. I guess people are starting to transition to just opening PDFs in Chrome or whatnot. But yeah, it’s a side note.

Rick McElroy: You’d be surprised.

Dan Mellinger: Yeah. That’s interesting.

Rick McElroy: There’s a lot of law firms handling documents insecurely.

Dan Mellinger: That is very, very true. Yeah, I think one of the top ones in 2018 is a Foxit remote code execution. Anyway. So Michael, you’re talking about basically just saving time. Giving someone a lead on where to start so they don’t have to go investigate 100 other possibilities?

Michael Roytman: Yeah. I mean, look, a machine learning model is absolutely useless unless it provides decision-support value to somebody. Either it’s making the decision for someone, and I think that’s risky when it comes to security, or it’s supporting their decision. We’ve spent 10 years at Kenna supporting the which-vulnerability-should-I-remediate decision. Building models that tell you, this is the one that’s likely to cause compromise. That’s not the only place where people make inferential judgments about vulnerabilities. Now that we’re walking into the IT ops, detection and response side of the house, the evolution from EDR to XDR, you can look there and think, “Well, okay, this is an amazing tool. But what do people use this tool for?” There’s a set of decisions they make, and consequently, there’s a set of machine learning models we have to build that save them time. And I don’t care if we save them 20% of the time or 90% of the time, those models will get better over time. The key is that we now marry these two data sets and start building those models. And I think we’re in a good place to do that. I think a lot of the industry has the data to do it. I think it’s a combination of the right data sets, the right place, and then the gusto to actually go and build those models and deploy them to customers.

Dan Mellinger: Interesting.

Michael Roytman: We just came up with a product roadmap for 2021 if anyone is listening.

Dan Mellinger: Oh, apparently. Don’t tell anyone. Oh, wait, this is public. Real quick, you mentioned EDR’s evolution to XDR. What’s XDR? What’s the difference?

Rick McElroy: Yeah. So I mean, I can give you the official definition, it’s Extended Detection and Response. Here’s the way I put it: it’s what I need to do detection and response. So you’ve seen niche products in that space. Network detection and response, endpoint detection and response. What XDR really represents is a flag for vendors, for teams, as they’re starting to think about detection and response holistically. I need multiple camera angles if I own a bank and I have a vault. I need the street camera, I need one when they come in the door, I need one in the vault. All of these things. And then someone watching all that stuff. So essentially, all it represents is being able to contextualize this picture of what the attackers will do. And then on the vendor side, it’s what all of the vendors are working on. So we’re working on it, a bunch of our competition’s working on it. It is: who can have all of these disparate controls talk the same language and then start to drive action in a meaningful way? So moving away from things like API calls. So that, at least in a VMware world, ours is just going to be software code that runs as part of vSphere. So that’ll represent how EDR starts to get into vSphere, how we start to integrate that with things like Lastline and [inaudible] and all the other products. But really, all it really means is… And if you’re an operator of a SOC today, you’re probably already doing a bit of XDR. You’re logging Windows event logs, you’ve got data sources that are on the network, you’re putting them into a SIEM. But what’s cool about XDR is folks are going to start working on XDR analytics. So again, to Mike’s earlier point, analytics have been applied in all of these different niches. So I’m going to apply it over here for this purpose.
But now it’s like, oh, let’s holistically look at all of these TTPs, the entire chain, and then be able to put analytics on it, which does represent, in my humble opinion, and I know people hate the term XDR, but I’ll draft behind great marketing language to get better security. I think it does represent that great push forward. Because the vendors now have to think about this and say, “Well, why don’t we just build this into the operating system? Why don’t we just build this into the infrastructure itself instead of bolting all this stuff on?” And yeah. So I’m hopeful. I’m really hopeful about it.

Dan Mellinger: That’s interesting. So leveraging the tools that already exist, so long as we can piggyback off of them?

Rick McElroy: Yeah. They just talk different languages. It’s like, yeah, we got together on threat intelligence sharing. And generally, that’s the same thing.

Dan Mellinger: STIX/TAXII, right? That’s how everyone communicates.

Rick McElroy: Yeah. But it’d be cooler if machine language just spoke threat language, and then a machine could go do something about it, which I think speaks to the future, future discussion that we have teed up.

Dan Mellinger: Yeah. Well, I know we’re getting short on time. So I don’t want to miss on this. Because you brought up some cool stuff when we were prepping for this, 20 years from now, what do you see? We’re talking about automation-

Michael Roytman: That’s such a tiny question, Dan.

Rick McElroy: I’m retired at that point. I’m not the old dude in the room still screaming about passwords.

Dan Mellinger: He’s just trying to get some stock that’s on me, that’s all.

Michael Roytman: The old man screaming at the cloud, ” Two factor authentication?”

Rick McElroy: Yeah. I do work for a very large virtualization company. And I would say this conversation predates, I think, the acquisition by VMware. But just something that, I think, if you follow technology, is the idea that we’re abstracting all the things. Modern operating systems, why can’t I just do that in a container? So when I start putting on that hat, I start thinking about things like, well, kids 10 years from now aren’t going to care if it’s a Windows OS or an iOS. If you ask a programmer today, they’re like, “Oh, I’m a full-stack programmer in AWS.” Oh, so you know Linux? “No, I program in AWS.” This is how we’re crafting education, all of this stuff. So I think we’ll see further abstraction. I think this idea of software-defined security is very interesting, especially if you control infrastructure and create infrastructure services. That’s intriguing to me. Because it actually represents a way to take a lot of this bolted-on security stuff that we put in. Deception is an example. I want an operating system that does deception as part of it. I just want that. And then of course, the attackers will innovate, yada, yada, yada. But I think we do need to leap forward in how we think about operating systems and our security stance, for sure.

Dan Mellinger: Michael, any thoughts from your end as we close this up?

Michael Roytman: Yeah. I mean, I’m thinking about all of this instrumentation that we’re building now. I think Rick’s point is spot on. It’s how people use it that ultimately makes something more or less secure. And I think we’re just now learning some of those use cases. Some of them are coming from other vendors entering the market, like analytics on top of XDR. But there is a complete picture. There is a complete set of sensors that gets you everything you need, the operating system of security being: let’s capture everything that we need to then make changes to that system, proactively or reactively, whether it’s vulnerability management or the SOC. It would be awesome if we could do that as software. If a security engineer didn’t have to cobble together 14 different tools in order to then translate that somehow into a scripting API call. If instead, they could issue commands the way that we do in Python or Ruby. 20 years is a long time. I’m hopeful that all of this hard work that we’re doing now, which is the minute signature capture, translation of software to vulnerability, millisecond timing of the state of a system, all of that is the groundwork, and on top of that will actually be the language of actual security. Hopefully.

Dan Mellinger: Interesting. Yeah, that takes me back to our risk episode that we did. I mean, when we think about it, technology as we know it today is relatively new. We don’t have a ton of data on it yet. And so it’s humbling to think that we’re actually collecting and trying to make meaning out of that data right now. That’s super interesting to think about. Any closing thoughts, Rick, last thing, before we sign off? I know we’re a little bit over on time.

Rick McElroy: Oh, it’s the end of the year, I hope everybody made it through safe. I just wanted to thank everybody for all their hard work. It’s very tough being behind the scenes, but committed to protecting everyone’s data, keeping them safe, and keeping patients able to be treated. So I just always thank the community, because we’re pretty awesome. And there’s a whole lot of stuff that goes on. Thanks.

Dan Mellinger: Absolutely. Well, we appreciate that. And we appreciate you being on the show here, Rick. I’m going to go ahead and link to your Twitter account, by the way, so if people want to yell at you about this episode, they can do it directly.

Rick McElroy: Awesome.

Dan Mellinger: You can always yell at me directly as well. Michael’s pretty prolific. I’ll also link to VMware and Carbon Black and the XDR page, just why not, because I think it’d be good content-

Rick McElroy: I appreciate that.

Dan Mellinger: … to have as well. Yeah, absolutely. And we thank everyone for listening. Have a great day.
