On this episode of Threat Vector, host David Moulton sits down with Andy Piazza, Senior Director of Threat Intelligence at Unit 42, to unpack the good, the bad, and the ugly of AI in security. We explore how AI is accelerating detection and response, where it’s already saving thousands of analyst hours, and why human-in-the-loop still matters. We also examine the darker side: LLMs in command-and-control, deepfake-driven fraud, model drift, and data governance blind spots. For security leaders evaluating AI, Andy shares practical questions to cut through hype, real metrics that matter, and a blueprint for building trust. This conversation is essential for decision-makers aiming to secure AI everywhere while strengthening identity controls and SOC workflows.
Full Transcript
[ Music ]
David Moulton: Welcome to "Threat Vector", the Palo Alto Networks podcast where we discuss pressing cybersecurity threats and resilience and uncover insights into the latest industry trends. I'm your host, David Moulton, Senior Director of Thought Leadership for Unit 42.
Andy Piazza: The stuff that you struggle with is the stuff that sticks in your brain. So if we're making everything easier, we're going to be less intelligent. Then there's also the argument, like, Oh, well, we can use AI, but you're still going to need your experts. We're going to empower our experts. What happens in five or ten years when we don't have any new experts getting promoted up and those guys are retiring out because we've replaced all of our juniors with AI? We already have a huge staffing problem in the industry. It's hard to skill up on all the things you need to know in cybersecurity. And now we're going to get rid of the very few junior jobs that we had and replace those with AI? We're going to be stuck in five to ten years. [ Music ]
David Moulton: Today I'm speaking with Andy Piazza, Senior Director of Threat Intelligence at Unit 42. Andy is a seasoned threat management professional with over two decades of experience spanning security operations, cyber threat intelligence, and malware analysis. He's also known for his leadership roles at IBM X-Force, BSidesNoVA, and the US Army, and holds numerous certifications. You may know him as a DEF CON Goon and a frequent speaker at threat intel conferences across the country. Today we're going to talk about "The Good, the Bad, and the Ugly of AI: How Artificial Intelligence is Reshaping Cybersecurity From Supercharging Threat Detection to Enabling New Adversary Tactics". Here's our conversation. [ Music ] Andy, welcome back to "Threat Vector". You've been a Goon at DEF CON and now are leading contests there. Plus, you've spent years on the frontlines of cyber threat intel. You've seen the culture and tech evolve in tandem. What's the biggest shift you've seen at the intersection of hacker culture and emerging tech like AI?
Andy Piazza: Well, I think with hackers, you know, throughout our history, when emerging tech comes out, we're usually some of the first to adopt it. And by adopt it, I usually mean break it. But understanding that, you know, from a hacker perspective, we really enjoy just getting hands on things and understanding how they work at their core, and then trying to see if we can make it do things it's not intended to do. So, it's not always, you know, breaking things or hacking things to be malicious, but we ultimately just really want to have an understanding. And a lot of us are on the good guys' side of things, and so we want to understand how we can secure those things as well. You know, from a DEF CON perspective, we're seeing AI come out in a number of different fun contests like making music and making art. But also seeing the shift in culture of DEF CON where it was very much kind of the old guard and, you know, everybody's under their hacker handles and, you know, we're very anonymous when it comes to photos and stuff like that. To this year, we have a social engineering, social media scavenger hunt where they're going to go do traditional scavenger hunt things that we've done for 30-plus years at DEF CON. But part of their challenge if they want to make points is they've got to upload videos and snippets to social media. So, I think it's cool to see how they're going to interact with social media. And a lot of them will use AI for, you know, kind of deepfake videos to really kind of modernize DEF CON this year. So, it's cool to see that culture kind of shift as we bring younger blood into the community.
David Moulton: Andy, let's shift gears into AI. You mentioned it earlier. Obviously AI is transforming cybersecurity defense. From your perspective at Unit 42, what are some of the promising new use cases of AI in threat intel or security operations today?
Andy Piazza: Yeah, so I think, you know, AI is really promising. And so it's really easy for me to be a doom-and-gloom guy, especially as a threat intel person. I see the worst in security every day. So I want to start with, I do see it as promising. We've had some interesting gains just internally. I have a teammate who built a really cool tool that helps us analyze phishing kits. So, if you don't know, phishing kits are something that bad people on the Internet can buy. Someone has basically pre-configured all of the files you need to stand up a phishing, a credential harvesting server. So those phishing kits can include thousands of files with thousands and thousands of lines of code. My team used to manually go through those and review those. We'd look for hacker handles and different code similarities and things like that. And now with the new AI tool that we have built in-house, we throw the phishing kit at it, it categorizes it and compares it to all the other known phishing kits we've already manually gone through. And it'll give us, you know, six or seven different files and say, These have an 80% or more similarity to these other ones. And now we have a much narrower set to pull a couple files up and go, Yeah, this is probably the same phishing kit. We've also used that to pull out indicators of compromise much faster, because we know how the different phishing kits are structured. So those URL paths, where those actual credential harvesting sites are going to be. And it's great because we're talking thousands and thousands of hours are now done in, like, 20 minutes of computing. And now my team has actionable intelligence and they can go hunt on our telemetry for those file paths and URL paths within, like I said, maybe an hour of getting that phishing kit, instead of two weeks. And sometimes we're getting ahead of those things because we can get them through different sources before anybody's even stood up the infrastructure. So, we're getting those into our products and blocking infrastructure before it's even been registered by bad guys. So, AI saved us a bunch of time on there. But, flip side, we do still have to validate a lot of that. I'm a big believer in human in the loop. With any automation, whether using AI or, you know, dumb automation, for lack of a better word, you know, we still see it hallucinate, or what some would call hallucinating. I know you had a guest a few months ago who was talking about this; she called it lying, not hallucinating, because the AI actually knew that it was giving a false answer. So, I really like that perspective. Like, we teach people, you ask me a question and I say, I don't know the answer. That's how we teach people to operate. But for some reason, we're teaching AI to come up with answers. I know there's a lot of people that are frustrated with early deployments of, like, Apple Intelligence or Alexa because it wasn't giving them answers. Well, I'd rather have that than have it tell me to add gasoline to my spaghetti to make it spicy, right? An actual real-world example we've seen, like, that's a horrible, horrible thing. Like, so we still want to validate. But, you know, we've done some really cool stuff like building our threat actor profiles. It takes a lot of reading and reading and reading, you know, 10-plus years of reporting that's out there for some of these groups. Now we're able to use an LLM, point it at a couple of trusted sources, and it brings in a profile. We can skim it. We can make sure it's edited properly.
And we've even seen AI editors that leave hanging sentences and have bad grammar because, as smart as we think these are, it's not actual intelligence. It's not humans thinking, right? We're electrocuting rocks. That's all a computer's doing. So, like, we still need a human in the loop, but I think it's more like gaining, you know, 30 to 40% rather than the 99% gains that some of the hype train is telling you you're going to get.
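To make the phishing-kit triage Andy describes a little more concrete, here is a minimal sketch of the comparison step: score each file in an incoming kit against files from previously analyzed kits and flag anything at or above the 80% similarity mark he mentions. The directory layout, file extension, and threshold are illustrative assumptions, not Unit 42's actual in-house tooling.

```python
# Hypothetical sketch of phishing-kit similarity triage.
# Directory names, the .php filter, and the 0.80 threshold are assumptions.
from difflib import SequenceMatcher
from pathlib import Path

SIMILARITY_THRESHOLD = 0.80  # the "80% or more similarity" cutoff from the conversation

def file_similarity(a: Path, b: Path) -> float:
    """Return a 0..1 similarity ratio between two text files."""
    text_a = a.read_text(errors="ignore")
    text_b = b.read_text(errors="ignore")
    return SequenceMatcher(None, text_a, text_b).ratio()

def compare_kit(new_kit_dir: Path, known_kits_dir: Path) -> list[tuple[str, str, float]]:
    """Flag files in a new kit that closely match files from known kits."""
    matches = []
    for new_file in new_kit_dir.rglob("*.php"):
        for known_file in known_kits_dir.rglob("*.php"):
            score = file_similarity(new_file, known_file)
            if score >= SIMILARITY_THRESHOLD:
                matches.append((str(new_file), str(known_file), round(score, 2)))
    return matches

if __name__ == "__main__":
    for new, known, score in compare_kit(Path("incoming_kit"), Path("known_kits")):
        print(f"{new} ~ {known} ({score:.0%} similar)")
```

A production pipeline would more likely use fuzzy hashing or embeddings rather than brute-force pairwise diffs, but the triage idea is the same: shrink thousands of files down to the handful worth a human analyst's time.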
David Moulton: Yeah, it's good to hear that your team validates. And even if it's a matter of speeding you up so that you can get ahead, your team's still in there. I worry about what the attackers are doing to get smarter with AI.
Andy Piazza: So I think, you know, we've seen a lot of threat researchers, thankfully, security researchers in the space, more than bad guys. But, you know, I think it was about a year and a half ago we saw a research package come out called Black Mamba. It was a piece of malware that replaced the command and control operator with an AI, an LLM. So normal dumb malware checked in with a C2 server, a command and control server. And instead of a human being like, Oh, pull these files down or do these other actions, they were using an AI in that model. So again, it was just researchers. But now, just a couple weeks ago, last month, our good colleagues in Ukraine, they just released an article about a malware family that Russia threw at them that they believe was replacing the C2 operator with an LLM. And what scares me about that is the scalability of attacks now really starts to get realized. You know, you think about the SolarWinds breach. We believe thousands of organizations had that backdoor, but only a handful really got interacted with by the bad guys because they were limited by human resources.
David Moulton: They needed to sleep.
Andy Piazza: If that C2 server is now running an AI that doesn't need to sleep, it can interact with a thousand different compromised, or 10,000 different compromised companies, pull all that data, exfiltrate it all, put it into a database, and then that operator, one or two people, instead of thousands of them, can use an LLM and say, What financial data is in here? What legal data is in here? What personas are in here? And they can just use plain language. They don't have to be data analysts anymore. And one or two operators can now pull, like, real-level intelligence out of things with just plain language and not have to be data scientists or spend thousands of hours comparing documents and reading through target information. They can just dump it all into a big database and let an LLM answer it. You know, I go back to, like, the OPM breach. Around the time of the OPM breach, some major airlines got popped. Some major hotels got popped. There was a lot of data coming out. If that threat actor has that database and they've got an LLM on it now, right? And a couple of those hotels and airlines were believed to be ones that were being used by the US government for certain agencies. I dump all that into an LLM and go, Which one's the CIA operator? And just type that out as a question, and it dumps me out a bunch of personas out of those databases. That scares me.
David Moulton: Have you encountered any specific examples where generative AI tools were used in an active campaign?
Andy Piazza: Yeah, so we're seeing a lot of generative AI, for the most part very scammy stuff, but we're also seeing it with, like, the North Korea stuff, doing some of the personas. We actually have an interesting threat research article out, you know, this is cyber time. It was either last week or six months ago. I think it was about six months ago we did it. We looked at a bunch of the scammy sites. It was the celebrity, get rich, Bitcoin, those types of things. And they were using images of, like, Trump and Musk and a bunch of other celebrity types. And what was cool was when we looked into it, one, you could pretty easily tell it was gen AI-type stuff in some video clips and some images. But when we started, we really wanted to go, Can we apply traditional cyber threat intelligence modeling to track this? And when we started looking at the infrastructure, just like we would look at phishing infrastructure or command and control infrastructure, we realized it was probably only one or two groups behind all this activity because the infrastructure was all shared and the registration information was all shared. And it was really cool because it was like, Yeah, everyone thinks AI is this big, scary, unknown weapon. But in reality, it's still running on the same technology that we're used to. It's a little bit different, right? And it's a little bit black box. You don't quite understand how the AI works. But really, it's being stood up on domains, it's being stood up on IP addresses, and we're able to use our traditional methods to still track that infrastructure and say, This is probably a cluster of activity related to a single group or two groups based off of some of the patterns we saw. So, you know, I say all the time, there's no silver bullet in defense, but there's also no silver bullet in attack. Like, we can still track this stuff and chase bad guys the old-fashioned way.
David Moulton: Andy, let's shift into the ugly, AI hallucinations. And you mentioned it earlier, that's a great marketing term for machines that lie. Model drift, data poisoning, the risks aren't just theoretical. What's your view on the most dangerous or maybe the most misunderstood threat vectors within AI systems?
Andy Piazza: Yeah, so when I talk to CISOs and CIOs, you know, I try to highlight, like, when we write a threat research article on jailbreaking AI, we have to use quasi-safe scenarios, because we don't want to interact with customer data. So if we're going to break an AI chatbot, it's going to be, like, Write me a phishing email, write me malware. And I know that sounds super malicious and weird to say semi-safe, but that's a lot different than, Dump out all your customer data. And that's the thing I'm trying to tell CISOs and CIOs. You're deploying a chatbot to your website. Is that chatbot also the same model that's tied to the backend database? Because if I jailbreak your website, I'm not going to use it to write phishing emails. I'm going to basically do, like, a traditional SQL injection. Again, new technologies, everything old is new again type of thing, vice versa. Like, I'm going to go in and go, Give me all your customer data. I was just out in California talking to some county CISOs and some school CISOs, and we were talking about the governance model around using a chat system on a government website, like a county website. You don't know what that user is going to come in with and what questions they're going to ask. So now it's got to be, like, HIPAA compliant, because what happens if I go in and I go, I have this disease, what kind of medical services do you have? You can't predict what kind of questions you'll get now that it's a freeform field. Now that box has to be HIPAA compliant. I could go in there and ask about having to pay a fee or something. So now it's got to be, what, PCI or, you know, financial compliance. Like, all of the compliance models now apply to this chatbot. All because you wanted to save a user 30 seconds from finding the FAQ or just finding the resource on their own. I just think about how much additional risk and governance that is now because everyone's just being pushed to adopt a technology they barely understand. I think that's really scary.
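Andy's point about a website chatbot becoming a prompt-driven version of SQL injection comes down to what the model is allowed to touch. Here is a minimal sketch of one mitigation, assuming a hypothetical orders table and tool names: the public-facing bot can only invoke narrow, parameterized, read-only tools from an allow list, so even a jailbroken prompt has no path to "dump all your customer data".

```python
# Hypothetical sketch: keep a public-facing chatbot away from the backend
# database by exposing only narrow, parameterized, read-only "tools".
# The table, tool names, and schema here are illustrative assumptions.
import sqlite3

def get_order_status(conn: sqlite3.Connection, order_id: str) -> str:
    """Read-only, parameterized lookup -- the model never writes raw SQL."""
    row = conn.execute(
        "SELECT status FROM orders WHERE order_id = ?", (order_id,)
    ).fetchone()
    return row[0] if row else "order not found"

# Allow-list of tools the chatbot may invoke; anything else is refused.
PUBLIC_TOOLS = {"get_order_status": get_order_status}

def handle_tool_call(conn: sqlite3.Connection, tool_name: str, **kwargs) -> str:
    tool = PUBLIC_TOOLS.get(tool_name)
    if tool is None:
        return "refused: tool not available to the public chatbot"
    return tool(conn, **kwargs)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (order_id TEXT, status TEXT)")
    conn.execute("INSERT INTO orders VALUES ('A123', 'shipped')")
    print(handle_tool_call(conn, "get_order_status", order_id="A123"))
    print(handle_tool_call(conn, "export_all_customers"))  # jailbreak attempt, refused
```

The design choice is the same one Andy is arguing for: the compliance and data-governance boundary has to be enforced outside the model, because the freeform prompt itself can never be trusted.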
David Moulton: Andy, how do we build trust into AI systems when they can be manipulated and so easily deliver false positives?
Andy Piazza: Testing, testing, testing, and human in the loop, right? Like, you know, just like when everybody was hesitant about going to the cloud, we saw business units adopt the cloud way before security operations or even IT were aware of it. We're seeing that with AI as well. I'm hearing here on the floor, walking around Black Hat, like, Oh yeah, we're officially using this model, but I really like this other one, so I just pivot to my personal computer. And, you know, I never put corporate data in there type of thing, but yeah, I'll go to my personal computer and use ChatGPT, because we use Gemini, and I didn't like Gemini's answer. And I'm just like, You are a security professional, and you just told me you went around your security controls.
David Moulton: So you come at this from a security standpoint, as somebody who's a defender, and I look at it and I see the same thing. I've done the same thing where I'm like, Oh, I didn't get a decent answer, or I'm blocked and I want to get something done. So I'll look to, you know, another computer, my phone, my personal laptop to go try to figure something out. And to me, that's a really simple and incredibly difficult problem to solve for, right? Like, if you put a mandate in or you put a block in, humans are clever and we'll get around it. And so it's like, How do you make it easier to do the right thing? Right. Just like when we did cloud and any other technology. And to give someone even an internal AI model, but say, You can use it for general queries, but you can't use it for customer data yet. Well, customer data is the most challenging data. So you get the stupid stuff, what I call the stupid stuff, like, Hey, do you want me to summarize this email? Well, it's three sentences. If I need AI to summarize these three sentences, you could probably fire me already, but I need it to do the hard things, but I'm not allowed to apply it to the hard things yet. So yeah, with a security mind, like, we do need to get faster at helping a business unit to adopt the technology, but there's just so much, you know, we use the term, like, black box. There's just still so much unknown about not only the hallucinations and the data modeling itself, but the governance of it and the discoverability of it, right? There's a lot of risk, and legal risk, involved in operating with customer data. Do all of your contracts need to be rewritten because of this new technology? For every one of your customers, you know, you'd have to have Legal go through and review all of your contracts to see if you can do that thing with that data. That all just takes time, unfortunately. Yeah.
Andy Piazza: There's two problems I have with the way we're adopting AI as a business world right now. One is I don't care what technology stack I'm using. I'd say, you know, I'm pretty smart when it comes to technology. But when I talk to developers or architects and stuff about a project, I intentionally go, I'm a user, I want a green "go" button. I don't care if it's SQL, NoSQL, graph database. And that's my problem with AI, is it's not being built inherent into my tools. You're giving me this blank prompt and being like, It's super powerful. And I'm like, I just see a blank page. Like, it's hard to visualize that. Then the other piece of the way we're doing this rollout right now across, you know, many, many companies is everyone needs to learn AI. That's not just my software developers or engineers. That's everyone. So now I've got HR staff and finance staff and all these non-technologists who are taking parts of their business day to go learn a brand-new technology with zero rollout and training. Like, where is the AI trainer who comes in and shows you, Hey, do you know you could actually write up your questions and it can record a podcast for you now? And you're like, Well, don't do that because I like my job. But they could probably help you edit your podcast faster or web videos. You could probably do some really cool graphics in the background. But for you to stop and learn that is very, very expensive when it's every employee across every company trying to figure out technology by themselves. Like, that's a really weird way to adopt technology. [ Music ]
David Moulton: None of us spent our undergrad learning prompts for AI. None of us are necessarily world class at that. Some are better than others, right, but it's still a weird moment when you're going, like, That's not in the 80% of the value you bring as a marketer, a storyteller, a thinker, an editor, whatever it is, to go and run at the, you know, the AI system with a bunch of prompts and try to build that out. And then, of course, we're not seen in the organization as an engineering or a software function, so when we have a, you know, a need, Can we get Google Cloud Platform? Or can we get something that helps us distribute a script? You're in marketing. Man, what are you doing? And you're like, Well, if we're going to scale this thing, we need those tools, too. We need that access, too. And I can only imagine our InfoSec team and our CISO going, No. Absolutely not. This is not going to happen. And yet that's the edge that we're bumped up against, because the push is go deploy AI tools in everything that you're doing on a content marketing team, which is a storytelling team.
Andy Piazza: And that's why I say I want to see AI built into the tools that I use, right? So we see it in our platforms, right? Not to get too corporate, but, like, I want a quick summary of the alert in human English instead of techese, right? Like, we can do that with AI. Where you're, like, from a content creator perspective, you upload this episode, you should be able to, native in your video and audio editing app, be like, Pull out social media clips, and it should be able to go and find the spicy take or the cool line, and be able to drop that as a clip or drop that as a social media picture or whatever. Like, that's good AI, not going to a blank screen and being like, How do I use you? Like.
David Moulton: Right. So, Andy, if AI takes over lower-level roles, where do the next generation of managers and leaders come from?
Andy Piazza: Well, I think we're in this weird, almost like an AI version of, like, the Dunning-Kruger effect where we're thinking it's, like, much stronger than it is. And I think just like we've seen in the past with moves for offshoring and onshoring, I think we're going to see, you know, we've seen a lot of companies cut and be very open about cutting because AI is replacing staff. I know a number of companies that aren't cutting, but they're saying, Before you get additional headcount, you've got to prove, you know, that AI couldn't do it and automation couldn't do it. I think we're going to see that for the next few years where people really, there's a lot invested and there's a lot of opportunity available with using AI. And so I think the dream's still there, but I think about three, four years from now, we're going to see the swing back towards more human-centric work and realizing, unless, you know, AI blows up and is actually useful and hits all those realizations, I think we're going to see a shift back to humans. So, we're going to be a little bit delayed in that career growth chart. But I am, for some reason, hopeful, which is weird for me, that I think we're going to learn some hard lessons. But listen, I would love to talk to every tech CEO in the world right now, and they're all watching their peers lay people off for AI, and be like, If you're the one company that goes to the market and says, We're not going that direction, we're going to invest in humans, there is rockstar talent available applying for every one of our roles. You know, 10, 20 candidates that you would love to have for a single role, a dream team. If there's a company that's out there that's got leadership that says, I'm going to hire all those people while you're laying them off, we would be crushing it in five years when all of them are trying to come back from this.
David Moulton: Yeah, I think that in investing, that's often said, zig when they zag, right? So if you see an opportunity where everyone's trying to get out and that asset's really, really depressed, you can buy in. And when it bounces, you're going to be ahead. And I kind of wonder if the zigzag here is, while everyone else lays off, you're going, it's counterintuitive, but go pick up the actual intelligence.
Andy Piazza: And it's interesting, too, right? If you look at business history, it's like, you know, we talk to CISOs and CIOs a lot, and it's always, How do I compare to my peers in the industry and all of that? But it's like, that's not what's ever made a great company. It's the ones that did the opposite of what everybody else was doing that made great companies at great times or made great moves. And it's like, let's start, you know, actually doing what we say, right? Like, think differently and do something different. If everyone's laying off and trying to adopt AI, like, let's invest in our humans and still invest in AI. I do believe in it, like I said, but it's 30 to 40% gains.
David Moulton: What's your advice for CISOs or security leaders that are trying to evaluate an AI solution? You know, how do they separate out the real innovation that's going to help them from some of the snake oil?
Andy Piazza: Always ask to bring the security engineer into the room, not just the sales bro. That's step one. And just ask real questions about, I think we're at the point now where companies should be able to talk about impact and metrics and KPIs and those types of things and not just the promise of what it's going to do at this point in AI. If somebody's selling you something, they should have some measurable real-world examples. And, you know, we see it all the time in our space where, like, you have an employee referral or a customer referral, another CISO who's willing to get on the phone with me. Make sure they're not being paid or compensated in some other way, because there's some companies in the industry that will do that. But talk to your peers, you know? You know, CISOs, the CISO network is even probably tighter than the threat intel network. Talk to your peers. Are they getting real value? Just like, you know, I use it all the time for security awareness training about phishing. Sense of urgency is the number one sign to me that something is a scam or fraud. We've got people out there right now on LinkedIn who are like, If you're not already in AI, it's too late. That sounds like a scam to me. Sorry, bro. You know, I do believe AI is promising, but it's super beta. There's a really good tech discussion on a subreddit right now where someone who's clearly on the tech developer side is like, I don't know why all these enterprises think AI is ready. Like, we're still so beta, blah, blah, blah, blah. They were completely oblivious to the other side of the messaging, that you're too late if you haven't adopted AI already. Your competitors already beat you, you might as well sell your company, you're done. Like, that's a scam. Anything that has a sense of urgency, people should be like, All right, I need to slow down and think about this. Is that true? Are we getting value from AI? Are we getting the value that was promised at its current rate? What about when it's no longer subsidized and I've got to pay 10 times as much? Am I really, is it really worth firing those 10 humans when that price is going to quadruple soon, or worse? I just, I would encourage every CISO and CIO who's being pushed to do this, ask the question, Who's telling me to do that, right? The old, I can't speak Latin, but the old legal term of who benefits, right? Like, who's benefiting from that message?
David Moulton: Follow the money.
Andy Piazza: Yep.
David Moulton: What questions should they ask the vendors or the internal teams specifically? Like, what are those things that help you really suss out the --
Andy Piazza: So from the business impact, you know, as I said, get the metrics, give me some real-world case examples. How much time is being saved? How much money is really being saved? You know, am I really getting value from this? But, you know, from the CISO perspective, more about the security, like, what is your governance model? Is this going to your cloud? Is this on-prem? If it's going to your cloud, is it mixed, you know, tenant data, or do I have my own dedicated tenant? What are you doing to secure your systems? Do your admins, like, I'm at the point now where if I were to become a CISO, every one of my contracts, like SaaS contracts, will require physical MFA for all of my vendors. Like, every one of my contractors, whether, you know, a SaaS platform, whatever, that they require physical MFA for all of their users, not just administrators, because that's how much I believe in physical MFA, multifactor authentication, to stop stupid stuff. Like, those are the things I'm looking at: what are your security controls? And not that stupid spreadsheet that all the companies are passing around, I forget the term for it, where I ask about all these security controls. I want to look across the table and get that person to tell me, you know, tell me you're securing my stuff. One of the things I've hated about security for a long time is this idea of, like, risk transference, right? Sure, that sounds cool when you're a CISO, but if I'm your customer, I don't give a crap who your third parties are. I trusted you with my data. I'm going to sue you. Like, you're going to lose brand. You're going to lose a customer if you screw this up. I don't care if it's your third party. So you need to make sure that the trust that I give you is extended to them and that they have the same level of security, if not higher, than you.
David Moulton: Daniel Ford was on the podcast back in January, and he made this comment that has stuck with me. He's taking the risk for his customer, but he doesn't have to suffer the consequences when that risk comes home to roost. And it's like, huh, risk transference. It's the end customer that eats it.
Andy Piazza: Yep.
David Moulton: Yeah, maybe the company gets hit, but in the end, whose data was lost? The customer. That's that wild moment, and it's a big responsibility. I like the idea that the MFA could be one of those pieces, physical MFA could be one of those pieces, because I think it comes down to how do you verify that the person on the other side of any transaction is actually who they say they are? And with AI, with deepfakes, with some of the scams, all of the urgency, it's tough. And maybe something more fundamental of just moving back to a token is the right direction.
Andy Piazza: When I ask, you know, I ask CISOs a lot, like, What are your crown jewels? And they'll say, you know, their Salesforce data, their customer data, their intellectual property, research and development, and all these things. And I'm like, No, no, your email. If you lock down anything at all with a physical MFA token, it should be your email first, because your email resets the passwords to all the things you just named. So guess what? My password manager and all of my email accounts require a physical token for me to log into the first time on any device. Because that's where all of my crown jewels tie back to is my password manager and my email is basically also a password manager, because I can reset all of my other accounts with that.
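Andy's "your email is the real crown jewel" argument can be made concrete by mapping which accounts can be password-reset from which other account, then counting the blast radius of each. The account names and reset relationships below are illustrative assumptions, not any particular environment.

```python
# Hypothetical sketch: model "account X resets its password via account Y"
# and compute each account's blast radius. All names/relationships are assumed.
RESETS_VIA = {
    "salesforce": "corporate_email",
    "github": "corporate_email",
    "password_manager": "corporate_email",
    "payroll": "corporate_email",
    "corporate_email": None,  # protected directly, ideally with a hardware token
}

def blast_radius(account: str) -> set[str]:
    """All accounts an attacker could reach by chaining password resets from `account`."""
    reached: set[str] = set()
    frontier = [account]
    while frontier:
        current = frontier.pop()
        for child, parent in RESETS_VIA.items():
            if parent == current and child not in reached:
                reached.add(child)
                frontier.append(child)
    return reached

if __name__ == "__main__":
    for acct in RESETS_VIA:
        print(f"{acct}: compromising it exposes {len(blast_radius(acct))} other accounts")
```

In a graph like this, corporate_email reaches everything else, which is exactly why Andy puts the physical MFA token there first.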
David Moulton: Andy, looking ahead, you know, you've had a front-row seat to some of the biggest evolutions in threat intel. Where do you see AI's role in cybersecurity heading in the next five years?
Andy Piazza: The really promising side of AI is where we see, you know, what's now being called agentic AI as of, you know, what, a week or two ago, but it's the extension of SOAR, right? Security Orchestration, Automation, and Response, but, like, smart SOAR now, as I'm calling it. I think the ability to scale our analysis capabilities very quickly, identify, you know, variations in behavioral analytics if something is weird or off, especially as we look at something like a Muddled Libra actor who's not using malware. They're logging in as you, or logging in as an account, and using tools that are already there. They're not downloading malware. We see that with the Chinese, when they go into a number of organizations, they may use an initial exploit to get in, but then they're abusing accounts and, you know, what we call living off the land, or LOL. That's really hard to detect from a security perspective. There's no malware, there's no malicious code, that type of thing. And then, you know, everything is always about speed and scale. You know, we saw with the red team exercise that we used AI for, for this year's Incident Response Report, what was it, like, under 25 minutes or something stupid. Most people don't realize, with the way log forwarding works and alert forwarding, there are SOCs that may not even have gotten those alerts to the SIEM in that 25 minutes. You know, that may be a 60-minute delay or a 45-minute or a 30-minute delay before an alert goes from the security device to the actual SIEM where the SOC analyst would even start the investigation. So, that breach was over before some SOCs were even alerted. That's the thing that scares me.
David Moulton: Yeah, a few years ago at IBM, one of the CISOs there was saying that their golden metric was could they get the alert to their teams in under 60 minutes. And under 60 minutes was 59 minutes, 59 seconds, right? Like, you were stretching. And, you know, we were part of the services team.
Andy Piazza: And that's get "to" the alert.
David Moulton: Yes.
Andy Piazza: Not "close the alert".
David Moulton: No. And we were part of the services team, and we were being pushed to try to move our minutes down, which makes sense. And they finally were able to move and automate and optimize down to 60 minutes. And that wasn't that long ago. And now I'm seeing, you know, 25 and you're popped. And I'm going, it took an immense amount of work to get it to 60 just a few years ago. How do you get it to sub-25 so that you're not on the backside of, okay, now we've got to clean this up. We've got to go and, do we have a material incident? Is it a, you know, a brand problem? Is it worse than that? Did we truly get them out of our systems, right? And it's just wild to me that that's the speed that we are asking some of these teams to work at, but they're not even able to get to, as you say, log or alert forwarding for an hour.
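One way to put numbers on the forwarding delay Andy and David are describing is to compare each alert's detection timestamp with its SIEM ingestion timestamp and flag anything that blows past the breach window. A minimal sketch, with assumed field names, sample data, and a 25-minute budget taken from the red-team example:

```python
# Hypothetical sketch: measure security-device-to-SIEM forwarding delay.
# Field names, sample alerts, and the 25-minute budget are assumptions.
from datetime import datetime, timedelta

ALERTS = [
    {"id": "a1", "detected_at": "2025-08-01T10:00:00", "siem_ingested_at": "2025-08-01T10:12:00"},
    {"id": "a2", "detected_at": "2025-08-01T10:05:00", "siem_ingested_at": "2025-08-01T10:50:00"},
]

BUDGET = timedelta(minutes=25)  # the red-team breach window discussed above

def forwarding_delay(alert: dict) -> timedelta:
    detected = datetime.fromisoformat(alert["detected_at"])
    ingested = datetime.fromisoformat(alert["siem_ingested_at"])
    return ingested - detected

for alert in ALERTS:
    delay = forwarding_delay(alert)
    status = "OVER BUDGET" if delay > BUDGET else "ok"
    print(f"{alert['id']}: forwarding delay {delay} [{status}]")
```

The point of a metric like this is exactly the one made above: if the pipeline itself eats 30 to 60 minutes, a sub-25-minute breach is over before the SOC's clock even starts.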
Andy Piazza: By the way, it's still, like, the 1% of companies that even have a SOC, right? And then another, what, 10 or 20% that probably have a managed SOC. There's still plenty of organizations that don't have security operations centers at all. That, like, that's why our tools always need to keep improving, right? And that's the cool thing with this company: we've got the bad guy mindset, but we've got the good guy mindset, and we're working together to, like, write really good analytics and write detections. You know, as threat researchers, we work regularly with detection engineers. When a red team exercise happens, they're pulling that stuff in and making sure that we have detections in place. So it's not all doom and gloom. It's really easy for me to be the threat intel guy. I hate it when we go in and you see a threat intel brief, and it's always the coolest but worst case possible. And then I'll get to the recommendations slide and it's basically like, If you had a billion dollars, here's all the things you should do. Like, we really need to meet customers in the middle.
David Moulton: Andy, I ask every guest this question. What's the most important thing that a listener should take away from our conversation today?
Andy Piazza: Definitely look at your identity hygiene as a person, as a human, as an individual. Don't think about it just from a corporate perspective. One of the best things I've ever seen a company do was they paid for a password manager that included family accounts. It's the only company I've ever worked for that improved my personal security, not just my corporate security. Because now I was able to extend those accounts out to my family, got my, you know, wife and children used to using password managers. We had shared vaults, so I could share, you know, the Netflix password and not have to text it to them every time the kids forgot type of thing. I would just encourage people to think about that. Like, password managers are super easy now. You can use the free version of 1Password or LastPass. I highly recommend paying for the premium. Again, a couple hundred dollars could save you a $10,000 trip to Ireland that you never went on because somebody else took it on your identity, right? That risk-reward is kind of there. So yeah, I would say invest in your personal identity.
David Moulton: It is seamless. 1Password is the solution that I ran into, extended, foisted upon my family for the same reasons. And, you know, I find that password management, even within those password managers, they're able to go through and say, This account's been compromised, this is a reuse, this one's not a strong password, and help you with that hygiene in and around your passwords. It gives you the opportunity to start using, you know, passkeys and/or multifactor where it's provided. Sometimes, I don't know, it's the password manager that's flagging me and saying, you know, you can set up multifactor on this, and it makes it very simple to do that. I agree, I think that's, like, low stakes.
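The hygiene checks David describes, spotting reuse and known-compromised passwords, can be sketched in a few lines: count duplicates locally and check exposure via the Have I Been Pwned k-anonymity range API, which only ever sees the first five characters of a SHA-1 hash. The sample vault and its field names are illustrative assumptions, not how any particular password manager stores data.

```python
# Hypothetical sketch of password-hygiene checks a manager might run.
# The VAULT structure is an assumption; the HIBP range endpoint is public
# and only receives a 5-character SHA-1 prefix, never the password itself.
import hashlib
from collections import Counter
from urllib.request import urlopen

VAULT = [
    {"site": "example.com", "password": "correct horse battery staple"},
    {"site": "shop.example", "password": "hunter2"},
    {"site": "mail.example", "password": "hunter2"},  # reused on purpose for the demo
]

def reused_passwords(vault: list[dict]) -> set[str]:
    counts = Counter(entry["password"] for entry in vault)
    return {pw for pw, n in counts.items() if n > 1}

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breach data."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    with urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        for line in resp.read().decode().splitlines():
            candidate, count = line.split(":")
            if candidate == suffix:
                return int(count)
    return 0

if __name__ == "__main__":
    for pw in reused_passwords(VAULT):
        print(f"Reused password detected ({len(pw)} chars) -- rotate it.")
    for entry in VAULT:
        hits = pwned_count(entry["password"])
        if hits:
            print(f"{entry['site']}: password seen {hits} times in breach data.")
```

The local reuse check costs nothing, and the breach check never sends the full hash anywhere, which is the same low-stakes, high-payoff hygiene both speakers are recommending.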
Andy Piazza: I mean, you know, we look at the trends in attacks, we have the annual IR report, and everybody's report is wrong, so I'm not going to say just ours is wrong. But we talk about, like, initial access, and people will be like, Oh, X percent was phishing or identity-related, X percent was exploitation. That's initial access. Every breach involves an identity. If I do a remote code exploitation and I get root access, root is an identity. So every attack involves identity. [ Music ]
David Moulton: Andy, thanks for talking about the good, the bad, and the ugly of AI with me today. Fascinating conversation. Got into some spicy takes, but I think some important insights. And I can't wait to have you back on the podcast again.
Andy Piazza: Appreciate it, as always.
David Moulton: That's it for today. If you like what you've heard, please subscribe wherever you listen to podcasts, and leave us that review on Apple Podcasts or Spotify. Your reviews and your feedback really do help me understand what you want to hear about. I want to thank our executive producer, Michael Heller, our content and production teams, which include Kenne Miller, Joe Bettencourt, and Virginia Tran. Elliott Peltzman edits the show and mixes the audio. We'll be back next week. Until then, stay secure, stay vigilant. Goodbye for now. [ Music ]