


Live from Black Hat 2025
This episode dives into the frontlines of cybersecurity innovation with Bennett Moe of CyberWire and Jim Reavis of the Cloud Security Alliance. From transforming data into actionable insights to rethinking cloud security in the age of AI, they share why fundamentals and fresh strategies are critical for today’s leaders.
Transcript
Raghu N 00:07
Welcome to The Segment, live from Black Hat 2025! I'm your host, Raghu Nandakumara, and today's episode is a little different. We're on the ground at one of the world's biggest cybersecurity conferences, where the insights fly, innovation is relentless, and everyone's got an opinion about what's next. In this special Black Hat edition, we're bringing you two insightful conversations from the show floor, each offering a unique perspective on where cybersecurity is heading.
First up, I sat down with Bennett Moe, VP of Strategic Partnerships at CyberWire and a cartographer by training. Yes, you heard that right. A cartographer. As it turns out, his map-making background offers a surprising and powerful lens for understanding cyber risk. As Bennett explains, just like maps, cybersecurity requires filtering massive amounts of data into actionable insights. It's not just about what you see, it's about how you prioritize what matters most. We talked about everything from the parallels between cartography and security visibility to how vendors are finally being more thoughtful about how they talk about AI. Here's our conversation.
Super excited to be joined by Bennett Moe, VP of Strategic Partnerships at CyberWire, and, very interestingly, a cartographer by training. With visibility, or the lack of it, being a significant challenge for security organizations worldwide, tell us: what can the security business learn from cartography?
Bennett Moe 01:50
Yeah, I think about this a fair bit, because I spent a lot of time in geospatial intelligence, autonomous navigation, traditional mapping technologies, and so on. When you think about maps and cartography, they're, by their very nature, an abstraction of reality. You have to take a lot of data and pick out the things that are important for the information you're trying to convey, the users who are using it, and what they're trying to do with it. You're doing a lot of the same kinds of things in security: you're ingesting a lot of data, you have to prioritize that data, and you have to put it in a form that is manageable and consumable by people who may need to make decisions in high-pressure environments, and do that very quickly. So there are really interesting parallels in how to manage data. That's something I learned along the way in cartography: the difference between the data and the decisions you need to make from that data, and how to translate one into the other as actionable insights.
Raghu N 02:46
Absolutely. And I think the other really interesting thing about cartography, which is absolutely applicable in cyberspace, is that when we think about maps, it's not just a single map. You've got, let's say, your geophysical map, your political map, your climate map, and so on. And that same approach to mapping applies to cyberspace, because you've probably got a map that represents your network relationships, your identity relationships, and so on. Right?
Bennett Moe 03:13
Absolutely, yeah. You're looking at many layers of data, and each of those layers adds to the ones below and above it. Distilling those layers and picking out the ones that are most critical for how you make decisions, and how they fit into your operational structure and the things you're trying to accomplish, is critical to the mission.
Raghu N 03:31
Absolutely, right? And being able to understand those maps and navigate them, and, I guess, navigate between layers. Because ultimately, when we look at how an attack develops, an attacker is navigating between layers in the map they have in their head to get to their trophy.
Bennett Moe 03:48
Yes, exactly. And understanding how they're thinking about that data, how they're translating those maps, and how they're looking at navigating around is important.
Raghu N 03:56
Absolutely. So we're at Black Hat this year, and I'm sure you've been to many, many Black Hats over the years. How have you seen things evolve with each visit? And what are you excited about this year?
Bennett Moe 04:09
I'm always excited to see what new companies are coming in, because the faces change and the brands change when you come into a show like this. So I like to explore Startup City and see what new entries are there. What are the new technologies? What's the buzz? Even going from one show to the next, what are the things that are buzzing around? AI is one of the hot topics, and I don't see that changing in the near term. How we deal with AI, and seeing how people are moving generative and agentic AI into really actionable applications, getting past the buzz of AI and seeing what they're really doing with it, I think, is really fascinating.
Raghu N 04:44
Let's talk about that for a bit. You work for one of the most reputable media brands in our industry. How do you feel vendors are talking about their use of AI? Do you feel there is enough depth in how they explain the way they're leveraging AI to improve their products? Or do you think it's still very surface-level?
Bennett Moe 05:07
I think we're getting there. There's still a lot of marketing speak out there around AI. But just in the last few months, even between last Black Hat and this Black Hat, I'm seeing that the way they're talking about AI, how they're looking at the applications and showing people how it's making a difference for their operations, is getting much more into the actionable insights: what customers can leverage and how they can improve their operations with AI, rather than the big picture of what it could be. And less calling things AI when they're really talking about machine learning applications. So we're finding the message, and we're really getting to some actionable applications of AI. And that's exciting.
Raghu N 05:46
That's exciting. You don't need to name the vendor, unless it's Illumio, in which case, feel free to. What is the most exciting use of AI that you have come across in cyber?
Bennett Moe 05:55
In my world, I see things like how folks are distilling information and using it to process larger volumes of data. Things that would usually take humans much longer, being able to speed up resolution or decision-making, I find those applications very interesting. I'm still exploring some of the agentic things that folks are using, so those are some of the things I want to explore while we're here.
Raghu N 06:21
Last question on the AI topic. Do you think we are solving new problems using AI when it comes to cyber, or is AI, at the moment, very much giving us better ways of solving problems that we've had for many years?
Bennett Moe 06:37
Well, a little bit of both. I think with AI, it's moving so fast that we're creating new applications and solving existing problems. With AI, we're also seeing new problems that come about because of the development of AI itself, and we're solving those too. But we need to move fast, because we don't really know where AI is headed. As AI develops in the larger ecosystem, how do we respond to that from a security standpoint? Because our adversaries are definitely looking at how they can leverage AI for their uses, and we need to use it to stay a step ahead rather than be reactive. To a certain extent we can't help but be reactive, because we can't foresee all the use cases, but we're going to have to move fast to keep up with how it's being developed.
Raghu N
Many commentators, professionals, et cetera, like to say that we can only defeat the AI-powered attacker by giving the defender their own AI superpowers. Where do you stand on that?
Bennett Moe
You've got to start with the fundamentals. I think that's true to a certain extent, but how we use AI comes down to the people around the AI. A lot of what we need to do is make sure we have the right people doing the right jobs, in the right roles. And that comes down to how we hire people, how we train people, how we identify the gaps in our own workforces. We can give them the tools and power them with AI, but they're not going to be able to fulfill the mission if we don't have the right people in the right seats doing the right jobs.
Raghu N 08:08
Absolutely. And you mentioned doing the fundamentals, and you very much focused on the people and processes. But I think the same thing applies to doing the security fundamentals, basic security hygiene, whatever the preferred term is, doing those exceptionally well. And I think that's even more important now as AI use proliferates, because those gaps in controls, whether it's a misconfiguration or a vulnerability, AI-powered attackers are going to be able to exploit them significantly more quickly, and at a much larger scale, than before.
Bennett Moe 08:41
Yeah, absolutely. And we need to build our security platforms, our security plans, our models from the basics that are already established. We know how to build a security program; those things don't change. The theory of building a security program stays the same, and we want to layer on top of it the things that will enable us to defend ourselves against these new threats and identify them.
Raghu N 09:04
Fantastic. So again, given your role, working in a well-respected cyber industry media organization, when you look at how the way we report on cyber has changed over the years, what have you seen as the evolution of cyber journalism? I don't know if that's a term, but I'll use it.
Bennett Moe 09:25
Sure, I'll go with that. One of the things we do is specialize in distilling knowledge for professionals who need to make decisions every day. Journalism has become easier because publishing platforms have become more ubiquitous, so there are more sources of information for people to consume. The challenge is making sure you're getting trusted sources of information. There's more AI use coming into journalism. We don't use AI in our journalism; ours is all human-curated content, because we want to maintain the trust we've built with our audience. I think that's important to this community. Trust is a big thing in the cybersecurity community, and we want to make sure people are getting the right information. I think there's a use case for AI in journalism as well, as long as there's curation around it to make sure the information is correct, properly annotated, and authenticated, and that we're giving people the right information, and more importantly, the information they really need to know. There's so much information out there that no one person can consume it all. We want to give people the things that are really important to them, that they can act on, that give them the power to make better decisions in their jobs on a day-to-day basis, to protect their companies, their communities, and the world at large.
Raghu N 10:39
And, of course, the general public and the general news media are now far more aware of cyber threats, and we see more and more cyberattacks being reported in the mainstream news. Is that changing the way you approach the content you create for the professionals you serve?
Bennett Moe 11:01
Not in a large sense. We've developed a way of distilling the information we report and deciding what we want to cover, and we've adapted to what people want to hear. Our audience, in particular, wants to be educated about what's going on and get the information they need to know on a daily basis, so we focus on that. It's not very often that we have to go back and reference the mainstream media when they've said something incorrectly. It happens every once in a while, but the reporting overall has gotten better. There's been a lot of turnover in media organizations. A lot of them are getting smaller, they're cutting their newsrooms, and in many cases they're cutting the cyber reporting in those newsrooms. So it's more important than ever that we're providing that service.
Raghu N 11:47
So in terms of the particular audience that you serve, especially security leadership, CSOs, CISOs, what are the key pain points they constantly keep going back to, that they need to be better informed about?
Bennett Moe 12:02
Getting information to them in a way they can consume, in terms that are tangible for their business. So putting it in terms of how they think about protecting their business, and how cyber policy and cybersecurity impact their bottom line. They're thinking about the whole of the business, especially at the CEO and CTO level. Cybersecurity is just one piece of what they have to consider, so how does it fit into the larger organization? They're looking at the risks and impacts of an incident, and, in the case of a public company, making sure they're compliant, that they're reporting the right things, and that they can tell their shareholders they have everything in place to protect the company so it's not going to affect the bottom line and the stock, right?
Raghu N 12:48
Yeah, and in fact I was discussing exactly that with one of our sales leaders earlier today: how we can help customer security leaders connect the security challenges with the business challenges. That seems to be a continuous theme. I know we're almost at time, so before we wrap, tell us: as you leave Black Hat and take your flight back home, if you could get one thing out of it that would make you really excited, what would it be?
Bennett Moe 13:19
To see some new application or technology that I haven't yet seen, something that is going to make a big impact, that's going to make me say, wow, okay, they got it. And more generally, the level of discussion around the new technologies that are coming online, seeing how that's evolving, and seeing people being much more application-focused in how they talk about the impact of what they're doing.
Raghu N 13:47
Well, Bennett, hopefully as you take that flight home, the memory in your mind is Illumio Insights. That would be amazing. It's been amazing to interview someone so storied in cyber journalism and cyber media. Thank you so much for your time here.
Bennett Moe 14:01
Happy to be here. Thanks for having me.
Raghu N 14:03
Next, I had the honor of speaking with a true pioneer in our field, Jim Reavis, CEO of the Cloud Security Alliance (CSA) and a member of the ISSA (Information Systems Security Association) Hall of Fame. Jim's been thinking about cloud security longer than most people knew what the cloud was. He helped shape the very frameworks we still rely on today. But he's not just reflecting on the past. He's sounding the alarm on the future, from systemic cloud risks to the seismic shift brought on by generative AI. Jim warns that our old playbooks won't be enough. We covered a lot in our time together: what it means to build a 100-year organization, how security leaders must become their company's smartest voices on AI, and why now is the time to reinvent, not retreat. Let's dive in. I am so honored to have a Hall of Famer here: Jim Reavis, CEO of the Cloud Security Alliance and a member of the ISSA Hall of Fame. Jim, welcome to The Segment.
Jim Reavis 15:09
Oh, it's a pleasure. Thanks for having me, and thanks to Illumio for being a great supporter and sponsor of the Cloud Security Alliance.
Raghu N 15:15
It's a privilege to be a sponsor of the CSA and the great work that it does. But tell me, what does it take to become a Hall of Famer?
Jim Reavis 15:22
Well, you know, you've got to get up every morning and you have to train. I think it's sort of the kind of thing you get when people feel like you're getting towards the end of your career and might pass it on. I started in aspects of what was called information security back in 1988, and I think you put in your dues, you go through a lot of different things, and along the way maybe you have some insight that's helpful to others. What we call information security and cybersecurity has always been a sort of community of helpers, and whenever I look at someone who's achieved a lot, you know that those first responders and that community have been behind them. I certainly feel like I've benefited from that myself.
Raghu N 16:05
That's incredible. So if there's one experience from your time in the industry that you look back on and think, my god, that was the thing that made all of this worthwhile, what would it be?
Jim Reavis 16:16
That's certainly difficult to say, but I'd throw out a couple of things. The first time installing a firewall to access the internet and just seeing all the different people and systems and servers all around the world, it just made the world seem so much smaller. And spending time with someone like Dan Geer, such a visionary, who put things in the perspective of species and evolution and how information systems and technologies were changing. So yeah, some of those early days where everything seemed amazing. And we might be coming into another era like that, where it feels like a miracle every day. I sometimes say, when I talk to people who are my age, that it feels like everything we've accomplished in cybersecurity, everything we've been doing, is practice for what we have to accomplish in the next two or three years.
Raghu N 17:21
Absolutely. I still don't understand, though: why is it that for every single cybersecurity professional, their first experience with cyber is configuring a firewall?
Jim Reavis 17:31
Same with me as well. Yeah, that was it. It was exciting. We were getting on the internet, and some early adopters started using it and said, “Oh my god, I'm on this internet.” It's connected to everything, and then came the castles; we had to start building the first castle.
Raghu N 17:47
Absolutely. And still here, 20, 30, 40 years on, we're still securing ourselves from the internet.
Jim Reavis 17:54
Yep, yeah. It's certainly very complex, and this is a huge industry now, but we've got to go back and make sure our first principles are really sound. We need to think about: who are the adversaries? What are the risks that we have? What are the threats? How do we think about protecting our crown jewels and managing breaches? Because you get a lot of nomenclature in the industry, a lot of complexity, and I think we've got to get back to first principles and what we're really here trying to accomplish.
Raghu N 18:27
Yeah, absolutely. You talked about the Cloud Security Alliance, the CSA, founded in 2009, probably ahead of its time, because adoption of cloud was so much in its infancy, and the concept of cloud security was probably still a couple of years behind. When you look forward from there, or look back now, how have you seen the evolution of cloud security from those early days to today?
Jim Reavis 18:51
Yeah. When you see a big trend coming, you can be very certain about it; the difficulty is knowing when it's going to happen. Back in those days, you could see that compute power was essentially becoming a utility, instead of servers you had to go set up yourself, and you could see that was going to be significant, but it's hard to know how long that sea change is going to take. What we knew was that we needed to be proactive, and you need to think about solving tomorrow's problems today if you want to be a successful, thought-leading organization. So we knew we were going to need defensible best practices for risk management, for controls, for compliance, and all these sorts of things, and wouldn't it be a good idea to start working on that before it was so pervasively adopted? I don't think it was adopted as quickly as I thought, and some of the early large enterprises, like a Netflix or a Capital One, actually started creating a lot of the cloud security tools that the cloud security industry ended up building a few years later. So we moved from, let's use the practices and the technologies we have, to the early adopters coming along and creating cloud-native security, which is really the daunting part of cybersecurity technology. So it took a little bit longer. Now we're at a place where I think it's mature, fairly reliable, and it has fairly broad coverage, but the malicious actors are obviously just as agile, and it's impressive how agile they are. And as AI emerges, cloud is sort of the foundation for it. I like to say that cloud and AI got together and kind of had a baby and made ChatGPT, and that was the moment where artificial intelligence, which we've been working on for 50 years in different forms, became pervasively available to the community. It's a lot like that early cloud journey, where compute became a utility and any single person had access to the biggest data centers in the world. Now any person has access to any amount of intelligence; intelligence is becoming a utility. This is why I say this is what we've been training for.
Raghu N 21:17
Absolutely. ChatGPT being the child of AI and cloud, I love that analogy. Going back: when you founded the CSA, you said, right, we need to establish the best practices, et cetera. What did you see? Because that must have been triggered by the assumption that this was going to be a fundamental change in how we think about security, one that would require cloud security to become a discipline of its own. What did you see back then, looking into the future, that told you we were going to need something different, because this was a different problem to solve?
Jim Reavis 21:56
Yeah, when you move compute to a utility, you're essentially segregating the owners of the information from the physical storage of it, and that bifurcation is just hugely significant. I won't say that we've got it completely right even now, but it was very obvious that you had to think a lot more virtually about controls. And then you had some of these very early adopters saying, hey, I need to do some drug discovery analysis; normally that's going to take six months and millions of dollars of compute, and with, say, AWS, I was able to do it in a couple of hours for under $100. So there was that sense of urgency. But the key thing was that segregation, which said, yeah, we have to move from a lot of these physical, hard-wired parts of security, the firewall and everything, to controls that operate in your virtual environments.
Raghu N 22:56
Yeah, I like to simplify it by saying the fundamental difference between cloud security and data center security is that in the cloud, any single resource is one misconfiguration away from being accessible on the internet, which is why your focus on it and the way you think about it have to be so different from how you think about data center security.
Jim Reavis 23:17
Yeah, and very recently the CISO of JPMorgan Chase talked about how they're really worried about the systemic risk of large cloud providers. If one goes down, it doesn't just impact us, the way a failure in our own data center might; it impacts the entire world. So we've got a lot of work to do re-establishing a lot of that redundancy and the very granular types of permissions we can put in place. It's a race we need to run. When I started in this, it was just sort of an odd area. Then it became national security; cybersecurity became maybe the most important part of the nation's security posture. So the stakes for our efforts keep going up.
Raghu N 24:02
Yeah, absolutely. So again, sort of a natural evolution. You spoke about that coming together of cloud and AI giving birth to ChatGPT et al. There's so much talk about AI, really about securing AI and securing against AI-powered attackers. How would you split these two, which I think are different problems, and your approaches to them?
Jim Reavis 24:30
Well, you have to be aware of all the different implications of AI, obviously. The framing that's really helpful is those first principles, where you think about: what is the data I need to protect? So there's a lot of thinking about how we protect the information. Then we're obviously thinking a lot about egress filtering, about everyone's usage of AI, maybe our own models, those sorts of things. So it's very much a data governance exercise: what are the risks? Because not only can people attack you by manipulating AI systems, with prompt injections and data poisoning that corrupt them, but they can also just use AI as designed to create deepfakes, and that undermines the certainty you used to have in communications. So you've also got to think a lot about those threats and have different sorts of defenses there. At the same time, cybersecurity is going through this moment of, let's go reinvent our systems. How can we make the SOC more efficient? How can we consume information from different, complex sources? You see a threat report or a vulnerability report and you think you're okay, but then you might get information from Zoom call notes or a Slack message: oh, actually, this system I thought was innocuous is going to hold all our crown jewels next week, based on some conversations. So it's really important to see these different areas pretty holistically. Most of what we already have in security applies, and AI is able to help, but it's about having that governance, threats, risks, and data protection mindset, the big picture of your organization, and doing a lot of communication and innovation, so we can understand how to improve our organization and our security posture, and also really understand how the malicious actors are going to be able to use this.
Raghu N 26:23
So let me ask: if you could walk away from Black Hat and say, “Hey, we were able to solve this key problem,” what would that problem be?
Jim Reavis 26:36
The key problem to solve, I think, is really the educational and awareness side. It's really interesting: when you talk about the models and emergent features, the very smart scientists and researchers who create them are often very surprised by how they operate and the things they do. So I think it's incumbent on cybersecurity people, and this is my evangelical cause, to be the smartest people about AI in our organizations. We have to understand it even more than the people who are building AI applications in the organization, because a lot of the time they just need to understand the API interface, the usage charges, and how all of that works. We need to understand the models top to bottom, in quite a bit of detail. We need to understand all the potential applications, and how you test the software has got to be different; you can't create standard test cases, you've got to think more about simulations, things like that. So the big thing is really that we've got to learn and share. And that's what Black Hat, even as chaotic as it is, and as it gets more expensive and everything else, is great for: we've got to network and communicate.
Raghu N 27:56
And just on that point about communication, do you feel that security vendors could do a much better job of talking about how they use AI within their products and platforms? Do you think that's an area for significant improvement?
Jim Reavis 28:13
Yeah, I think the world more and more treats keeping those best practices secret as a mistake, and we're actually able to get a lot of that information out. That's kind of what we do: we have a large practitioner community and different working groups, things like that. So you are seeing sharing. Certainly, if you're building some super secret, patentable thing, there's still a lot of holding back, but I think more and more people have that understanding of shared practices. We can always do more, though. The problem I find is that people spent time with ChatGPT, or with the models, maybe six months ago, and they're reasoning from that today, when every model, with reinforcement learning and things like that, operates differently. So it's important to have continuous learning, continuous sharing, and continuous discovery too. When we discover a challenge, let's communicate it, because the bad guys will use some of these discoveries. Being open about what we're going to do, that's the way to do it.
Raghu N 29:22
Absolutely. I mean, security through obscurity is no security at all, right? So, as we wrap up: the CSA has been going for 16, 17 years now, and it's probably still very much at the start of its journey, given the way technology is developing, and its children, cloud and AI. What would you like the legacy of the CSA to be?
Jim Reavis 29:44
Well, as a non-profit organization, we think about designing this not for a small acquisition, which isn't going to happen, but for 100 years. What we talk about in terms of legacy is being a navigator for the community, whatever it looks like, however many robots are operating, to actually look at the threats. It's always going to be an evolving landscape with evolving best practices. But for my part, I'd like it to be remembered as a really open community: the solutions don't come top-down, they come from the community. Anybody in the world could have a great idea about a certain way to navigate cybersecurity, or make some course correction that's going to help all of us. So hopefully that's what it will be seen as: a great community of hard workers who just try to solve these problems and share very openly.
Raghu N 30:41
Amazing. Well, Jim, thank you so much for what you and the CSA do for our community, for helping us secure the cloud and AI, and I'm just looking forward to so much more coming out of the CSA over the coming months, years, decades, 100 years, as you said. Thank you so much.
From maps to the cloud, from fundamentals to the frontiers of AI, today's conversations are a reminder that cybersecurity is a human challenge just as much as it is a technical one. Whether you're helping your organization chart a clearer path forward or sounding the alarm on what's around the corner, visibility, curiosity, and resilience remain our best guides. Thank you for tuning in to this special episode of The Segment. We'll see you next time!