The Resilience Factor Podcast
Cyber resilience is fast emerging as the driving force behind critical business continuity in our digital era. Faced with brownouts, blackouts, and the ‘when not if’ nature of cyberattacks, how do businesses harness this resilience for stronger cybersecurity and networking?
Listen in to The Resilience Factor as our hosts — renowned social engineer Jenny Radcliffe and Zscaler’s own Kate Baker — explore how organizations and employees can fail forward to adapt with confidence.
Through trend-based discussions with cybersecurity experts and pioneers, real-life case studies, and practical advice, The Resilience Factor offers audiences insights on the tools and strategies needed to build business and personal resilience.
S1 E10 Redefining trust and identity in the Agentic AI era with Sam Curry
In this episode, host Jenny Radcliffe welcomes Sam Curry, CISO at Zscaler. Sam reflects on the evolution of cybersecurity from a niche IT role to a vital business function, emphasizing the need for innovation as organizations adopt emerging technologies such as agentic AI and navigate complex interdependencies to stay resilient.
Cyber resilience is fast emerging as the driving force behind survival and success in a world of unprecedented digital transformation. Through trend-based discussions with cybersecurity experts and pioneers, real-life case studies, and practical advice, The Resilience Factor offers the tools and strategies needed to build business and personal resilience in all areas of cybersecurity and networking. Not only that, but you'll get to hear from a range of industry-leading professionals and experts at the very top of their game. Join us as we build a vital resource to drive organizational resilience within a fast-moving security landscape. I'm Jenny Radcliffe, the People Hacker. This week we are joined by Sam Curry, CISO for Zscaler. With over 30 years in cybersecurity, Sam began his career in signals and cryptanalysis and was the first employee at Signal 9 Solutions, where he led the first commercial Blowfish implementation and developed early VPN technology that was later acquired by McAfee. Sam has held key roles at RSA, EMC, and Arbor Networks, among others. He holds 17 active cybersecurity patents, has a master's in counterterrorism, and has taught at Harvard, as well as serving as a Fellow at the National Security Institute. Welcome, listeners. We hope you enjoy the conversation. And welcome to you, Sam.
SPEAKER_01:Thank you, Jenny. Good to be here.
SPEAKER_00:Sam, you've held lots of different roles over the course of your career, and I think it's fair to say that you've been at the coal face of the cyber industry's evolution. Can you tell us a bit more about how it's evolved and about your current role at Zscaler?
SPEAKER_01:Well, I think the biggest thing about how the industry has changed is how it has become more significant to business. When I began, there was a number we used to track, which is what percent of IT spend was cybersecurity. It started out at about 1%, 1.5%, and I watched it creep up to about 8% in the 2010 timeframe. It was neck and neck with storage at the time, and then it passed storage. At that point, I realized it wasn't commoditizing like other areas. Something's a commodity when it's available everywhere and the quality is the same everywhere. When that happens, you tend to get consolidation, you tend to get this long tail of features, and then it only differentiates on price. But that wasn't happening in cybersecurity. In fact, today we have just as many startups as we had 10, 20 years ago. So I looked around for a reason why that was the case, and I realized it's because we have an active, intelligent opponent that demands innovation. Now I think we're up to about 20% of IT spend going on cybersecurity. It's the largest single domain we spend money on as a part of IT, but it's really not an IT domain. It's a risk domain, a business domain. It's about business integrity and availability, of course, and the classic definition includes confidentiality. But that's really how it's changed: it's gone from being an IT niche function to being a core business function in a world where you can't separate the business from the IT. I think I answered that question for you. If I missed any part of it, though, tell me.
SPEAKER_00:No, I think that's a really good way of measuring it, a good way of looking at the evolution. And I guess the next part of the question was about your role at Zscaler now, because you've had such a long and interesting career. So where are you now, at the end of 2025 and the start of 2026?
SPEAKER_01:I'm the internal chief information security officer, of course, but I would characterize my career as always being close to the problem that our customers have. I've done product management, I've been a chief technology officer, and I've been a CISO to varying degrees over at least 20-ish years of a 30-year career. The real superpower is when you're close to the problem, so I spend quite a bit of time understanding what our customers' actual needs are. And the rate of evolution is increasing in two ways. One is what the adversary is doing. We can talk about all the buzzwords in the world, but the rate at which they're getting more effective at what they do is increasing. And the number of technology disruptors coming along is changing. So my job is to get ahead of that. It's not to make sure there's no risk; it's to make sure that we are at an acceptable level of risk and that we are ready to embrace new technology, and to help our customers meet their challenges and their acceptable levels of risk going forward. Which means that even though it's an internal job, I have to be outside, understanding what the landscape is, what the adversary is doing, and what our customers are actually doing.
SPEAKER_00:You know, one of the things I love about talking to you, Sam, and we've spoken on a panel and a few other things recently, is what you bring to this, because you don't just bring your technical knowledge, which is in massive depth (I didn't know about the patents and everything else that you have), but you also take this philosophical view on everything, and I love the way that you bring that kind of thinking in. Just for our listeners, Sam quotes Steven Pinker, and we talk about existential things when we have the time, which we don't today. So the question is that, over time, what's classed as being resilient as an organization will have changed. Can you paint us a bit of a picture of what resilience was at the beginning of your career versus what it means today? Have the definition and the nature of resilience for businesses changed over the course of your career? What did it look like at the start versus now?
SPEAKER_01:Well, very much so. I think at the beginning of my career, resilience was very much a local issue. It was about how do I make sure I have availability in my office, in my data center? How do I make sure I have power? How do I make sure I have safety? If you think about it, it was always intrinsic to the local space. You needed a connection out, yes, but you had to think about people coming to the office; that was what everyone did. And yes, we had VPNs, but that was an exception case. Now I think what we've got is a far more intricate tying-in of services. Rather than chains of trust, we have a mosaic of trust now, and frankly, interdependency. I'm reminded that in wartime, what you wind up with is massive redundancy in supply chains, such that if part of a supply chain goes down, things reroute; they go around the obstacles or the missing pieces. But in peacetime, you seek fewer, larger providers, and there's a tendency to have more single points of failure. And what we've had is long stretches of growth and fine-tuning. I once described this as an innovator's dilemma, another take on it, where the CFO always wants fewer suppliers to get lots of bargaining power, but then you wind up with only two or three vendors, and if one goes down, you're in trouble. In wartime, you wind up having many suppliers, because if one of them gets taken out, you don't have a problem. In fact, the internet was designed such that if any parts of it were taken out, say in the event of a nuclear attack, it could reroute itself. So in the early days, resilience had to do with islands being available and then interconnectedness, and now we've got this mosaic of interdependency, such that when one piece goes down, if a cloud provider goes down or has a lack of availability, the cascade effect or domino effect can be massive. What we've got now is APIs, with more and more functionality going through them, and we're placing more and more trust, in new ways we don't necessarily expect, in these highways of communication. There are new control planes emerging, and I think it's time we start to pay much more attention to them.
SPEAKER_00:Sure. And it's so interesting to me that you say that, because I worked in supply chain for a long time, and strategically that idea of dependency was always the argument: if we put all our eggs in one basket, we'll get cost reductions. And then the minute there was a problem... The time I'm remembering, I used to buy chips, and I remember the tsunami in Asia taking out two or three very niche vendors and what we had to do to route around it. And also during one of the Gulf Wars, I can't remember which one, the military just commissioned a load of our parts and we couldn't make any of our circuit boards. So yes, a very interesting way of looking at it. And when you say VPNs, I'm thinking, Sam is responsible for some VPNs, actually, everybody. Anyway, one of the things that you are the expert on, that we love to talk to you about, and really one of the main reasons we wanted you on the podcast, is your view on AI. Which, of course, these days, Sam, unless we mention AI every couple of minutes, we get thrown out of the club, right? So the question is this: the impact of generative AI on the security industry is well documented, with attackers becoming ever more sophisticated. However, agentic AI is being touted as the next big thing. You cannot go anywhere without hearing people talk about this; I've listened to at least three things just this morning, Sam, about agentic AI and people's views on it. So the question for you is: what sort of impact is agentic AI having on the security landscape?
SPEAKER_01:To answer that, we have to first say what agentic AI is, and I think everyone has some take on it, but it has to do with agency. In other words, previously you would give, say, a generative AI a prompt and it would give you an answer. Now instead you give an agent a task or a goal, and it has the ability to assess, make decisions, and take action. In other words, it can to some degree plot out a series of actions to accomplish something, so it can go out and act on your behalf. Now, the implication is that it's acting with your identity as context, and suddenly we've got a number of issues. One is that your behavior as a carbon-based unit now starts to look like a bot. So from a security perspective, we have to tease it apart and say, when is this Jenny versus when is this a bot acting on behalf of Jenny? We now have an authenticity problem, not just authentication and authorization, not just who are you. We have to authenticate the agent and then authorize the agent, but we also have to have chains of authenticity. So we need a statement of intent, we need a bill of materials, we need a bill of data, and a new form of trust model has to emerge. That's just the basics, because every product seems to be building this in now. And attackers are already using it on offense; we can see that. On defense, we're starting to use it. So let's break this down. If the bad guys were already getting better at their craft, that's now increasing even faster as a result of generative and agentic AI. Granted. So in defense we need to figure out how to use it to get better in SOC operations and in general security and InfoSec, and we have to make sure the business can adopt it at a faster rate for the core business. So this is a big thing. It's pressure on us to adopt it within the practice, and it's pressure on us to make sure the business can adopt it, with guidelines around it so they can adopt it safely and accelerate the business.
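To make that chain-of-authenticity idea a little more concrete, here is a minimal sketch, assuming a hypothetical manifest format: the agent is authenticated and authorized as usual, but each action it takes also carries a signed statement of intent plus a bill of materials and a bill of data. All names and fields below are illustrative assumptions, not a Zscaler or standards-defined scheme.

```python
# Hypothetical "chain of authenticity" sketch for an agent acting on a person's
# behalf. Field names and the signing scheme are assumptions for illustration.
import hmac, hashlib, json
from dataclasses import dataclass, field, asdict

@dataclass
class ActionManifest:
    principal: str              # the human the agent represents (e.g. "jenny")
    agent_id: str               # authenticated identity of the agent itself
    intent: str                 # statement of intent for this specific action
    bill_of_materials: list = field(default_factory=list)   # tools/models used
    bill_of_data: list = field(default_factory=list)         # data sources touched

def sign_manifest(manifest: ActionManifest, key: bytes) -> str:
    """Produce an integrity tag binding the manifest to the principal's key."""
    payload = json.dumps(asdict(manifest), sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_manifest(manifest: ActionManifest, tag: str, key: bytes) -> bool:
    """A relying service checks authenticity before honoring the agent's request."""
    return hmac.compare_digest(sign_manifest(manifest, key), tag)

# Usage: the agent signs its intent before calling a downstream service, and the
# service verifies the chain instead of trusting a bearer token alone.
key = b"demo-key-held-by-principal"
m = ActionManifest("jenny", "research-agent-01", "summarize Q3 risk report",
                   ["gpt-like-model"], ["s3://reports/q3.pdf"])
tag = sign_manifest(m, key)
assert verify_manifest(m, tag, key)
```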
SPEAKER_00:You know, Sam, I was thinking this while I've been pondering this topic: it feels like the security side of this, the defense, came after we realized that the bad actors were going to use it for offense. Am I right in thinking that?
SPEAKER_01:Technology doesn't have a flavor; it isn't naturally good or evil. Well, we would hope that's the case, just to say. Once a technology comes out, it winds up used both for offense and defense. The same is true even in the non-high-tech world, when a new weapon is created, in the classic warfare sense. But in this case, we knew it would happen. Now, the uses they would put it to weren't quite known, and it is asymmetric: the uses it gets put to in offense are different from defense. We could perhaps do better with red teaming and purple teaming, with more innovation there, asking how we emulate it. But let's think about it. The attacker is putting 100% of their effort into using these things creatively to break in, while we have many applications. The average company only has a fraction of its total IT expense going into cyber, and then we have to put some of that into things like security ops, some into basic InfoSec, some into GRC. It's not the same intensity; we are spreading out our defense. So a strategy is really required, risk centrism is important, and so is leveraging the community. So no, I'm not surprised it is being innovatively used on attack; we knew that would come. But I think over the long term, the community and the innovation are so much bigger in defense that medium to long term it's going to favor the defender. Short term, innovation like this always favors the attacker to some degree. This is one of the reasons, by the way, that a zero trust strategy is important. And I don't mean marketing mumbo jumbo. I mean that having less exposure, having less available to exploit, having less privilege generally, less function, and having data in its lowest information, highest entropy state is what is going to make sure you're more resilient over time.
SPEAKER_00:Can I just go back briefly to one thing that you said? It's something I would have said too, but I've been contemplating this since our more recent conversations on various panels. I would have said the same thing, that all technology is used for good or ill by humans, right? But this is different, because this has got decision-making capacities. So, for example, I was thinking about the level of agency an agentic AI might have. And then I'm thinking, well, what if it assesses the instructions? Because it has goals and sub-goals, as I understand it. And Sam, you have to forgive my lack of technical knowledge on this.
SPEAKER_01:Yeah, yeah, no, it's okay.
SPEAKER_00:So if it has sub-goals, can it not decide to disregard, say, the objective that it's set, or could it pick a different tool to use to come up with a decision? I think agentic AI is different to all the other tools in as much as it's independent as well; it's not quite the same as any other tool. It's sort of a question.
SPEAKER_01:It's a very good question. In fact, it's maybe one of the most important questions, because there's a continuum of autonomy. If you think about it, how autonomous is the AI? That raises questions of things like consciousness and reasoning. And whether or not we answer that question, we can answer it behaviorally. How many resources does it have? What is its ultimate motive? How is it incented? It's not just a jump; it doesn't go from one state to the next. So what is the goal? How broad is the goal? How much autonomy does it have in being able to reach it? So far, agentic is what we call the low end of the autonomy scale. The high end of the autonomy scale, when it actually has much more independence, is where we start to get into AGI. We aren't there yet, but it's a continuum. So what you're actually asking, I think, is at what point does it get to independence, and then what does that mean? And we haven't yet, as a society, even really begun to answer that. We haven't dealt with some of the ethical issues, we haven't dealt with the correct usage of these things, but there is research happening, especially around consciousness and reasoning, into what that means and how we should be treating this.
SPEAKER_00:But I was thinking about the trolley problem. Just for our listeners, in case you're not familiar, this is how I remember it: you're on a train and the track divides in two, and you can't stop the train, so you have to decide whether to divert it and deliberately kill one person to save the lives of, say, 25 people on the other track. You either go left or right: left kills one person, but it means you don't kill the 25 you would if you turned right. And what I was wondering, and I did go and ask an AI, was what would you do? What would an agentic AI do in that situation? The answer I got back was: refer to a human. An AI can never, ever justify killing anyone for any reason, even if, from a utilitarian perspective, that is the best answer.
SPEAKER_01:So you brought up utilitarianism, which is a very important point. First of all, we cannot say what the right or wrong answer is without stating up front what the ethical framework is. Utilitarianism is the most good for the most people. Or negative consequentialism: the least bad for the most people. You could also follow Kant: do your duty. Almost every framework has, at some point, an almost irrational thing we have to agree on. Yes, I agree that the most good for the most people is the thing, and you and I will say, okay, we kind of agree on what that is, but that's almost an irrational commitment. Some people may say it's the Ten Commandments, some people may say it's doing your duty. What does that actually mean? We're going to sort of fudge it, but everything will follow from that. Now, if we don't tell AI what framework to use and give it the means to apply it, then we have to tell it how to spot situations like that and where to go for reference. But you can't do that in the moment, say, when it's driving a car. What if it was driving the trolley? So dodging the question doesn't actually help us. Previously, with generative AI, you ask the question and it goes, wait a minute, this is an ethical question. If you're lucky and you don't social engineer it, whatever that means in a silicon world, okay, maybe that's fine. But in the agentic world, it might be driving the trolley, at which point, which way does it decide? So now you've got to have a discussion where you say, let's agree on frameworks and how they mix, and then you can start to put the actual problems or use cases in front of it and test how it behaves. We have to do that. Avoiding the ethics question, which in the United States I think goes back to some of the amendments to the Constitution and the right to religious freedom, has become a 'we don't talk about it' discussion: you have to have respect for religion and therefore for people's own morality, so don't ask questions. But that actually makes it more difficult. We have to invite the conversation and have a safe space to do so, because if we don't, there will be de facto ethical decisions made, and we'll sit there and go, well, why did it decide that?
SPEAKER_00:Yeah. Well, just one last point on this little diversion I've taken you on, and you're the best person to ask about this, I think: how do we address a liability framework when actions are taken by autonomous AI and not a person? There's a big case at the moment, and I'm not going to get political about it, but orders were given, and someone is potentially going to be blamed for giving an order that people disagree with. We have to address the liability framework. And I just want to say to listeners that this might sound like we're going off on a tangent, but as you've heard from Sam, these are such important questions, because we do have to think about the business case, but we also have to think about the rate and velocity at which this thing is developing. And I feel like we are responsible for training it and for dealing with the consequences if we get it wrong. So just on that liability framework, Sam, have you got any thoughts?
SPEAKER_01:Well, I think ultimately it depends on the legal system. Not everyone works on, say, a common law system as they do in the UK, Canada, Australia, and the US. There are other systems, so it depends. If it's Napoleonic, it's different. But let's deal with common law first. In a common law system, it depends on jurisprudence, on the case history, and that has yet to be established here. But somebody has to be legally accountable. You can't say, well, it's the AI's fault, punish it. That's not possible.
SPEAKER_00:But do you not think that might be a loophole, though?
SPEAKER_01:So, first of all, I'm not a lawyer. But I think there's no weaselly way out of that, and people may be getting themselves into liability situations that they haven't thought through. So they should set boundaries on what the AI can accomplish: it cannot do transactions over a certain amount, it cannot take actions that have more than a certain type of consequence. Otherwise, they're going to wind up in situations where they are liable. It has to be traceable to someone. We are not yet in a situation where we have fully autonomous AIs, and that's what I mean by saying we don't have the ethical framework. There's ethics, then there are legal frameworks, which are different, and then there are technical frameworks, which are a third thing. Those three are out of whack at the moment. So we've got to build the jurisprudence and we've got to actually go and try this, and I mean literally try this from a legal perspective, but the ethical discussion has to be happening now, and we should be having discussions like this in policy settings. This is not trivial. If we don't do this right, and if it's not done in a way that's informed by the technical systems (and by the way, we can build almost anything technically, so let's bring the technical people and the architects in), then we're going to wind up with laws that don't match the ethics or the technical systems, and that would be a travesty.
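As a rough illustration of the kind of hard boundary Sam suggests, here is a small, hypothetical policy check; the action names, limits, and escalation rules are invented for the example and not taken from any real system.

```python
# Hypothetical guardrail: hard limits on what an agent may do before anything
# liability-relevant happens. Anything outside the policy escalates to a human.
ACTION_LIMITS = {
    "transfer_funds": {"max_amount": 1_000, "requires_human": True},
    "send_email":     {"max_amount": None,  "requires_human": False},
}

def is_action_permitted(action: str, amount: float | None = None) -> tuple[bool, str]:
    """Return (allowed, reason); unknown or out-of-bounds actions are refused."""
    policy = ACTION_LIMITS.get(action)
    if policy is None:
        return False, "unknown action: escalate to a human owner"
    if policy["max_amount"] is not None and amount is not None and amount > policy["max_amount"]:
        return False, f"amount {amount} exceeds limit {policy['max_amount']}"
    if policy["requires_human"]:
        return False, "action requires explicit human approval"
    return True, "within policy"

print(is_action_permitted("transfer_funds", 5_000))   # blocked: over the limit
print(is_action_permitted("send_email"))              # allowed
```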
SPEAKER_00:So I agree. I think we do need to bring the technical people in, and I also think we need to bring the odd philosopher in as well, and not just in France. The next question for you is: how can businesses start to go about protecting themselves against these types of problems and attacks? You mentioned zero trust; I guess that would be a good place to start.
SPEAKER_01:Yeah, so zero trust is a good way of, I wouldn't quite say future-proofing, but giving yourself maximum resilience in the face of all the disruptions that are coming. And it's not just AI disruptions. We also have things like quantum computing coming, advancements in nanotechnology, synthetic biology, synthetic manufacturing, robotics; there's a ton of things that are each going to be potentially disruptive to society and to information technology. Generally speaking, having a low footprint, least privilege, least function, and keeping information in high entropy states is a very good idea, because you want the simplest infrastructure and architecture possible that can support complexity. Then you can embrace the new technology and rapidly assess its impact from a risk perspective. That's what you want to do. To boil that down: you may need to transform your architecture, not just incrementally keep adding things to it as we've been doing for the past 20 years. And I said it was ironic that I hold patents in VPN technology, because in fact that's the stuff that, at Zscaler, we're actually helping people toss out the window to simplify their architecture. But it's very important to start thinking in terms of how you embrace a modern architecture, so that you aren't going to be disrupted by those things as they come towards you.
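A tiny sketch of what "least exposure, least privilege, least function" can look like in practice, assuming a deliberately simplified default-deny grant table; the identities and applications below are made up for illustration, not drawn from any product.

```python
# Default-deny sketch: nothing is reachable unless an explicit, narrow grant
# exists for that identity, application, and action (no broad network trust).
GRANTS = {
    ("jenny@example.com", "crm", "read"),
    ("research-agent-01", "reports", "read"),
}

def authorize(identity: str, app: str, action: str) -> bool:
    """Allow access only if a specific grant exists for this exact triple."""
    return (identity, app, action) in GRANTS

assert authorize("jenny@example.com", "crm", "read")
assert not authorize("jenny@example.com", "crm", "delete")   # no implicit privilege
assert not authorize("research-agent-01", "crm", "read")     # agents get their own grants
```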
SPEAKER_00:This is so interesting. I could talk to you for hours about how you feel about that, but I guess the other part of this threat that we want your opinion on is internal AI agents within the business. There's an insider threat as well. How does that differ, and how do we deal with it?
SPEAKER_01:Well, when you let AI in, you have to think about what function it is performing. There's a misapprehension that AI is this giant brain, that you just feed everything into it and it takes over and becomes the superconsciousness. That's not how it works. It's actually a toolkit, and you should be applying it very rigorously: I'm using this tool for this function and that tool for that function. And you should follow the principle of keeping as much data in memory as possible, writing to disk as little as you can, using only the data that you need, when you need it, where you need it, and applying it to the actual function at that moment and no more. What you want is a minimalist approach to the data and the information. Otherwise, you wind up in a situation that's almost like data hoarding. So you really want to have a strategy about the data itself, and you want to be very careful that you're not simply collecting it and leaving it sitting around, because that is a massive liability.
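One way to picture that data-minimization habit is the short hypothetical sketch below: the function requests only the fields it needs, works on them in memory, and persists nothing. The record shape and field names are assumptions for the example.

```python
# Data-minimization sketch: pull only the fields a task needs, keep them in
# memory for the duration of that task, and never write a copy "just in case".
def summarize_customer_risk(fetch_record, customer_id: str) -> str:
    # Request only the fields this function actually needs, not the whole record.
    record = fetch_record(customer_id, fields=["risk_score", "region"])
    summary = f"{customer_id}: risk {record['risk_score']} in {record['region']}"
    # Nothing is written to disk or cached; the data goes out of scope here.
    return summary

# Usage with a stand-in data source:
fake_source = lambda cid, fields: {"risk_score": 0.7, "region": "EMEA"}
print(summarize_customer_risk(fake_source, "cust-42"))
```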
SPEAKER_00:And also, I guess your people would have to be trained on all aspects of this new member of the workforce, really. I think I heard someone describe it as an intelligent but malicious, brilliant intern within the company. I guess we have to train our people to understand the way we're going to use internal AI as well, and that has to be quite robust, I'd imagine, given no data's perfect and no system's perfect.
SPEAKER_01:We essentially have an alien that we're letting into our networks, right? And I don't mean to denigrate aliens or interns. We have these silicon entrants into the carbon workforce, and we don't fully trust the carbon units, right? We still do background checks on them, we still have to watch them, we still do behavioral analysis and look for insiders, we still talk about the insider problem. Now we've got a silicon unit that we've only really been working with for two or three years, and they have access to everything. So what we want to do is make sure we understand what they have access to. This boils down to risk again: it's a probability and an impact assessment. You say, what does this agent or this AI have access to? What's the worst-case scenario? Let's limit the boundary, let's limit the blast radius, and let's pay attention to it and avoid scope creep. That's a slightly different monitoring problem. And by the way, this is the exact same problem we had when we were talking about search engines or about cloud usage. It's not necessarily that those problems went away; it's that we found ways to deal with them from a business perspective. But now we've got more reasoning coming in with them, and they're coming within the boundary. So it is a little different.
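As a loose sketch of "limit the blast radius and watch for scope creep", the following hypothetical snippet records what an agent touches and flags anything outside its approved scope; the agent names and resources are invented for illustration.

```python
# Scope-creep watch: every access is logged, and anything outside the agent's
# pre-approved scope raises an alert for a human to review.
APPROVED_SCOPE = {"agent-hr-helper": {"hr-wiki", "benefits-portal"}}

def record_access(audit_log: list, agent: str, resource: str) -> None:
    allowed = resource in APPROVED_SCOPE.get(agent, set())
    audit_log.append({"agent": agent, "resource": resource, "in_scope": allowed})
    if not allowed:
        print(f"ALERT: {agent} reached outside its approved scope: {resource}")

log: list = []
record_access(log, "agent-hr-helper", "hr-wiki")          # expected behavior
record_access(log, "agent-hr-helper", "finance-ledger")   # scope creep, alert
```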
SPEAKER_00:So, okay, it's almost the end of the year, and you know what happens in cyber, Sam, at the end of every year.
SPEAKER_01:Everyone predicts stuff.
SPEAKER_00:We all look for predictions, and I was saying to the team at Zscaler, this is one of the things that those of us in the industry laugh at, because we all say people are gonna get hacked and AI is gonna get better. We all know that. But how do you think this particular space and trend, agentic AI, is going to evolve, or how will AI generally evolve, at an accelerated pace and velocity over the next year or so? Is it even possible to say anything other than it will just keep getting faster?
SPEAKER_01:I think that's true. It will get faster, and you will hear the same things we've heard before, but I think the emphasis is now very much on supply chain and very much on APIs. And with agentic, rather than APIs, which generally sit between more static items, it's going to be about communication with more ephemeral and many more agents that are going to be speaking with each other. So the emphasis: I think we've still got IAM, identity and access management, as a problem, and I think we're now going to be talking a lot more about non-human identity, meaning not aliens, but machine identity. And I think it's very important to realize this is going to throw off some of the behavioral analysis we've been doing over the years. Even in human identity and fraud detection, it's going to be very difficult for a little while to tell the difference. So this notion of authenticity, of being able to tell who's acting on behalf of whom, and new services around things like verifying that an agent truly represents someone, or resolving when agents are in conflict representing someone, is going to become a new space, and quite an interesting one as well.
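To give a feel for the non-human identity and verification services Sam predicts, here is a minimal, hypothetical sketch of a machine identity carrying a time-bounded delegation from a person; the field names and lookup table are assumptions for the example, not a real protocol.

```python
# Non-human identity sketch: an agent only "represents" someone while it holds
# a live, explicit delegation from that person.
from datetime import datetime, timedelta, timezone

DELEGATIONS = {
    "agent-travel-bot": {
        "delegated_by": "jenny@example.com",
        "expires": datetime.now(timezone.utc) + timedelta(hours=1),
    },
}

def represents(agent_id: str, principal: str) -> bool:
    """Check that the agent has an unexpired delegation from the claimed principal."""
    d = DELEGATIONS.get(agent_id)
    return bool(d) and d["delegated_by"] == principal and d["expires"] > datetime.now(timezone.utc)

print(represents("agent-travel-bot", "jenny@example.com"))        # True while the delegation is live
print(represents("agent-travel-bot", "someone-else@example.com")) # False
```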
SPEAKER_00:Yeah, that's what I see from the social engineering side: the manufacture of a completely synthetic identity specifically designed, if you like, as a spear phish directly into, particularly, the financial industry. A synthetic identity specifically engineered to perfectly match, say, the criteria for a loan or credit cards and things, which might not sound that huge, but when you look at the velocity and scale it can be produced at, that is massive. And it is social engineering, despite the lack of the carbon-based life form in there, because it's silicon-based, but it's at scale, which is so interesting. I also want to say that it can seem sometimes, when I talk about this, that I think it's all "they're going to kill us all, they're coming after everything". And it's people like yourself that rein that in a bit, Sam, and I appreciate that. You rein it in and go, no, no, we just need to be smarter and we just need to think about these things carefully. And I know that you're a fan of Steven Pinker, and I was thinking about this before I knew I was interviewing you. I can't remember the exact quote, but it's that the human brain is wired for survival: it evolved to deal with life-and-death matters, not the absolute truth. And I think it's so interesting that we can apply that to this artificial brain that we're going to have to work with and get the best out of over time.
SPEAKER_01:Very well said. Very well said, yeah.
SPEAKER_00:So our last question is one that seems a little bit silly, but we ask all our guests this question, and I think you've almost answered it, or certainly alluded to it. The question is: what would you consider to be your resilience superpower? You've already said that your superpower is being close to the problem, so is that your resilience superpower as well?
SPEAKER_01:I think it's that, yeah. I once had a boss who said something that has stuck with me, because as a chief information security officer, we all deal with incidents and we all deal with crises. He said to me, now we know who the enemy is. And he said, in a crisis, I get calm, because that's when the training muscle kicks in and that's when the innovation kicks in. I've been through so many crises now. I think my superpower, and I hope my team's superpower, is that when the balloon goes up, so to speak, we get very rational as opposed to emotional. And we've done the thinking ahead of time to know what matters to us and how we're going to act. I think that's the best thing, because we've done all the homework ahead of time.
SPEAKER_00:Absolutely. And that leaves you more space to make those decisions, I guess. Well, listen, Sam, it's such a pleasure to speak to you about this. You're such an expert on it, and I really looked forward to this chat; it's been brilliant. So thank you so much for being part of The Resilience Factor Podcast, Sam Curry. Thank you. That was fantastic, Jenny. So let's reflect on that conversation with Sam Curry. We heard about the impact agentic AI is having on the security landscape.
SPEAKER_01:Now we have an authenticity problem, not just authentication and authorization, not just who are you. We have to authenticate the agent and then authorize the agent, but we also have to have chains of authenticity. So we need a statement of intent, we need a bill of materials, we need a bill of data, and a new form of trust model has to emerge.
SPEAKER_00:Bad actors will always use emerging technology to attack, but Sam thinks that in the medium to long term, the odds favor us defenders.
SPEAKER_01:This is one of the reasons that a zero trust strategy is important. I mean that having less exposure, having less available to exploit, having less privilege generally, less function, having data in its lowest information, highest entropy state is the thing that is going to make sure that you're more resilient over time.
SPEAKER_00:And if organizations start to use agentic AI, who is ultimately responsible for it if issues occur?
SPEAKER_01:So if you think about it, how autonomous is the AI? And this raises questions of things like consciousness and reasoning. Somebody has to be legally accountable. You can't say, well, it's the AI's fault, punish it. There's no weaselly way out of that, and people may be getting themselves into liability situations that they haven't thought through. So they should set boundaries on what the AI can accomplish. Otherwise, they're going to wind up in situations where they are liable.
SPEAKER_00:Finally, Sam provided his prediction for the future of Agentic AI in 2026.
SPEAKER_01:The emphasis is now very much on supply chain and very much on APIs. Agentic is going to be about communication with more ephemeral and many more agents that are going to be speaking with each other. Even in human identity and fraud detection, it's going to be very difficult to tell the difference.
SPEAKER_00:The Resilience Factor Podcast is brought to you by Zscaler, a leading cloud-based cybersecurity platform, revolutionizing the way businesses protect themselves from cyber threats.