A record 44,000+ cybersecurity professionals attended this year’s RSAC Conference, where AI was a key topic of discussion.
The debate around AI is extensive, with multiple areas for security teams to address: rolling out agentic AI to automate key functions within the SOC, securing against AI-powered attacks, and securely deploying AI within internal and customer-facing applications.
Expert Insights spoke with several leading industry experts and tracked multiple key panels on the subject of securing AI to bring actionable insights for your business. Here is a selection of the key perspectives we heard at RSAC this year.
BTW – To keep on top of our coverage around AI and cybersecurity, make sure to subscribe to Decrypted, the weekly cybersecurity newsletter from Expert Insights.
🤖 On Agentic AI And Automating Cybersecurity
- Rupesh Chokshi, SVP & GM of Application Security at Akamai: I do genuinely believe that AI and the models and the GenAI apps and agentic AI will be able to deliver business benefits. If you’re doing any repeated task, it can do it better and with more precision. But if you’re not thinking about security, if you’re not putting all the guardrails in place, then that becomes a deterrent. We want security to be an enabler.
- Patrick Joyce, Global Resident CISO at Proofpoint: From a security standpoint, we’re able to leverage LLMs and the massive data we have to help improve the performance of cyber. You’ll still need people in your SOC, but you’ll have agent-type automations in place that can do things much faster and better in a repetitive way than humans can.
- Benny Porat, CEO at Twine Security: At the moment, we’re in the “trust and verify” era; we cannot just hire AI agents that we trust to do the job and to replace us. That’s not going to happen soon, particularly because there are areas in the industry that we need to understand from an accountability and liability perspective. We are living in a world where we want people to be accountable for their actions, and as long as that’s the case, the jobs of the people are safe and we will always need them.
- Kara Sprague, CEO of HackerOne: I’m very, very excited about the potential impact of AI applied into this space, because most of the existing tools have the problem of false positives. We’ve got our most talented cybersecurity folks buried in the drudgery of digging through false positives and trying to figure out which ones they need to take action on and which ones they don’t. I think AI can have a huge impact in enabling those folks to be much more effective.
- Nicole Carignan, SVP of Security & AI Strategy and Field CISO at Darktrace: We’re starting to see the need for autonomous action more and more, because that isn’t a matter of human speed and it’s not a human versus AI issue; it’s the need for AI to be able to perform such advanced behavior analytics that it can defend against things that we don’t even know exist.
- Chas Clawson, Field CTO at Sumo Logic: AI will be embedded in every aspect of the investigation and the response lifecycle, from beginning to end. Yesterday it was human-led investigations, AI-assisted. Now, I would say tomorrow, or whenever tomorrow is, it could be literally tomorrow, it’s going to be AI-led and human-reviewed, where the AI does the bulk of the heavy lifting, the correlation, the alerting, and then the human comes over top and says, can I validate this? Do I want to click and take some autonomous response based on what this agent is telling me to do?
⚔️ On The Battle Between Adversary & Defender AI Use
- Jen Easterly (former CISA Director): AI will be the most powerful technology of our lifetime. It will change everything, the way we live, the way we work, the way we approach every single problem. In cybersecurity, I believe in a world where AI can be used to detect attacks before they occur, to deploy countermeasures in milliseconds and learn from every attempt to breach them…[but] AI that can protect can attack. AI that can prevent fraud can commit it. AI that can identify a vulnerability can exploit one.
- Rachel Jin, CTO at Trend Micro: The AI-generated threat will be big. Attackers can utilize AI to innovate in a lot of different ways. For example, they can generate very targeted spear phishing emails by using AI to get all the information they need about their victim from their socials. So, AI will definitely help hackers to improve their productivity, and also improve the quality of their attacks. But it’s not all bad, because we are also leveraging AI! It’s always like this: attackers evolve using some new technology, and we just need to be better than them.
- Deepen Desai, Chief Security Officer at Zscaler: Bad guys are already using AI to do a lot of their activity, whether it’s phishing, malware generation, exploitation, recon activity, post-infection activity as well.
- John Hultquist, Chief Analyst at Google Threat Intelligence Group: The adversary is going to figure out new ways to use AI. And it’s going to help them scale their operations and it’s going to make them better. And I think that we are, whether we like it or not, now in an arms race with them. So we’re going to have to get better. We’ve always, by the way, needed to get better. It’s not been easy. I think the adversaries always had sort of an advantage over us. But also AI could be the solution that we’ve been looking for.
🔐 On Security Risks Of AI
- Patrick Joyce, Global Resident CISO at Proofpoint: Most organizations have jumped into AI. The security [concern] is around how they’ve been deployed, how they’re being used, how information is being used, and how systems are being educated, informed, and trained. I think, more importantly, how secure is the LLM? What’s the change control around it? How is it managed, not just initially, but every second that it operates? Security around LLMs, security around the models themselves, is going to have to dramatically improve.
- Peter McKay, CEO, Snyk: Within six months you’re going to see a major attack on something critical, some big organization, a government or financial institution, because everybody is just saying, just get the benefits of AI, we’ll figure out security later. And this at a time when you get less regulation, less funding, and more cyber attacks; it’s the perfect storm.
- Ric Smith, President of Product, Technology, and Operations at SentinelOne: We never advise a customer to just turn on the automation and say good luck. It’s like jumping into a driverless car and ending up having to call support to get out. The reality of it is that you want to gain that trust. There’s always a human in the loop in the initial steps. And that’s what we advise customers to do until they gain trust that the system is actually doing what they believe it’s doing.
🔮 On The Future Of AI & Cybersecurity
- Rupesh Chokshi, SVP & GM of Application Security at Akamai: A lot of use cases are coming up with agentic AI, and that is going to heavily leverage the API infrastructure of companies. APIs are already front and center, and they’re going to be even more front and center. The industry as a whole is going to struggle a little bit, because how do you figure out if it’s a good agent or a bad one? The intensity is going to increase significantly, and a lot of it is happening at massive speed.
- Ben Kliger, Co-Founder, Zenity: AI agents are not going anywhere. They’re only going to be adopted even more. With a lot of macro-economic changes going on in the world, companies will look to be more efficient, to have a competitive edge. The way to do that is by adopting AI agents. So, I actually think it’s one of the things that are going to be more stable in our world.
- Simon Hunt, Chief Product Officer at Securonix: With the agentic push we’re in at the moment, we will go through this crazy hype cycle and then there’ll be a trough, but in that trough will be some very useful experiences. Next year, you’ll come to RSAC and it will all be about the experiences and the timesaving and the value that agentic AI creates, not about how the individual pieces of these solutions work.
- Deepen Desai, Chief Security Officer at Zscaler: There is a lot of potential in the agentic AI piece. I think it’s going to take a good couple of years to remove the noise and see real benefits. I think we’re still early on in many of these areas. So, in the next couple of years, I’m actually excited to see whether it delivers on the promises we have been seeing over the past few months.
- Nicole Carignan, SVP of Security & AI Strategy and Field CISO, Darktrace: I think we’re already almost there with a fully autonomous SOC analyst. Humans are always still going to be a part of it, but their roles are going to drastically change, and I think that’s really exciting. AI will offload SOC level one and two triaging, allowing humans to focus on strategic remediation, level three threat hunting, or proactive cyber resiliency tasks that have a bigger impact on risk reduction, especially as threat actors innovate with AI unsafely and unethically.
Subscribe To Expert Insights Decrypted
For more expert perspectives on the convergence between AI and cybersecurity, make sure to subscribe to Decrypted, the weekly cybersecurity newsletter from Expert Insights.
About Expert Insights
Expert Insights saves you time and hassle by rigorously analyzing cybersecurity solutions and cutting through the hype to deliver clear, actionable shortlists.
We specialize in cybersecurity. So, our focus is sharper, our knowledge is deeper and our insights are better. What’s more, our advice is completely impartial.
In a world saturated with information, we exist to arm experts with the insights they need to protect their organization.
That’s why over 1 million businesses have used us to inform their cybersecurity research.