The 2026 RSA circus is over. The tents are packed and the elephants have been loaded onto the train.
Man, it was an eventful week. There were fleets of vehicles (Escalades, Rivians, vans, but curiously, no Teslas) emblazoned with vendor names and taglines, and you couldn't walk anywhere near Howard Street in San Francisco without seeing "AI-[insert word here: enabled, enhanced, native, powered, etc., etc., etc.]"
I spent the week talking with CISOs, cybersecurity professionals, technology vendors, and service providers. Here are a few of my takeaways.
The CISO AI hierarchy is real
While every vendor was gaga about the AI opportunity, cybersecurity professionals' mood was one of trepidation. In fact, I came away with a profile of three distinct CISO archetypes:
The proactive CISO (roughly 20%): These security leaders were well aware of the AI-driven business and technology changes afoot and came armed with a list of questions tailored to their specific business requirements. Many of these executives brought along security engineers and architects, an action-oriented team. These CISOs had a good understanding of their organization's AI business initiatives, as well as their own security needs. The goal? Develop a shopping list that aligns with their organization's strategy and supports their governance models, policy enforcement controls, and security technology stacks.
The curious and confused CISO (roughly 40%): These executives know something is happening with AI in their organization, but they aren't sure what, where, or how much. Their goal was education: what risks they face, what risk mitigation steps they should take, and what's available from the industry to help them stop the bleeding. CISOs in this category are somewhat desperate for help.
The blissfully ignorant CISO (roughly 40%): Okay, this one is a bit unfair to CISOs, as it's more about their organizations. There's likely AI development and usage that the CISO, and probably some other executives, are unaware of. They approached RSA believing time was on their side, so they probably skimmed through the AI rhetoric, schmoozed with vendors, and looked for the best cocktail parties.
In my humble opinion, CISOs will cycle through this hierarchy quickly over the next year. Blissfully ignorant CISOs will get wind of AI initiatives at their organization and move on to curiosity and confusion. This won't take long. Proceeding from curious and confused to proactive will be the tougher transition. These CISOs must assess business goals, active initiatives, and user activities, then work with executives to develop a governance framework, create policies, implement guardrails, monitor activities, and manage a flexible model that keeps up with current and future business and technical requirements. A common analogy heard at RSA is that companies must be able to fix the airplane while it's in flight.
Legacy security vendors have the inside track on AI, for now
As far as AI technology consumption for cybersecurity goes, most CISOs I spoke with were open-minded while leaning toward their current vendors, at least in the short term. This may buy legacy security vendors some time, but not much.
Remember what happened in the cloud as we progressed from a lack of cloud trust, to "lift and shift," to cloud-native? The same thing is happening with AI, only even faster than the cloud. Bolting AI onto existing tools won't work for long; a year at most.
You've got to get the AI foundations right
I was encouraged to hear vendors describe how they started their AI transition by building an infrastructure foundation (data foundation/context engine, intelligent control plane, execution layer, services, guardrails, etc.) and then adding functional agents on top of this foundation. Cisco/Splunk impressed me with its development approach and roadmap, while AI-based startups such as Abstract, Crogl, and Sidekick are betting the farm on this approach.
AI code is making an impact
Vendors are also all-in on using AI development tools and are seeing strong results. I heard about project acceleration along with staff reduction. Building connectors is a good example: Axonius and Tenable, both known for broad technology integration, are using AI to offload much of this tedious but necessary work, freeing developers to work on functionality rather than plumbing.
AI pricing remains a mess
While AI capabilities appear to be baked into many tools, I found that no one knows how to price their AI services. Some are doing so by the token, some by the number of users, and some are charging by the agent. The market will flush this out over the rest of the year.
Application security is getting its AI makeover
We all know the impact of AI on software development. It's clear to me after RSA that the same thing is happening to application security. Anthropic's Claude Code Security is one example, but I also got a view of the AWS Security Agent, which provides software testing capabilities across the software development lifecycle: from design, to development, to runtime, to red teaming.
Likewise, I met with a company named XBow that focuses on autonomous offensive security based on AI agents. Based on these developments, we'll see a very different application security market at RSA 2027.
Few may be prepared for what comes next from cyber-adversaries
There's active debate in the industry about the impact of AI on the threat landscape: Are current cybersecurity defenses sufficient, or will AI tilt the battlefield toward adversaries?
After RSA, I believe both premises are true. Sophisticated firms with strong governance, risk management, asset visibility, modern training, and sound hygiene and posture management should be okay. Alarmingly, this is a small percentage of organizations. Most others lack advanced security technology and sufficient resources. Adversaries armed with AI tools and automated workflows will have a field day here.
Managed providers are advancing the AI SOC
Managed security service providers (MSSPs) and managed detection and response (MDR) vendors are pushing the envelope on the AI-enabled security operations center (SOC).
Arctic Wolf unveiled its Aurora Superintelligence Platform and the Aurora Agentic SOC, which includes agents for triage, alerting, investigations, and more. I also met with Ontinue, an MSSP that provides services on top of Microsoft security tools such as Defender for Endpoint, Defender for Azure, and MS Sentinel. It's using AI to establish what it calls "hyper-contextualization" to understand all it can about its customers' business processes and technology infrastructure so it can improve decision-making.
Microsoft cements its position
Speaking of Microsoft, it's hard to point to any other vendor that can match its cybersecurity coverage.
Unlike others, Microsoft came to RSA armed with AI metrics and proof points. For example, Microsoft presented specific metrics from several customers that turned on its Defender agents and saved hundreds of hours of work while improving accuracy and productivity. I'm sure Microsoft has many more examples to share.
Beware the cyber category killers
We've always seen cybersecurity through the lens of security product categories: EDR, firewalls, SIEM, CSPM, and so on. But multi-agent AI products could take on many of these tasks simultaneously, breaking down traditional product buckets and acting as category killers.
CISOs must anticipate this and be open to organizational, process, and budgetary changes. Also, will multi-agent cybersecurity products mean the death of the Gartner Magic Quadrant and all other me-too vendor mapping products?
Awareness training gradually transforms
Training is in transition, and I'm pleased with this development. Awareness training is being replaced by behavior monitoring and change. Human risk management (HRM) tools from Fable Security, KnowBe4, and Mimecast, among others, watch over users and provide a nudge when they go astray.
Beyond synthetic phishing, some tools even provide synthetic deepfake training. HRM sales are limited today to progressive organizations, but I believe they'll become a de facto standard as regulators and cyber-insurance companies see the light and support this training renaissance.
Security claims ownership of identities
Well, partial ownership, but this is a step in the right direction. I'm seeing interesting developments in areas such as passwordless authentication (I can't believe it's 2026 and we're still using passwords), browser security, non-human identity (NHI) security, and privileged account management.
RSA also pushed discussions about AI-agent access and action control: detection, monitoring, and control of shadow agents, zero-standing privilege, and so on. AI will be a big player here, helping to ease the painful identity modernization process.
As a cryptographer might say, with this article I've tried to hash the entire RSA event into a single key. I really enjoyed RSA 2026 (my twentieth) and look forward to next year. See you at the Moscone Center from April 5 through April 8, 2027.



