When ChatGPT first got here out, I requested a panel of CISOs what it meant for his or her cybersecurity applications. They acknowledged impending modifications, however mirrored on previous disruptive applied sciences, like iPods, Wi-Fi entry factors, and SaaS functions coming into the enterprise. The consensus was that security AI can be an analogous disrupter, so that they agreed that 80% (or extra) of AI security necessities had been already in place. Safety fundamentals similar to robust asset stock, knowledge security, identification governance, vulnerability administration, and so forth, would function an AI cybersecurity basis.
Quick-forward to 2025, and my CISO associates had been proper — kind of. It’s true {that a} sturdy and complete enterprise security program acts as an AI security anchor, however the different 20% is more difficult than first imagined. AI functions are quickly increasing the assault floor whereas additionally extending the assault floor to third-party companions, in addition to deep inside the software program provide chain. This implies restricted visibility and blind spots. AI is usually rooted in open supply and API connectivity, so there’s seemingly shadow AI exercise in all places. Lastly, AI innovation is shifting quickly, making it laborious for overburdened security groups to maintain up.
Except for the technical points of AI, it’s additionally price noting that many AI initiatives finish in failure. In line with analysis from S&P International Market Intelligence, 42% of companies shut down most of their AI initiatives in 2025 (in comparison with 17% in 2024). Moreover, almost half (46%) of companies are halting AI proof-of-concepts (PoCs) earlier than they even attain manufacturing.
Why achieve this many AI initiatives fail? Business analysis factors to value, poor knowledge high quality, lack of governance, expertise gaps, and scaling points, amongst others.
With initiatives failing and a potpourri of security challenges, organizations have a protracted and rising to-do listing in relation to guaranteeing a strong AI technique for innovation and security. Once I meet my CISO amigos as of late, they typically stress the next 5 priorities:
1. Begin all the things with a powerful governance mannequin
To be clear, I’m not speaking about know-how or security alone. In reality, the AI governance mannequin should start with alignment between enterprise and know-how groups on how and the place AI can be utilized to assist the organizational mission.
To perform this, CISOs ought to work with CIO counterparts to teach enterprise leaders, in addition to enterprise capabilities similar to authorized groups, finance, and many others., to ascertain an AI framework that helps enterprise wants and technical capabilities. Frameworks ought to observe a lifecycle from conception to manufacturing, and embrace moral concerns, acceptable use insurance policies, transparency, regulatory compliance, and (most significantly) success metrics.
On this effort, CISOs ought to evaluate current frameworks such because the NIST AI Danger Administration Framework, ISO/IEC 42001:2023, UNESCO suggestions on the ethics of synthetic intelligence, and the RISE (analysis, implement, maintain, consider) and CARE (create, undertake, run, evolve) frameworks from RockCyber. Enterprises could must create a “better of” framework that matches their particular wants.
2. Develop a complete and steady view of AI dangers
Getting a deal with on organizational AI dangers begins with the fundamentals, similar to an AI asset stock, software program payments of fabric, vulnerability and publicity administration greatest practices, and an AI threat register. Past fundamental hygiene, CISOs and security professionals should perceive the advantageous factors of AI-specific threats similar to mannequin poisoning, knowledge inference, immediate injection, and many others. Menace analysts might want to sustain with rising ways, methods, and procedures (TTPs) used for AI assaults. MITRE ATLAS is an effective useful resource right here.
As AI functions lengthen to 3rd events, CISOs will want tailor-made audits of third-party knowledge, AI security controls, provide chain security, and so forth. Safety leaders should additionally take note of rising and infrequently altering AI laws. The EU AI Act is essentially the most complete thus far, emphasizing security, transparency, non-discrimination, and environmental friendliness. Others, such because the Colorado Synthetic Intelligence Act (CAIA), could change quickly as client response, enterprise expertise, and authorized case legislation evolves. CISOs ought to anticipate different state, federal, regional, and business laws.
3. Take note of an evolving definition of information integrity
You’d suppose this could be apparent, as confidentiality, integrity, and availability make up the cybersecurity CIA triad. However within the infosec world, knowledge integrity has targeted on points similar to unauthorized knowledge modifications and knowledge consistency. These protections are nonetheless wanted, however CISOs ought to broaden their purview to incorporate the information integrity and veracity of the AI fashions themselves.
As an example this level, listed below are some notorious examples of information mannequin points. Amazon created an AI recruiting instrument to assist it higher type by resumes and select essentially the most certified candidates. Sadly, the mannequin was principally skilled with male-oriented knowledge, so it discriminated towards girls candidates. Equally, when the UK created a passport picture checking software, its mannequin was skilled utilizing folks with white pores and skin, so it discriminated towards darker skinned people.
AI mannequin veracity isn’t one thing you’ll cowl as a part of a CISSP certification, however CISOs should be on prime of this as a part of their AI governance tasks.
4. Try for AI literacy in any respect ranges
Each worker, associate, and buyer might be working with AI at some degree, so AI literacy is a excessive precedence. CISOs ought to begin in their very own division with AI fundamentals coaching for all the security workforce.
Established safe software program improvement lifecycles must be amended to cowl issues similar to AI menace modeling, knowledge dealing with, API security, and many others. Builders must also obtain coaching on AI improvement greatest practices, together with the OWASP Prime 10 for LLMs, Google’s Safe AI Framework (SAIF), and Cloud Safety Alliance (CSA) Steering.
Finish consumer coaching ought to embrace acceptable use, knowledge dealing with, misinformation, and deepfake coaching. Human threat administration (HRM) options from distributors similar to Mimecast could also be essential to sustain with AI threats and customise coaching to completely different people and roles.
5. Stay cautiously optimistic about AI know-how for cybersecurity
I’d categorize as we speak’s AI security know-how as extra “driver help,” like cruise management, than autonomous driving. Nonetheless, issues are advancing shortly.
CISOs ought to ask their workers to establish discrete duties, similar to alert triage, menace looking, threat scoring, and creating experiences, the place they may use some assist, after which begin to analysis rising security improvements in these areas.
Concurrently, security leaders ought to schedule roadmap conferences with main security know-how companions. Come to those conferences ready to debate particular wants moderately than sit by pie-in-the-sky PowerPoint displays. CISOs must also ask distributors immediately about how AI might be used for current know-how tuning and optimization. There’s a whole lot of innovation occurring, so I consider it’s price casting a large internet throughout current companions, opponents, and startups.
A phrase of warning nonetheless, many AI “merchandise” are actually product options, and AI functions are useful resource intensive and costly to develop and function. Some startups might be acquired however many could burn out shortly. Caveat emptor!
Alternatives forward
I’ll finish this text with a prediction. About 70% of CISOs report back to CIOs as we speak. I consider that as AI proliferates, CISOs reporting constructions will change quickly, with extra reporting on to the CEO. People who take a management position in AI enterprise and know-how governance will seemingly be the primary ones promoted.



