The unseen attack vector: Model drift and shadow AI
The newest and most significant threats in our extended supply chain are now entirely digital and nearly invisible to traditional controls. I'm not talking about a simple phishing attack or an unpatched server. I'm talking about risks embedded in the very fabric of our vendors' operations through GenAI adoption.
First, consider shadow AI. Your key software vendor is using a public LLM to rapidly generate new code for your core product. They didn't tell you, because it sped up their delivery timeline. But now that model's proprietary training data, potentially scraped from compromised sources, is woven into your production environment. If a third-party developer incorporates noncompliant code from an LLM, your enterprise is immediately exposed to intellectual property, licensing and security risks that existing due diligence contracts simply can't catch (see the discussion of AI-generated liabilities in the Journal of AI Risk).
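One concrete control worth piloting is an automated license audit over vendor-delivered source before it merges. The sketch below is illustrative only: the allowlist, the file glob and the assumption that vendors declare licenses via SPDX headers are all hypothetical choices, not any vendor's actual process.

```python
import re
import sys
from pathlib import Path

# Illustrative allowlist; a real policy would come from legal review.
ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}
SPDX_PATTERN = re.compile(r"SPDX-License-Identifier:\s*([\w.\-+]+)")

def audit_tree(root: str) -> list[tuple[Path, str]]:
    """Flag files whose SPDX tag is missing or outside the allowlist."""
    findings = []
    for path in Path(root).rglob("*.py"):  # assumed file type for this sketch
        text = path.read_text(errors="ignore")
        match = SPDX_PATTERN.search(text)
        license_id = match.group(1) if match else "MISSING"
        if license_id not in ALLOWED_LICENSES:
            findings.append((path, license_id))
    return findings

if __name__ == "__main__":
    for path, license_id in audit_tree(sys.argv[1]):
        print(f"{path}: {license_id}")
```

Note what this can and cannot do: it catches undeclared or disallowed licenses, but it cannot detect an undeclared LLM-derived snippet that carries no header at all. That gap is exactly why contractual disclosure of GenAI use belongs in vendor agreements.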
Second, we must acknowledge model drift. A vendor's core business logic, such as fraud detection or optimization, may rely on a deployed AI model. Over time, that model's behavior can drift due to subtle changes in its operating environment or data flow, potentially exposing confidential data or introducing biases that violate new regulatory requirements. This is a subtle systemic risk that an annual audit can't flag. CISOs need to understand that the supply chain risk surface is now fluid, defined by the behavior of external algorithms, not just external firewalls.
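To make "drift" tangible, here is a minimal monitoring sketch using the population stability index (PSI), one common drift statistic. It assumes you can obtain periodic score samples from the vendor's model; the distributions, bin count and alert threshold below are illustrative, not a prescription.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               live: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a baseline and a live sample of model scores.

    Common heuristic (illustrative, not a standard): PSI above ~0.25
    suggests the deployed model's behavior has shifted enough to review.
    """
    edges = np.histogram_bin_edges(baseline, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range live values
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Clip to avoid division by zero / log(0) on empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

# Hypothetical usage: last quarter's fraud scores vs. this week's.
baseline = np.random.default_rng(0).normal(0.30, 0.10, 10_000)
live = np.random.default_rng(1).normal(0.45, 0.12, 10_000)
print(f"PSI: {population_stability_index(baseline, live):.3f}")
```

Run continuously, a check like this turns the annual-audit blind spot into a weekly signal, which is the operational point: fluid risks need fluid controls.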