AI agents are supposed to make work easier. But they're also creating a whole new class of security nightmares.
As companies deploy AI-powered chatbots, agents, and copilots across their operations, they're facing a new risk: How do you let employees and AI agents use powerful AI tools without accidentally leaking sensitive data, violating compliance rules, or opening the door to prompt injection attacks? WitnessAI just raised $58 million to find a solution, building what they call "the confidence layer for enterprise AI."
Today on TechCrunch's Equity podcast, Rebecca Bellan was joined by Barmak Meftah, co-founder and partner at Ballistic Ventures, and Rick Caccia, CEO of WitnessAI, to discuss what enterprises are actually worried about, why AI security will become an $800 billion to $1.2 trillion market by 2031, and what happens when AI agents start talking to other AI agents without human oversight.
Listen to the full episode to hear:
- How enterprises accidentally leak sensitive data through "shadow AI" usage.
- What CISOs are worried about right now, how the problem has evolved rapidly over the past 18 months, and what it will look like over the next 12 months.
- Why they think traditional cybersecurity approaches won't work for AI agents.
- Real examples of AI agents going rogue, including one that threatened to blackmail an employee.
Subscribe to Equity on YouTube, Apple Podcasts, Overcast, Spotify, and wherever you get your podcasts. You can also follow Equity on X and Threads, at @EquityPod.
