2024 has been a banner yr for synthetic intelligence (AI). As enterprises ramp up adoption, nevertheless, malicious actors have been exploring new methods to compromise programs with clever assaults.
With the AI panorama quickly evolving, it’s value wanting again earlier than transferring ahead. Listed below are our prime 5 AI security tales for 2024.
Are you able to hear me now? Hackers hijack audio with AI
Attackers can pretend complete conversations utilizing massive language fashions (LLMs), voice cloning and speech-to-text software program. This technique is comparatively simple to detect, nevertheless, so researchers at IBM X-Drive carried out an experiment to find out if elements of a dialog will be captured and changed in real-time.
They found that not solely was this attainable, however comparatively simple to realize. For the experiment, they used the key phrase “checking account” — every time the speaker stated checking account, the LLM was instructed to exchange the acknowledged checking account quantity with a pretend one.
The restricted use of AI made this method arduous to identify, providing a approach for attackers to compromise key knowledge with out getting caught.
Mad minute: New security instruments detect AI assaults in lower than 60 seconds
Lowering ransomware danger stays a prime precedence for enterprise IT groups. Generative AI (gen AI) and LLMs are making this tough, nevertheless, as attackers use generative options to craft phishing emails and LLMs to hold out primary scripting duties.
New security instruments, corresponding to cloud-based AI security and IBM’s FlashCore Module, supply AI-enhanced detection that helps security groups detect potential assaults in lower than 60 seconds.
Discover AI cybersecurity options
Pathways to safety — mapping the influence of AI assaults
The IBM Institute for Enterprise Worth discovered that 84% of CEOs are involved about widespread or catastrophic assaults tied to gen AI.
To assist safe networks, software program and different digital belongings, it’s crucial for corporations to grasp the potential influence of AI assaults, together with:
- Immediate injection: Attackers create malicious inputs that override system guidelines to hold out unintended actions.
- Data poisoning: Adversaries tamper with coaching knowledge to introduce vulnerabilities or change mannequin conduct.
- Mannequin extraction: Malicious actors examine the inputs and operations of an AI mannequin after which try to copy it, placing enterprise IP in danger.
The IBM Framework for Securing AI may also help prospects, companions and organizations worldwide higher map the evolving risk panorama and determine protecting pathways.
ChatGPT 4 rapidly cracks one-day vulnerabilities
The dangerous information? In a examine utilizing 15 one-day vulnerabilities, security researchers discovered that ChatGPT 4 may accurately exploit them 87% of the time. The one-day points included weak web sites, container administration software program instruments and Python packages.
The higher information? ChatGPT 4 assaults had been far simpler when the LLM had entry to the CVE description. With out this knowledge, assault efficacy fell to only 7%. It’s additionally value noting that different LLMs and open-source vulnerability scanners had been unable to take advantage of any one-day points, even with the CVE knowledge.
NIST report: AI vulnerable to immediate injection hacks
A latest NIST report — Adversarial Machine Studying: A Taxonomy and Terminology of Attacks and Mitigations — discovered that immediate injection poses severe dangers for giant language fashions.
There are two forms of immediate injection: Direct and oblique. In direct assaults, cyber criminals enter textual content prompts that result in unintended or unauthorized actions. One well-liked immediate injection technique is DAN, or Do Something Now. DAN asks AI to “roleplay” by telling ChatGPT fashions they’re now DAN, and DAN can do something, together with perform legal actions. DAN is now on at the very least model 12.0.
Oblique assaults, in the meantime, deal with offering compromised supply knowledge. Attackers create PDFs, net pages or audio recordsdata which are ingested by LLMs, in flip altering AI output. As a result of AI fashions depend on steady ingestion and analysis of knowledge to enhance, oblique immediate injection is commonly thought of gen AI’s largest security flaw since there aren’t any simple methods to search out and repair these assaults.
All eyes on AI
As AI moved into the mainstream, 2024 noticed a major uptick in security considerations. With gen AI and LLMs persevering with to evolve at a breakneck tempo, 2025 guarantees extra of the identical, particularly as enterprise adoption continues to rise.
The end result? Now greater than ever, it’s crucial for corporations to maintain their eyes on AI options, and maintain their ears to the bottom for the newest in clever security information.