Cybersecurity researchers have disclosed a novel name confusion attack called whoAMI that allows anyone who publishes an Amazon Machine Image (AMI) with a specific name to gain code execution within an Amazon Web Services (AWS) account.
"If executed at scale, this attack could be used to gain access to thousands of accounts," Datadog Security Labs researcher Seth Art said in a report shared with The Hacker News. "The vulnerable pattern can be found in many private and open source code repositories."
At its heart, the attack is a subset of a supply chain attack that involves publishing a malicious resource and tricking misconfigured software into using it instead of the legitimate counterpart.

The attack exploits the fact that anyone can publish an AMI, which refers to a virtual machine image that's used to boot up Elastic Compute Cloud (EC2) instances in AWS, to the community catalog, and the fact that developers may omit the "--owners" attribute when searching for one via the ec2:DescribeImages API.
Put differently, the name confusion attack requires the below three conditions to be met when a victim retrieves the AMI ID via the API (a minimal sketch of the vulnerable lookup follows the list):
- Use of the name filter,
- A failure to specify either the owner, owner-alias, or owner-id parameters, and
- Fetching the most recently created image from the returned list of matching images ("most_recent=true")
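As a rough illustration, the sketch below reproduces that vulnerable pattern in Python with boto3; the name pattern and sorting logic are illustrative assumptions, not code from the Datadog report. Because the call filters only on the image name and then takes the newest match, a look-alike public AMI published later would win the lookup.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# VULNERABLE: filters only on the AMI name, with no Owners restriction,
# so the results include public community AMIs from any AWS account.
response = ec2.describe_images(
    Filters=[
        {"Name": "name", "Values": ["ubuntu/images/*22.04-amd64-server-*"]},
    ],
)

# Equivalent of "most_recent=true": sort by creation date and take the
# newest image, which is exactly where an attacker's fresher doppelganger
# AMI would slot in.
images = sorted(response["Images"], key=lambda i: i["CreationDate"], reverse=True)
ami_id = images[0]["ImageId"]
print(ami_id)
```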
This leads to a scenario in which an attacker can create a malicious AMI with a name that matches the pattern specified in the search criteria, resulting in the creation of an EC2 instance that uses the threat actor's doppelgänger AMI.
This, in turn, grants remote code execution (RCE) capabilities on the instance, allowing the threat actors to initiate various post-exploitation actions.
All an attacker needs is an AWS account to publish their backdoored AMI to the public Community AMI catalog and to choose a name that matches the AMIs sought by their targets.
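Publishing to the Community AMI catalog amounts to a single API call: granting launch permission to the "all" group makes an image public. A minimal sketch under that assumption (the image ID below is a placeholder, and the AMI would have been registered with a name crafted to match the victim's search pattern):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical attacker-owned image ID (placeholder).
BACKDOORED_AMI_ID = "ami-0123456789abcdef0"

# Granting launch permission to the "all" group lists the AMI in the
# public Community AMI catalog, where unscoped DescribeImages calls
# like the one above will find it.
ec2.modify_image_attribute(
    ImageId=BACKDOORED_AMI_ID,
    LaunchPermission={"Add": [{"Group": "all"}]},
)
```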
"It is very similar to a dependency confusion attack, except that in the latter, the malicious resource is a software dependency (such as a pip package), whereas in the whoAMI name confusion attack, the malicious resource is a virtual machine image," Art said.
Datadog said roughly 1% of organizations monitored by the company were affected by the whoAMI attack, and that it found public examples of code written in Python, Go, Java, Terraform, Pulumi, and Bash shell using the vulnerable criteria.
Following responsible disclosure on September 16, 2024, the issue was addressed by Amazon three days later. When reached for comment, AWS told The Hacker News that it found no evidence the technique was abused in the wild.
"All AWS services are operating as designed. Based on extensive log analysis and monitoring, our investigation confirmed that the technique described in this research has only been executed by the authorized researchers themselves, with no evidence of usage by any other parties," the company said.

"This technique could affect customers who retrieve Amazon Machine Image (AMI) IDs via the ec2:DescribeImages API without specifying the owner value. In December 2024, we launched Allowed AMIs, a new account-wide setting that enables customers to limit the discovery and use of AMIs within their AWS accounts. We recommend customers evaluate and implement this new security control."
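Beyond the account-wide Allowed AMIs setting, the direct fix for the vulnerable lookup shown earlier is to pin the image owner in the query itself. A hardened variant of the same sketch, using the "amazon" owner alias and an Amazon Linux name pattern purely for illustration (in practice, pin the specific account ID of your trusted publisher):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# SAFER: Owners restricts the results to a trusted publisher, so a
# look-alike AMI published from an arbitrary account can never match.
response = ec2.describe_images(
    Owners=["amazon"],  # or a specific trusted account ID
    Filters=[
        {"Name": "name", "Values": ["al2023-ami-2023*-x86_64"]},
    ],
)

images = sorted(response["Images"], key=lambda i: i["CreationDate"], reverse=True)
print(images[0]["ImageId"])
```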
As of last November, HashiCorp Terraform has started issuing warnings to users when "most_recent = true" is used without an owner filter, starting with terraform-provider-aws version 5.77.0. The warning diagnostic is expected to be upgraded to an error effective version 6.0.0.