AI-assisted hacking is already here, Google warns

Google says it has identified what may be the first known case where cybercriminals used AI to discover and weaponize a previously unknown zero-day vulnerability.
Why it matters: Security researchers have long warned AI could one day accelerate cyberattacks. That day appears to be here.
Driving the news: Google's threat intelligence group said in a report Monday that it found evidence of several "prominent cyber crime threat actors" partnering to identify a bug in a Python script that would let them bypass two-factor authentication on a popular open-source system.
- The groups, which Google didn't identify, then used AI-assisted code to weaponize the previously unknown vulnerability, according to the report.
- The attempt to exploit the unidentified open-source system was thwarted, and Google said it has since disclosed the flaw to the vendor.
The intrigue: Google based its assessment on hallmarks of AI-generated code: overly explanatory comments, a fabricated severity rating for the bug and coding patterns typical of AI-generated Python scripts.
Threat level: Google warned that advanced AI models are getting better at finding subtle security weaknesses in software that conventional cybersecurity tools often fail to catch.
- In the zero-day example, the model appeared to identify a hidden trust assumption in the software's login logic that could be exploited to bypass two-factor authentication protections.
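Google's report doesn't publish the vulnerable code, but the pattern it describes — login logic that silently trusts something an attacker can control — is a well-known class of bug. Here's a minimal, entirely hypothetical Python sketch of what such a hidden trust assumption can look like, alongside a hardened version; the function names, flags and flow are illustrative, not the actual flaw:

```python
# Hypothetical sketch of a "hidden trust assumption" 2FA bypass.
# Not the actual vulnerability Google disclosed.

def login_vulnerable(password_ok: bool, otp_ok: bool, request: dict) -> bool:
    # Hidden trust assumption: "trusted_device" arrives straight from the
    # client, so an attacker who knows the password can set it themselves
    # and skip the one-time-password check entirely.
    if request.get("trusted_device"):
        return password_ok
    return password_ok and otp_ok


def login_fixed(password_ok: bool, otp_ok: bool, request: dict,
                server_trusted_devices: set) -> bool:
    # Fix: honor device trust only if the server itself previously
    # recorded this device ID as trusted; ignore client-supplied claims.
    if request.get("device_id") in server_trusted_devices:
        return password_ok
    return password_ok and otp_ok


# Attacker knows the password (password_ok=True) but not the OTP
# (otp_ok=False), and simply claims to be on a trusted device:
attack = {"trusted_device": True, "device_id": "attacker-box"}
print(login_vulnerable(True, False, attack))                 # True  -> 2FA bypassed
print(login_fixed(True, False, attack, server_trusted_devices=set()))  # False -> bypass blocked
```

The point of the sketch is that nothing here is a syntax error or a crash: the vulnerable version is perfectly valid code whose flaw lives in an implicit assumption about who sets the flag, which is exactly the kind of subtle logic weakness conventional scanners tend to miss.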
What they're saying: "There's a misconception that the AI vulnerability race is imminent," John Hultquist, chief analyst at Google's threat intelligence group, said in a statement. "The reality is that it's already begun."
- "For every zero-day we can trace back to AI, there are probably many more out there," he added.
The big picture: The AI-assisted exploit was one of several cases Google uncovered in recent months highlighting growing interest among both cybercriminals and nation-state hackers in using AI to supercharge attacks.
- North Korean and Chinese state actors are experimenting with AI in a variety of ways to exploit vulnerabilities, according to the report.
- In one case, researchers found APT45, a North Korean military group, using AI to test and validate thousands of exploits targeting software flaws.
- Google also uncovered malware, dubbed PromptSpy, that uses Gemini to autonomously navigate Android devices by interpreting on-screen activity and generating commands in real time.
What to watch: U.S. AI companies are increasingly grappling with how to prevent their more sophisticated AI models from being abused by cybercriminals and state-backed hackers.
