HackerOne: 540% Rise In Prompt Injection Vulnerabilities In One Year

Published on Oct 1, 2025
Joel Witts Written by Joel Witts
CISA Directive Requires Feds To Mitigate Cisco Zero-Days By EOD Tomorrow

HackerOne has warned that AI risks are accelerating faster than ever, with valid AI-related vulnerability reports up 210% year over year and prompt-injection findings soaring 540%.

HackerOne’s annual Hacker-Powered Security Report pulls together data from over 1,820 researchers, 99 customers, and 580,000 vulnerabilities found by threat hunters.  

Attacks targeting production AI are surging, led by a 540% rise in prompt-injection vulnerabilities — the platform’s fastest-growing AI attack vector.

Prompt injection attacks target AI models with malicious instructions, exploiting their eagerness to execute user inputs. 

Cases of sensitive-information disclosure rose 152% YoY (more than doubled), highlighting the risk of AI systems leaking sensitive data.

Nevertheless, AI adoption has skyrocketed in the last 12-months. Over the last 12-months, organizations have expanded AI program adoption by 270%. 

“AI demands a different approach to risk and resilience,” said Kara Sprague, CEO of HackerOne.

“AI vulnerabilities increased by more than 200% this year, while enterprises expanded AI security initiatives at nearly three times last year’s pace.”

Other key takeaways include:

  • 70% of vulnerability researchers are now using AI to hunt for vulnerabilities
  • HackerOne’s bug bounty programs paid out $81 million USD last year
  • Autonomous AI agents submitted over 560+ valid vulnerability reports
  • The average bounty HackerOne paid was $1,090 (+4%)

HackerOne threat researchers are increasingly using AI themselves. Everything from report writing, to data summarization to automated reconnaissance is being assisted by AI tools. 

“Researchers are experimenting broadly, combining web-based LLMs, locally hosted models and custom-built offensive tools to fit their workflows,” HackerOne notes.

There has also been a rise in fully automated threat research. Six unique “hackbots” submitted reports to HackerOne in the last year. Over 560 valid reports were submitted. 

“The future is a symbiosis between hackers and AI. Hackbots can replace the boring repetitive work so humans can focus on creativity and new research,” said Andre Baptista, long-time hacker, co–founder of Ethiack.

HackerOne has also launched their own agentic AI system tool for threat exposure management.

HackerOne’s report makes clear that AI is rewriting the rules of cybersecurity; the attack surface is expanding at record speed, and both attackers and defenders are arming themselves with AI.

The lesson? Organizations can’t treat AI as “just another IT asset.” Prompt injection, data leakage, and insecure integrations are already slipping past traditional controls. To stay ahead, security teams should:

  • Bring AI into scope now – treat LLMs, plugins, and MCP servers as high-risk systems.
  • Leverage researcher expertise – crowdsource AI red teaming instead of relying only on in-house testing.
  • Balance automation with human oversight – AI tools accelerate detection, but human judgment is still critical for complex flaws.

As HackerOne CEO Kara Sprague put it, those that thrive will be the ones that “evolve with AI and tap into the expertise of security researchers.”