25% Of Employees Are Using Unapproved AI Apps At Work

Latest 1Password report reveals worrying habits of IT workers.

Published on Oct 30, 2025
Written by Mirren McDade

A new survey of 5,000 workers worldwide has revealed that 1-in-4 employees are using AI apps at work with no control or oversight from IT teams.

This finding comes from 1Password’s 2025 Annual Report, which assesses the rapid adoption of generative AI and the strain it is having on identity security.

The report highlights the widening gap between enterprise control over IT apps and how employees use AI in their workflow to be as effective and efficient as they can. This disparity is known as “Shadow AI”.

It poses a significant threat for organizations as IT teams are unable to monitor and manage services they don’t know about. Any tools used for work purposes should be monitored, managed, and patched by your IT leads.

When employees use AI tools that are not approved by their organization, they may inadvertently share sensitive company or customer data in the process. For example, employees may well enter sales figures or targets into an AI tool to calculate projected revenue, without considering that this is sensitive information. How secure is that information if the AI tool is hacked?

Even when AI policies exist, awareness is uneven. Only 6% of IT and security professionals say their organization lacks an AI policy, but 16% of general employees say the same. This highlights a communication gap within organizations.

Jacob DePriest, CISO & CIO at 1Password, said: “The various ways employees are using AI are growing faster than policies can keep up with. Security and privacy teams need the tools to help them understand the usage, put in place controls, and grow with the adoption of these technologies.”

Generative AI is now evolving into agentic AI, where autonomous agents act across systems on behalf of employees. While this presents new opportunities for productivity, it raises the question: who is responsible for what these agents do?

“I think [AI] is a really challenging problem right now,” Mark Hillick, CISO at Brex, said in a recent CISO roundtable hosted by 1Password. “I don’t think there is any one solution out there that’s absolutely perfect, and what I would say is, a lot of the solutions that are being proposed and developed, the more that we’ve looked into them, they are not really ready for enterprise.”

SaaS sprawl, device sprawl, and identity sprawl already leave many organizations with gaps in their governance pipelines. With AI agents entering this ecosystem, they run the risk of exacerbating these vulnerabilities if security controls fail to evolve simultaneously.

The SSO Gap

The report also underscores persistent challenges with enterprise SSO and shadow IT. More than half of employees (52%) have downloaded work-related applications without IT approval.

Even sanctioned apps are frequently mismanaged. SSO tools protect only 66% of applications on average, leaving roughly a third ungoverned. Poor offboarding and unmanaged access mean that 38% of employees have accessed accounts from previous employers.

Credential hygiene remains a pressing concern, with nearly half of security leaders citing compromised passwords as a leading cause of breaches and a concerning two-thirds of employees admitting to partaking in unsafe practices like password reuse or sharing credentials via messaging apps.

In response to these behaviors, organizations are increasingly adopting passkeys. 89% of IT and security professionals encourage their use, and 41% of employees have already switched where available.

Endpoint security also faces significant limitations. While managed devices form the backbone of enterprise security, 75% of CISOs say MDM does not fully protect devices, and a majority of employees use personal devices for work at least occasionally, often circumventing security policies.

Closing the Access-Trust Gap

Solving these issues requires more than blocking tools or restricting access. Organizations must also take steps to implement policies that balance security with productivity, extend governance across SaaS, devices, and identities, and prepare for the next wave of agentic AI. 

“I think you’ve got to try really hard to be the security guy that doesn’t say no all the time,” said Mark Hazelton, CSO at Oracle Red Bull Racing. “You’ve got to find ways to say yes, or find ways to say no that sound like yes … Enable the things you can, and be careful of the things you can’t.”

In an interview with Expert Insights, CISO Nick Mistry recommends that teams “establish governance for AI now. Require AI-BOMs from vendors, define policy guardrails for internal AI use, and use security telemetry to monitor how AI-generated components behave in your environment.”

“Learn the distinctions between LLMs, fine-tuned models, and agentic AI systems, because each introduces unique risk profiles … You need to understand the full software lifecycle, the mechanics of AI, and how automation changes your defensive posture.”
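As a rough illustration of the telemetry-driven monitoring Mistry describes, the sketch below scans proxy-style log lines for traffic to known generative-AI services that are not on an approved list. Every domain, log field, and function name here is hypothetical for illustration only; it is not from the 1Password report or any specific product.

```python
# Hypothetical sketch: flag shadow-AI traffic in simple "timestamp,user,domain"
# proxy log lines. The domain lists below are invented placeholders.

APPROVED_AI_DOMAINS = {"chat.example-approved.com"}
KNOWN_AI_DOMAINS = {
    "chat.example-approved.com",
    "api.example-llm.io",
    "gen.example-ai.net",
}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs for AI traffic outside the approved list."""
    findings = []
    for line in log_lines:
        try:
            _, user, domain = line.strip().split(",")
        except ValueError:
            continue  # skip malformed log lines
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            findings.append((user, domain))
    return findings

logs = [
    "2025-10-30T09:00,alice,chat.example-approved.com",  # sanctioned tool
    "2025-10-30T09:05,bob,api.example-llm.io",           # unapproved tool
]
print(flag_shadow_ai(logs))  # -> [('bob', 'api.example-llm.io')]
```

In practice this kind of check would feed a SIEM or DLP pipeline rather than a print statement, and the domain inventory would need continual updating as new AI services appear.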

Early awareness, comprehensive monitoring, and employee empowerment are essential to bridging the Access-Trust Gap and maintaining both security and business outcomes.


Key Takeaways For IT Professionals

  • AI adoption is outpacing policy; shadow AI is a growing risk that can expose sensitive data
  • SaaS sprawl and unmanaged apps leave gaps that SSO alone cannot address
  • Credential hygiene remains poor, making passkeys and phishing-resistant authentication critical
  • Endpoint security is incomplete; personal and unmanaged devices introduce significant vulnerabilities
  • CISOs must balance security and productivity, crafting policies that employees understand and follow while preparing for agentic AI adoption