The Enterprise Trust Gap: Why Companies Fear Losing Control Of AI

  • Writer: Kelsie Papenhausen
  • 2 hours ago
  • 4 min read

New research reveals Americans' biggest AI anxiety is related to regulation and control, mirroring enterprise challenges with shadow AI and governance.

 

A new study tracking public anxiety about artificial intelligence (AI) shows that Americans are more concerned with how to control AI than with losing their jobs to it. The joint study by Cybernews and nexos.ai analyzed five categories of AI-related concerns from January to October 2025 and found that searches related to governance and privacy far outpaced those about employment.

 

The findings reveal that “Control and regulation” was the category that evoked the most anxiety, with an average interest score of 27, followed closely by “Data and privacy” at 26. In contrast, “Job displacement and workforce impact” ranked last, despite a year marked by significant tech layoffs.

 

Žilvinas Girėnas, head of product at nexos.ai, an all-in-one AI platform for enterprises, explains the situation from a management perspective:

 

“Leaders are not necessarily afraid of AI itself, but rather of losing visibility into its operations. When teams adopt unapproved AI tools, companies lose track of what data is being used and where it’s going. Without visibility, you can’t manage risk or compliance,” he says.

 

The roots of AI anxiety

 

The study's findings highlight a growing global phenomenon known as “AI anxiety”: a collective psychological response to the rapid and pervasive integration of AI technologies into society. It stems from an understandable unease about the speed of AI development, its complexity, and the profound societal changes it brings.

 

A primary source of anxiety is the fear of AI systems evolving beyond human management and understanding. Many advanced AI models operate as “black boxes,” where the reasoning behind their outputs is not transparent. This opacity intensifies fears of losing control and undermines the public's sense of security, directly explaining why “Control and regulation” is the top concern.

 

AI systems also rely on vast amounts of personal data, often collected without explicit user consent from sources like social media, browsing history, and smart devices. This continuous data collection, combined with the risk of breaches, fuels fears of identity theft, financial loss, and the erosion of personal privacy, making it a close second in the anxiety rankings.

 

Furthermore, AI's ability to create highly realistic but fake content blurs the line between reality and fabrication, leading to mistrust and “reality apathy.” When AI is trained on biased data, it can learn and amplify societal stereotypes in critical areas, such as hiring and lending, causing anxiety about fairness and discrimination.

 

While job displacement ranked last in average concern, the anxiety associated with it is profound. It's not just about income. The threat of AI replacing cognitive roles can lead to a diminished sense of purpose, identity, and self-worth, which researchers term “existential anxiety.”

 

“These public fears are a rational response to the ‘black box’ nature of AI today. Organizations face the same challenge: when teams don’t really understand how AI works, confidence in the technology drops, and it can slow down AI adoption. The only way to innovate safely is to build a framework of trust, and that foundation is built on total visibility into your AI ecosystem,” says Girėnas.

 

The concrete risks of losing control

 

For companies, AI anxiety isn't theoretical. The fear of losing control translates into specific, measurable risks that impact revenue, reputation, and legal standing. McKinsey's latest research shows that as companies use AI, they are experiencing significant negative consequences:

  • Inaccuracy and reputational damage: Inaccuracy is the single most common negative consequence of AI use reported by companies. When employees use unvetted “shadow AI” tools, the risk of biased or nonsensical “hallucinated” outputs entering products or client communications increases.

  • Cybersecurity vulnerabilities: Over half (51%) of organizations using AI are actively working to mitigate cybersecurity risks, fearing that these tools can be exploited to leak sensitive corporate data or introduce malware.

  • Intellectual property (IP) infringement: Companies are deeply concerned that proprietary code, strategic plans, and trade secrets fed into unapproved public AI models will be absorbed and exposed. Organizations that use AI most heavily, so-called “AI high performers,” are more likely than others to report experiencing IP infringement as a negative consequence.

  • Regulatory compliance failures: Without centralized governance, it is nearly impossible to ensure that AI usage complies with regulations like the GDPR or the EU AI Act. This creates a significant risk of costly fines and legal action, a fear that 43% of organizations are actively trying to mitigate.


How to protect your business

According to Girėnas, easing anxiety around AI requires a proactive and structured approach. He offers four key tips for leaders to close the trust gap and innovate safely:

  1. Centralize governance. Establish a single set of rules governing the company's use of AI.

  2. Implement a “human-in-the-loop” protocol. Require a person to review and approve the AI's work before it is used in any critical function.

  3. Make AI governance a C-suite priority. The company's executives must lead the AI safety and governance efforts.

  4. Shift from restriction to visibility. Focus on identifying which AI tools your teams are using, rather than just trying to ban them.
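The “human-in-the-loop” protocol in step 2 can be sketched as a simple approval gate: AI output is never released to a critical function until a person has explicitly signed off. This is a minimal illustration, not the nexos.ai implementation; all names and the reviewer callback are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Draft:
    """An AI-generated output awaiting human review (illustrative type)."""
    content: str
    approved: bool = False

def human_in_the_loop(draft: Draft, reviewer_approves: Callable[[str], bool]) -> str:
    """Release AI output only after an explicit human sign-off.

    `reviewer_approves` stands in for the real review step (a ticket,
    a UI prompt, a code review); here it is just a callback.
    """
    if reviewer_approves(draft.content):
        draft.approved = True
        return draft.content
    # Rejected drafts never reach production systems.
    raise PermissionError("AI output rejected by human reviewer")

# Usage: a reviewer policy that blocks drafts containing sensitive terms.
draft = Draft("Quarterly summary generated by the model.")
text = human_in_the_loop(draft, lambda c: "confidential" not in c.lower())
```

The point of the pattern is that the gate sits between generation and use: whatever the review mechanism is in practice, nothing the model produces bypasses it.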


ABOUT NEXOS.AI 

nexos.ai is an all-in-one AI platform to drive secure, organization-wide AI adoption. Through a secure AI Workspace for employees and an AI Gateway for developers, nexos.ai enables companies to replace scattered AI tools with a unified interface that provides built-in guardrails, full visibility, and flexible access controls across all leading AI models — allowing teams to move fast while maintaining security and compliance. Headquartered in Vilnius, Lithuania, nexos.ai is backed by Evantic Capital, Index Ventures, Creandum, Dig Ventures, and a number of notable angels, including Olivier Pomel (CEO of Datadog), Sebastian Siemiatkowski (CEO of Klarna) through Flat Capital, Ilkka Paananen (CEO of Supercell), and Avishai Abrahami (CEO of Wix.com).
