Beyond The Firewall: Prompt-Driven Data Leaks Emerge As Top AI Threat In Finance
- Finopotamus Staff
- Sep 3, 2025
- 4 min read
As 90% of financial and insurance institutions deploy AI for critical operations, new analysis reveals employee queries are leaking sensitive data and bypassing traditional security.
As the financial and insurance industries race toward an estimated $97 billion in artificial intelligence (AI) spending by 2027, up from $35 billion in 2023, they face some of the highest AI risk exposures of any sector. Cybernews research into S&P 500 companies identified 158 AI-related threats, including data leakage, algorithmic bias, insecure outputs, and model evasion attacks. Each of these poses significant risks to customer trust and compliance with regulations.
These risks are heightened by the industry’s data-heavy, language-focused operations. The very features that make AI so effective also make financial services a prime target. The threat landscape has worsened significantly, with trade in deepfake tools increasing by 223% in early 2024. This surge has fueled scams that have already resulted in tens of millions of dollars in fraudulent transactions.
At the same time, financial institutions are rapidly changing their operations. With up to 39% of tasks in banking, insurance, and capital markets being highly automatable, AI adoption is moving faster than security systems designed for traditional IT. This creates a perfect storm of opportunity and risk.
Emanuelis Norbutas, chief technology officer at nexos.ai, an all-in-one AI platform, warns that the main danger isn't how quickly AI is adopted, but that financial institutions are integrating it into legacy systems that were never designed for such scale. “A single faulty model in lending, claims, or trading can result in regulatory breaches, harm to reputation, and market instability. Without proper governance, these risks turn into systemic issues instead of isolated ones.”
The Cybernews findings show how these threats could undermine the industry’s foundations. Algorithmic bias can distort lending decisions, while model evasion attacks aim to bypass fraud detection systems that safeguard trillions in daily transactions. Financial leaders are increasingly raising concerns about misinformation and market manipulation, which are now central topics in boardroom discussions.
With nearly 90% of financial institutions already using AI for fraud detection and other critical processes, the technology is too deeply embedded to reverse course. The challenge has shifted: It’s no longer about deciding whether to use AI, but about securing it fast enough to preserve customer trust, regulatory compliance, and market stability.
The four horsemen of financial AI risk
Financial institutions that rely heavily on AI face four interrelated, rapidly escalating risks: data leakage, algorithmic bias, fraud evasion, and regulatory challenges. Traditional IT security frameworks were never designed for this mix, which leaves the sector exposed to serious vulnerabilities.
Data leakage is the most immediate danger. With 35 documented cases in the sector, even routine employee prompts can expose customer data, proprietary trading strategies, or insurance models. Since nearly 90% of institutions already use AI in critical workflows, every query has the potential to become a breach.
Algorithmic bias is a slow but serious threat. Historical data that reflects past discrimination becomes embedded in lending or insurance algorithms, producing unfair outcomes at scale. Some AI models have shown promise: credit unions, for example, have seen a 40% increase in approvals for women and people of color. But that very improvement underscores how pervasive bias was before. Once bias seeps into a model, it can reproduce and amplify inequality across millions of transactions before anyone notices.
Fraud and model evasion are now part of an escalating arms race, with criminals turning AI against the systems designed to stop them. In early 2024, use of deepfake tools surged 223%. Attackers are testing thousands of attack variations, cloning voices for elaborate scams that have already caused $25 million in fraudulent transfers, and probing their way past the fraud detection tools that protect trillions of dollars in daily transactions.
Finally, regulatory and compliance pressures create more complexity. With 84% of financial organizations rushing to set up governance frameworks, institutions need to meet privacy, lending, and transparency requirements in several areas. Each AI deployment could conflict with one or more of these frameworks. This slows innovation and exposes firms to serious penalties.
Together, these risks form a convergence crisis: a biased algorithm that denies loans to a protected group triggers regulatory investigations, which slow down security updates. The resulting weaknesses invite model evasion attacks, while data leaks expose the unfair patterns to the public.
As Norbutas says, "The challenge isn't whether to use AI. That decision has already been made. The real question is whether institutions can build security and fairness into AI at the scale the industry needs."
Practical steps forward
According to Norbutas, these five immediate actions can transform AI risks into competitive advantages:
Implement centralized policy enforcement. Deploy unified governance that automatically applies security policies across all AI interactions.
Enable token-level data protection. Activate automated redaction that strips sensitive data, like account numbers, PII, and proprietary trading strategies, before it ever reaches AI models.
Establish model access controls with smart routing. Define which teams can access which models, and route each request to a model approved for its specific use case.
Create comprehensive audit trails. Build forensic-ready logs that capture every AI interaction — who queried what, which model responded, what policies were applied, and what data was accessed.
Deploy bias detection and approval workflows. Implement real-time monitoring that flags potential discrimination in lending decisions or underwriting outputs, with automated workflows that route high-risk decisions for human review before execution.
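As a rough illustration of what the second and fourth steps might look like in practice, the sketch below combines a token-level redaction pass with a forensic-ready audit record. It is a minimal example, not nexos.ai’s implementation: the regex patterns, function names, and log schema are all assumptions, and a production system would use far more robust detection (e.g. named-entity recognition) than simple pattern matching.

```python
import re
import json
from datetime import datetime, timezone

# Hypothetical patterns for sensitive tokens. Real deployments would
# detect PII with dedicated classifiers, not just regular expressions.
PATTERNS = {
    "ACCOUNT_NUMBER": re.compile(r"\b\d{10,12}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Strip sensitive tokens from a prompt before it reaches a model."""
    applied = []
    for label, pattern in PATTERNS.items():
        prompt, count = pattern.subn(f"[{label}]", prompt)
        if count:
            applied.append(label)
    return prompt, applied

def audit_entry(user: str, model: str, prompt: str) -> dict:
    """Build an audit record: who queried what, which model handled it,
    and which redaction policies were applied."""
    clean, applied = redact(prompt)
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model": model,
        "prompt": clean,  # only the redacted text is ever stored
        "policies_applied": applied,
    }

entry = audit_entry(
    "analyst@example.com",
    "approved-llm",
    "Why was account 123456789012 flagged? Contact jane@bank.com",
)
print(json.dumps(entry, indent=2))
```

The key design point the list above implies: redaction happens before the prompt leaves the institution’s boundary, and the audit log stores only the redacted text, so the log itself cannot become a secondary leak.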
ABOUT NEXOS.AI
nexos.ai is a cutting-edge AI infrastructure company providing a centralized platform for enterprises to seamlessly integrate and manage multiple AI models. Founded in 2024 by Tomas Okmanas and Eimantas Sabaliauskas, who also co-founded several bootstrapped global ventures, including the $3B cybersecurity unicorn Nord Security and Oxylabs, nexos.ai addresses the urgent enterprise need to efficiently deploy, manage, and optimize AI models within organizations. Originating in the ecosystem of Lithuania-based tech accelerator Tesonet, the company attracted its first investment of €8M in early 2025 from Index Ventures, Creandum, Dig Ventures, and a number of prominent angel investors.
