

Modern Know Your Customer (KYC) systems were sold as a trust upgrade for financial services. In practice, however, they have become one of the industry’s most fragile trust assumptions. The greatest risk no longer comes from anonymous hackers probing the perimeter, but from insiders and vendors who now sit squarely inside the system.
As KYC programs expand across banks, fintechs and crypto platforms, insider access is still treated as an acceptable cost of regulatory compliance. That tolerance is increasingly indefensible, especially given that insider-related activity accounted for roughly 40 percent of incidents in 2025.
At the same time, KYC workflows routinely require highly sensitive materials—identity documents, biometric data and account credentials—to move across cloud providers, verification vendors and manual review teams. Each additional person, tool or system granted access widens the blast radius. The uncomfortable reality is that many KYC stacks are architected in ways that make leaks not just possible, but likely.
Recent breach data bears this out. Roughly half of all incidents last year stemmed from two hallmarks of poorly designed KYC infrastructure: misconfiguration and third-party vulnerabilities. Misconfiguration alone accounted for an estimated 15 to 23 percent of all breaches in 2025, while third-party exposure contributed roughly 30 percent.
A direct example is last year’s breach of the “Tea” app, which was marketed as a women-focused platform. Passports and personal information were exposed after a database was left publicly accessible, illustrating how easily sensitive identity data can leak when basic architectural safeguards are missing.
Exposure is no longer theoretical
The scale of vulnerability in centralized identity systems is now well documented. Last year saw more than 12,000 confirmed breaches, resulting in hundreds of millions of records being exposed. Supply-chain breaches were particularly damaging, with nearly one million records lost per incident on average.
These numbers matter acutely for KYC because identity data is uniquely permanent. Passwords that have been compromised can be reset, but passports, biometric templates and government-issued identifiers cannot. When KYC databases are copied, improperly managed internally or accessed through compromised vendors, users may have to live with the consequences indefinitely.
For financial institutions, the damage extends far beyond breach-response costs. Trust erosion directly impacts onboarding, retention and regulatory scrutiny, turning security failures into long-term commercial liabilities.
Financial services have not been spared. Data from the Identity Theft Resource Center (ITRC) shows breach volumes in that sector rising from a low of 269 incidents in 2022 to more than 730 in each subsequent year. This increase closely tracks growing reliance on third-party compliance tools and outsourced review processes. Regulators may mandate KYC, but they do not require institutions to centralize sensitive data in ways that invite misuse.
Weak identity checks are a systemic risk
Recent law-enforcement actions have underscored how fragile identity verification can become when treated as a box-ticking exercise. Lithuanian authorities’ dismantling of SIM-farm networks revealed how weak KYC controls and SMS-based verification were exploited to weaponize legitimate telecom infrastructure.
In that case, approximately 75,000 active SIMs were registered under false or recycled identities, enabling large-scale fraud and account takeovers. The lesson is broader: once identity verification becomes procedural rather than substantive, attackers adapt faster than controls can evolve.
A.I.-assisted compliance adds another layer of complexity. Many KYC providers—including platforms such as Onfido and Sumsub—rely on centralized, cloud-hosted A.I. models to review documents, flag anomalies and score risk. In default configurations, sensitive inputs are transmitted beyond the institution’s direct control. Logs, prompts and even training data may be retained under vendor policies rather than regulatory intent.
Security teams routinely warn employees not to upload confidential data into third-party A.I. tools. Yet many KYC systems institutionalize that exact behavior by design. Once identity data crosses organizational boundaries, insider misuse and vendor compromise become governance problems rather than purely technical ones, an abstraction that offers little comfort to regulated entities or affected users.
Reframing the problem with confidential A.I.
When systems assume trusted insiders and trusted vendors, breaches become a question of timing rather than probability. Confidential A.I. challenges that premise by starting from a different assumption: sensitive data should remain protected even from those who operate the system. Confidential computing enables this by executing code inside hardware-isolated environments known as trusted execution environments (TEEs). Data remains encrypted not only at rest and in transit, but also during processing. Even administrators with root access cannot view its contents.
Research has demonstrated that technologies such as Intel SGX, AMD SEV-SNP and remote attestation can provide verifiable isolation at the processor level. Applied to KYC, confidential A.I. allows identity checks, biometric matching and risk analysis to occur without exposing raw documents or personal data to reviewers, vendors or cloud operators. Verification can be proven cryptographically without copying sensitive files into shared databases. Insider access shifts from a matter of policy to a matter of physics.
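To make the attestation-then-verify pattern concrete, the sketch below simulates the flow in plain Python. It is illustrative only: the measurement value, function names and the toy "verification" logic are assumptions for this example, and a real deployment would rely on the TEE vendor's attestation service (for example, Intel SGX or AMD SEV-SNP quote verification) rather than a local hash comparison. The point it demonstrates is the output shape: the caller receives a verification result and a cryptographic commitment to the document, never the document itself.

```python
import hashlib
import hmac

# Hypothetical expected measurement of the enclave's code. In a real TEE
# deployment this comes from the hardware vendor's remote attestation
# service, not from a locally computed hash.
EXPECTED_MEASUREMENT = hashlib.sha256(b"kyc-verifier-v1.2").hexdigest()

def attest(reported_measurement: str) -> bool:
    """Accept work only if the reported code measurement matches the
    expected one (constant-time comparison to avoid timing leaks)."""
    return hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT)

def verify_identity(document_bytes: bytes, reported_measurement: str) -> dict:
    """Toy stand-in for an in-enclave KYC check. Returns a pass/fail
    result plus a hash commitment to the document -- the raw bytes are
    never part of the output."""
    if not attest(reported_measurement):
        raise PermissionError("enclave attestation failed; refusing to process")
    # Inside a real TEE, the plaintext document is decrypted and examined
    # only within hardware-isolated memory. Here we emit just a boolean
    # and a commitment that can later prove which document was checked.
    commitment = hashlib.sha256(document_bytes).hexdigest()
    return {"verified": len(document_bytes) > 0, "commitment": commitment}
```

An auditor holding the original document can recompute the commitment and confirm it matches the recorded one, which is what allows verification to be proven without copying identity files into shared databases.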
Reducing insider visibility is not an abstract security upgrade. It changes who bears risk and reassures users that submitting identity documents does not require blind trust in unseen employees or subcontractors. Institutions shrink their liability footprint by minimizing plaintext access to regulated data. Regulators gain stronger assurances that compliance systems align with data-minimization principles rather than contradict them.
Critics argue that confidential A.I. adds operational complexity or depends on hardware vendors. Those concerns merit scrutiny, but complexity already exists. It is simply hidden inside opaque vendor stacks and manual review queues.
Hardware-based isolation is auditable in ways human process controls are not. It also aligns with regulatory momentum toward demonstrable safeguards rather than policy-only assurances.
A necessary shift in KYC thinking
KYC will remain mandatory across financial ecosystems, including the crypto markets. What is not fixed is the architecture used to meet that obligation. Continuing to centralize identity data and grant broad internal access normalizes insider risk, an increasingly untenable position given current breach patterns.
Confidential A.I. does not eliminate all threats, nor does it remove the need for governance. It does, however, challenge a long-standing assumption that sensitive data must be visible to be verified.
For an industry struggling to safeguard irreversible personal information while maintaining public trust, that challenge is overdue. The next phase of KYC will not be judged by how much data institutions collect, but by how little they expose. Those that ignore insider risk will continue paying for it. Those that redesign KYC around confidential computing will set a higher standard for compliance, security, and user trust, one that regulators and customers are likely to demand sooner than many expect.
