Explainability must be the baseline for AI in financial compliance | Opinion

Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.

Artificial intelligence is supercharging financial crime, and the financial industry’s defenses are falling behind. Criminals now use AI to create convincing deepfakes, craft tailored phishing attacks, and fabricate synthetic identities at scale. These tactics move faster than traditional compliance systems can track, exposing a fatal flaw in current approaches.

Despite the scale of this threat, many organizations are rushing to deploy their own AI systems without ensuring those tools are explainable, transparent, or even fully understood. Unless explainability becomes a baseline requirement for any AI system used in financial compliance, we risk replacing one form of opacity with another, which will not build trust with the public or regulators.

The arms race has already begun

AI is being used to make old crimes faster and cheaper, and to enable entirely new ones. Consider the recent surge in synthetic identity fraud. Cybercriminals use AI to stitch together real and fake data into realistic, fabricated identities. These profiles can open accounts, obtain credit, and bypass verification systems while remaining almost indistinguishable from genuine users.

Deepfake technology has added another weapon to the arsenal. Convincing impersonations of CEOs, regulators, or even family members can now be generated with minimal effort. These videos and audio clips are being used to initiate fraudulent transactions, mislead employees, and trigger internal data leaks.

Even phishing has evolved. AI-driven natural language tools can craft hyper-personalized, grammatically correct messages tailored to each target based on their public data, online behavior, and social context. These aren’t the misspelled spam messages of the past. They’re bespoke attacks designed to earn trust and extract value. In the crypto space, phishing is booming, and AI is accelerating the trend.

Compliance tools are stuck in the pre-AI era

The challenge isn’t just the speed or scale of these threats; it’s the mismatch between attacker innovation and defender inertia. Traditional rules-based compliance systems are reactive and brittle. They depend on pre-defined triggers and static pattern recognition. 

Machine learning and predictive analytics offer more adaptive solutions, but many of these tools are opaque. They generate outputs without clarity on how they reached a conclusion. That “black box” issue is more than a technical limitation. It’s a compliance headache.

Without explainability, there is no accountability. If a financial institution cannot explain how its AI system flagged a transaction (or failed to flag one), then it cannot defend its decision to regulators, clients, or courts. Worse, it may not be able to detect when the system itself is making biased or inconsistent decisions.
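To make the contrast concrete, here is a minimal sketch of what an explainable flagging decision can look like: instead of emitting a bare risk score, the system records which specific rule fired and why. The thresholds, rule names, and jurisdiction codes below are illustrative assumptions, not any real institution's policy or any particular vendor's product.

```python
# Sketch: an explainable transaction flag that records *why* it fired.
# All thresholds and high-risk codes are illustrative assumptions only.

def flag_transaction(tx: dict) -> dict:
    """Return a decision together with the specific rules that contributed to it."""
    reasons = []
    if tx.get("amount", 0) > 10_000:
        reasons.append("amount exceeds 10,000 reporting threshold")
    if tx.get("country") in {"XX", "YY"}:  # placeholder high-risk jurisdiction codes
        reasons.append(f"counterparty in high-risk jurisdiction {tx['country']}")
    if tx.get("account_age_days", 365) < 7:
        reasons.append("account opened less than 7 days ago")
    return {"flagged": bool(reasons), "reasons": reasons}

# Each flagged decision can be defended line by line to a regulator or client:
decision = flag_transaction({"amount": 25_000, "country": "XX", "account_age_days": 3})
```

A black-box model that returned only `{"score": 0.93}` for the same transaction would leave the institution with nothing to show an auditor. Real deployments pair statistical models with explanation layers far richer than this, but the design principle is the same: every output carries its own justification.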

Explainability is a security requirement

Some argue that requiring AI systems to be explainable will slow innovation. That’s shortsighted. Explainability isn’t a luxury; it’s a requirement for trust and legality. Without it, compliance teams are flying blind. They may detect anomalies, but they won’t know why. They may approve models, but they won’t be able to audit them.

The financial sector must stop treating explainability as a technical bonus. It should be a condition of deployment, especially for tools involved in KYC/AML, fraud detection, and transaction monitoring. This is not just a best practice. It is essential infrastructure.

In a fast-moving space like crypto, where trust is already fragile and scrutiny is high, this becomes even more urgent. The use of AI in security and compliance must be not only effective but also demonstrably fair, auditable, and understandable.

A coordinated response is non-negotiable

Financial crime is no longer a matter of isolated incidents. In 2024 alone, illicit transaction volume reached $51 billion, a figure that likely undercounts the role of AI-enhanced attacks. No firm, regulator, or technology provider can tackle this threat alone.

A coordinated response must include:

  • Mandating explainability in any AI system used in high-risk compliance functions.
  • Enabling shared threat intelligence to surface new attack patterns across firms.
  • Training compliance professionals to interrogate and evaluate AI outputs.
  • Requiring external audits of ML systems used in fraud detection and KYC.

Speed will always matter. But speed without transparency is a liability, not a feature.

AI is not neutral, and neither is its misuse

The conversation must shift. It’s not enough to ask whether AI “works” in compliance. We must ask whether it can be trusted. Can it be interrogated? Audited? Understood?

Failing to answer those questions puts the entire financial system at risk. Not just from criminals, but from the tools we rely on to stop them.

If we don’t build transparency into our defenses, then we are not defending the system. We are automating its failure.

Robert MacDonald

Robert MacDonald is the Chief Legal & Compliance Officer at Bybit, the world’s second-largest cryptocurrency exchange by trading volume. With nearly two decades of experience spanning the public sector, traditional finance, and the crypto industry, Robert is a seasoned expert in regulatory compliance, legal governance, and combating financial crime. Robert began his career as a barrister specializing in financial crime in London, later serving with the U.K. Ministry of Justice. His journey took him across continents, where he held leadership roles in some of the world’s most prominent financial institutions, including a global asset management giant, South Korea’s leading eCommerce platform, and Binance, a major crypto exchange. In his role at Bybit, Robert oversees a global team of legal counsels and compliance professionals tasked with navigating the evolving regulatory landscape. His team addresses licensing, adherence to jurisdictional requirements, anti-money laundering (AML), know-your-customer (KYC) protocols, and responsible client onboarding, ensuring Bybit operates with integrity and compliance across its global footprint.
