Autonomous trading demands verifiable controls | Opinion

Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.

The boundary between ‘autonomy’ and ‘automation’ is dissolving in modern markets. Agents that can place orders, negotiate fees, read filings, and rebalance a company portfolio are already outside their sandboxes and face-to-face with client funds. While this might sound like a leap in efficiency, it also ushers in a whole new class of risk.

Summary
  • Autonomous AI agents are already operating beyond test environments, making financial decisions in real markets — a leap in efficiency that also opens the door to systemic risks and liability gaps.
  • Current AI governance and controls are outdated, with regulators like the FSB, IOSCO, and central banks warning that opaque behavior, clustering, and shared dependencies could trigger market instability.
  • Safety must be engineered, not declared — through provable identity, verified data inputs, immutable audit trails, and coded ethical constraints that make accountability computable and compliance verifiable.

The industry is still acting as if intent and liability can be segregated with a disclaimer, but this is simply incorrect. Once software has the means to shift funds or publish prices, the burden of proof inverts: input proofs, action constraints, and audit trails that can’t be altered become not just vital but non-negotiable.

Without such requirements in place, a feedback loop set up by an autonomous agent rapidly becomes a fast-moving accident that regulators wince at. Central banks and market standard-setters are pushing the same warning everywhere: current AI controls weren’t built for today’s agents.

This advance in AI amplifies risk across multiple vectors of vulnerability, but the fix is simple if one ethical standard is established: autonomous trading is acceptable only when it is provably safe by construction.

Feedback loops to be feared

Market structure already rewards speed and homogeneity, and AI agents turbocharge both. If many firms deploy similarly trained agents on the same signals, procyclical de-risking and correlated trades become the baseline for market movement.

The Financial Stability Board has already flagged clustering, opaque behavior, and third-party model dependencies as risks that can destabilize the market. The FSB also warned that supervisors of these markets must actively monitor rather than passively observe, ensuring that gaps don’t appear and catastrophes don’t ensue.

Even the Bank of England’s April report reiterated the risks that wider AI adoption carries without appropriate safeguards, especially when markets are under stress. The signs all point to better engineering built into the models, data, and execution routing before crowded positions across the market unwind together.

Live trading floors crowded with active AI agents can’t be governed by generic ethics documents; rules must be compiled into runtime controls. The who, what, which, and when must be built into the code so that gaps don’t appear and ethics aren’t thrown to the wind.
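To make “compiled into runtime controls” concrete, here is a minimal sketch of a pre-trade gate in Python. Every name in it — the AgentPolicy object, the instrument allowlist, the notional cap, the trading window — is a hypothetical illustration under stated assumptions, not a reference implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, time, timezone

@dataclass
class AgentPolicy:
    """Hypothetical per-agent runtime policy: who may trade what, how much, and when."""
    agent_id: str
    allowed_instruments: frozenset = field(default_factory=frozenset)
    max_notional_usd: float = 0.0
    trading_window: tuple = (time(13, 30), time(20, 0))  # e.g. a UTC session window

def pre_trade_check(policy: AgentPolicy, instrument: str, notional_usd: float, now=None):
    """Reject any order that falls outside the agent's coded limits."""
    now = now or datetime.now(timezone.utc)
    if instrument not in policy.allowed_instruments:
        raise PermissionError(f"{policy.agent_id} may not trade {instrument}")
    if notional_usd > policy.max_notional_usd:
        raise PermissionError(f"notional {notional_usd} exceeds cap {policy.max_notional_usd}")
    start, end = policy.trading_window
    if not (start <= now.time() <= end):
        raise PermissionError("order outside the permitted trading window")
```

In this setup, an order for an instrument the agent was never granted, a size above its cap, or a trade outside the permitted window never reaches the market, regardless of what the model “wants” to do.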

The International Organization of Securities Commissions’ (IOSCO) consultation also expressed concerns in March, sketching the governance gaps and calling for controls that can be audited from end to end. Without understanding vendor concentration, untested behaviors under stress, and explainability limits, the risks will compound.

Data provenance matters as much as policy here. Agents should ingest only signed market data and news, bind each decision to a versioned policy, and retain a sealed record of that decision on-chain. In this new and evolving landscape, accountability is everything, so make it computable and attributable to each agent.
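As a rough illustration of that provenance chain: verify the publisher’s signature on each message before it enters the decision space, and keep a digest of what was ingested so the eventual decision can be bound to it. The shared-key HMAC scheme, vendor registry, and field names below are stand-in assumptions; a production feed would more likely carry an asymmetric signature from the vendor.

```python
import hashlib, hmac, json

FEED_KEYS = {"acme-prices": b"shared-secret-from-vendor"}  # hypothetical vendor key registry

def ingest_signed_tick(message: bytes, signature: str, source: str):
    """Accept a market-data message only if its signature verifies; return it with its digest."""
    key = FEED_KEYS.get(source)
    if key is None:
        raise ValueError(f"unknown data source: {source}")
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        raise ValueError("signature check failed; tick rejected")
    digest = hashlib.sha256(message).hexdigest()  # later bound into the decision record
    return json.loads(message), digest
```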

Ethics in practice

What does ‘provably safe by construction’ look like in practice? It begins with scoped identity, where every agent operates behind a named, attestable account with clear, role-based limits defining what it can access, alter, or execute. Permissions aren’t assumed; they’re explicitly granted and monitored. Any modification to those boundaries requires multi-party approval, leaving a cryptographic trail that can be independently verified. In this model, accountability isn’t a policy requirement; it’s an architectural property embedded from day one.
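A sketch of how that scoping might look in code, assuming a hypothetical approvals workflow: each agent’s limits live in a named identity object, and widening them requires a quorum of distinct approvers before the change takes effect.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """Hypothetical attestable identity: a named account with role-scoped limits."""
    agent_id: str
    role: str
    limits: dict  # e.g. {"max_notional_usd": 1_000_000}

def approve_limit_change(identity: AgentIdentity, new_limits: dict,
                         approvals: set, quorum: int = 2) -> AgentIdentity:
    """Apply a limit change only once enough distinct approvers have signed off."""
    if len(approvals) < quorum:
        raise PermissionError(f"need {quorum} approvals, have {len(approvals)}")
    # In practice each approval would carry a verifiable signature; here we only count them.
    return AgentIdentity(identity.agent_id, identity.role, new_limits)
```

The design point is that the old identity object is never mutated in place: a new, approved one replaces it, leaving a trail of who widened what and when.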

The next layer is input admissibility, ensuring that only signed data, whitelisted tools, and authenticated research enter the system’s decision space. Every dataset, prompt, or dependency must be traceable to a known, validated source. This drastically reduces exposure to misinformation, model poisoning, and prompt injection. When input integrity is enforced at the protocol level, the entire system inherits that trust automatically, making safety not just an aspiration but a predictable outcome.
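One plausible way to enforce that is a single admissibility gate that every tool call and data pull must pass before anything reaches the model. The registry contents below are assumptions for illustration only.

```python
# Hypothetical registry of admissible inputs: tools and data sources the agent may use.
ADMISSIBLE_TOOLS = {"order_router", "risk_calculator"}
ADMISSIBLE_SOURCES = {"acme-prices", "sec-edgar-mirror"}

def admit_input(kind: str, name: str) -> None:
    """Raise unless the tool or data source is on the pre-approved allowlist."""
    registry = ADMISSIBLE_TOOLS if kind == "tool" else ADMISSIBLE_SOURCES
    if name not in registry:
        raise PermissionError(f"{kind} '{name}' is not admissible and was blocked")
```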

Then comes decision sealing: the moment every action or output is finalized. Each must carry a timestamp, digital signature, and version record, binding it to its underlying inputs, policies, model configurations, and safeguards. The result is a complete, immutable evidence chain that’s auditable, replayable, and accountable, turning post-mortems into structured analysis instead of speculation.
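One plausible shape for that evidence chain, sketched with the standard library: each record carries a timestamp, the model and policy versions, the digests of its inputs, and the hash of the previous record, so any later tampering breaks the chain. Field names are illustrative, and a real deployment would also attach a digital signature and anchor the digests in an append-only store or on-chain.

```python
import hashlib, json, time

def seal_record(prev_digest: str, action: dict, input_digests: list,
                policy_version: str, model_version: str) -> dict:
    """Produce one hash-chained record of an agent action for the evidence log."""
    record = {
        "timestamp": time.time(),
        "action": action,
        "inputs": input_digests,
        "policy_version": policy_version,
        "model_version": model_version,
        "prev": prev_digest,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    return record

# Replaying the chain and re-hashing each record detects any retroactive edit.
```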

This is how ethics becomes engineering, where the proof of compliance lives in the system itself. Every input and output must come with a verifiable receipt, showing what the agent relied on and how it reached its conclusion. Firms that embed these controls early will pass procurement, risk, and compliance reviews faster, while building consumer trust long before that trust is ever stress-tested. Those that don’t will confront accountability mid-crisis, under pressure, and without the safeguards they should have designed in.

The rule is simple: build agents that prove identity, verify every input, log every decision immutably, and stop on command, without fail. Anything less no longer meets the threshold for responsible participation in today’s digital society, or the autonomous economy of tomorrow, where proof will replace trust as the foundation of legitimacy.
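Putting those four rules together, the last control is also the simplest: a stop signal the agent cannot ignore. A minimal sketch, assuming the hypothetical checks from the earlier snippets exist; the halt flag could just as well be a supervised value in shared storage rather than a module-level variable.

```python
import threading

HALTED = threading.Event()  # flipped by a human supervisor or an independent risk system

def execute_action(agent_id: str, action: dict) -> None:
    """Refuse to act once the halt signal is set; otherwise run the full control stack."""
    if HALTED.is_set():
        raise RuntimeError(f"{agent_id} halted by supervisor; action refused")
    # 1. prove identity and check scoped limits (see pre_trade_check / AgentIdentity above)
    # 2. verify and admit every input (see ingest_signed_tick / admit_input)
    # 3. seal the decision into the evidence chain (see seal_record)
    # 4. only then route the order to execution
    ...
```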

Selwyn Zhou (Joe)

Selwyn Zhou (Joe) is the co-founder of DeAgentAI, bringing a powerful combination of experience as an AI PhD, former SAP Data Scientist, and top venture investor. Before founding his web3 company, he was an investor at leading VCs and an early-stage investor in several AI unicorns, leading investments into companies such as Shein ($60B valuation), Pingpong (a $4B AI payfi company), the publicly-listed Black Sesame Technology (HKG: 2533), and Enflame (a $4B AI chip company).
