The Governance Gap in AI-Driven Finance

Dilek Çilingir, EY Global Forensic & Integrity Services Leader, warns that traditional risk management is hitting a breaking point. To manage the "black box" of AI in finance, treasury leaders must shift from reactive retrofitting to proactive, cross-functional governance.

Published May 07, 2026

The rapid integration of artificial intelligence into the engine room of corporate finance has outpaced the development of the frameworks intended to govern it. Traditional risk management structures, designed for a more static technology environment, are increasingly hitting a breaking point as organizations struggle to keep pace with AI’s fluid evolution. According to Dilek Çilingir, EY Global Forensic and Integrity Services Leader, many organizations discover too late that their controls were not built for AI-driven risk, often because compliance was treated as an afterthought rather than a fundamental component of the deployment process. To avoid costly and disruptive retrofitting, treasury leaders must pivot toward a “compliance by design” approach, embedding robust, cross-functional governance from day one.

The Anatomy of AI Risk in Treasury

Within the treasury function, the risks associated with AI are particularly acute in high-stakes areas like payments and fraud detection. One of the primary technical hurdles is model drift, where AI systems trained on historical patterns fail to deliver accurate results when those underlying market or organizational patterns shift. This is compounded by the threat of data poisoning, where manipulated invoices or transaction histories can compromise the integrity of the entire model.
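To make model drift concrete: one widely used, model-agnostic drift score is the population stability index (PSI), which compares the distribution of a monitored input (say, payment amounts) today against the distribution the model was trained on. The sketch below is a minimal, illustrative implementation; the thresholds in the comments are a common industry rule of thumb, not anything prescribed in this article.

```python
import math
import random

def psi(baseline, current, bins=10):
    """Population Stability Index: compares two distributions of one feature.
    Common (illustrative) rule of thumb: < 0.1 stable, 0.1-0.25 moderate
    drift, > 0.25 significant drift warranting model review."""
    lo, hi = min(baseline), max(baseline)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = sum(v > e for e in edges)  # bin index for this value
            counts[idx] += 1
        # small floor avoids log(0) when a bin is empty
        return [max(c / len(values), 1e-4) for c in counts]

    p = proportions(baseline)
    q = proportions(current)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

random.seed(0)
baseline = [random.gauss(100, 15) for _ in range(5000)]  # training-era payment amounts
shifted  = [random.gauss(130, 25) for _ in range(5000)]  # amounts after a market shift
print(f"PSI, no drift:  {psi(baseline, baseline[:2500]):.3f}")
print(f"PSI, drifted:   {psi(baseline, shifted):.3f}")
```

Run periodically over live inputs, a score like this turns the abstract risk of "patterns shifting" into a monitored metric that can trigger retraining or human review before accuracy silently degrades.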

Furthermore, as these systems grow in complexity, they often suffer from a lack of explainability, creating a “black box” effect where results are delivered without a clear understanding of the underlying logic. Without continuous focus on model governance, data integrity, and human oversight, treasury departments risk becoming over-reliant on automated outputs that they can no longer justify to regulators or stakeholders.
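One simple way oversight teams probe a black-box model is permutation importance: shuffle one input column and measure how much the model's accuracy falls, revealing which inputs actually drive its outputs without needing access to its internals. The toy fraud-screening model below is hypothetical, purely to illustrate the technique.

```python
import random

def permutation_importance(model, X, y, feature_idx, trials=20, seed=0):
    """Average drop in accuracy when one feature's values are shuffled:
    a model-agnostic signal of which inputs a 'black box' relies on."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(model(r) == t for r, t in zip(rows, y)) / len(y)

    base = accuracy(X)
    drops = []
    for _ in range(trials):
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)
        shuffled = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                    for row, v in zip(X, col)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / trials

# Hypothetical screen: flag payments over 10k to a new counterparty.
model = lambda row: int(row[0] > 10_000 and row[1] == 1)  # row = [amount, is_new_payee]
X = [[random.Random(i).uniform(0, 20_000), i % 2] for i in range(200)]
y = [model(r) for r in X]
print("amount importance:   ", permutation_importance(model, X, y, 0))
print("new-payee importance:", permutation_importance(model, X, y, 1))
```

Even this crude check gives reviewers something defensible to say about why a flagged transaction was flagged, which is exactly the justification regulators and stakeholders will ask for.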

Balancing Innovation with Centralized Control

For multinational organizations, the challenge lies in stress-testing these systems across diverse jurisdictions without paralyzing the very innovation they seek. A successful strategy relies on a centralized governance model paired with local empowerment. While innovation often sparks within local markets, where employees interact directly with field data, these use cases must be fed back into a central system for structured testing and formalization before being scaled globally. This ensures that as risks evolve, such as the rise of synthesized voices or fraudulent AI-generated invoices, the organization's collective knowledge and compliance strategies evolve with them.

Regulatory Readiness and the “Human Factor”

In the eyes of regulators, the “AI did it” defense carries no weight. When investigating incidents, authorities look for a rapid, clear response backed by documented evidence of model design, data protection protocols, and ongoing monitoring. Accountability in this era is inherently blurred; because AI touches every facet of the business, ownership of risk cannot be siloed within IT or Legal but must be shared across finance and compliance functions, ultimately reporting to the Board of Directors.
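"Documented evidence of ongoing monitoring" in practice often means a per-decision audit trail: every automated output logged with the model version, a fingerprint of its inputs, and the name of any human reviewer. The schema below is a hypothetical sketch, not a regulatory template; actual requirements vary by jurisdiction.

```python
import json
import hashlib
from datetime import datetime, timezone

def audit_record(model_id, model_version, inputs, score, decision, reviewer=None):
    """Build one audit entry for an automated decision.
    Hypothetical schema for illustration only."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Hash rather than store raw inputs, limiting data exposure
        # while still allowing later verification against source records.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "score": score,
        "decision": decision,
        "human_reviewer": reviewer,  # None indicates a fully automated decision
    }

rec = audit_record(
    model_id="fraud-screen", model_version="2.3.1",
    inputs={"invoice_id": "INV-001", "amount": 98250.00},
    score=0.91, decision="hold_for_review", reviewer="treasury_ops")
print(json.dumps(rec, indent=2))
```

A trail like this is what turns the first 24 to 48 hours of an incident from a scramble into a structured response: investigators can reconstruct exactly which model, which version, and which human signed off on each decision.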

The difference between a contained operational failure and a full-scale reputational crisis often comes down to the first 24 to 48 hours of a discovery. Proactive crisis management, including scenario planning and “tabletop exercises,” is now a baseline requirement for treasury leaders. Ultimately, high awareness and human oversight remain the non-negotiable final lines of defense, ensuring that while AI supports the treasury function, it does not operate beyond the bounds of strategic control and ethical integrity.
