Global Risk Institute Unveils AGILE Framework to Manage AI Risks in Financial Sector

The Global Risk Institute, working with Canadian public sector partners, has released a major report outlining how financial institutions should manage the growing risks and opportunities created by artificial intelligence. The report introduces a new framework called AGILE aimed at helping the financial industry adapt to the rapid expansion of AI technologies.

The document represents the second phase of the Financial Industry Forum on Artificial Intelligence (FIFAI), a three-year initiative that brought together more than 170 participants including financial institutions, academics, regulators, policymakers and consumer advocates. Discussions focused on key issues such as cybersecurity, financial crime, consumer protection, financial stability and the broader impact of AI on the financial system.

Sonia Baxendale, President and CEO of the Global Risk Institute, said the project was designed to move beyond theoretical debates and focus on how the industry can practically respond to the transformation driven by artificial intelligence.

According to the report, AI has evolved from a governance issue inside financial institutions into a wider systemic concern. Emerging risks include sophisticated fraud, increased reliance on third-party technology providers, potential market disruption and the risk of consumer harm.

The AGILE framework organizes recommendations under five core pillars: Awareness, Guardrails, Innovation, Learning and Ecosystem Resiliency. The structure is intended to help financial institutions, regulators and governments develop coordinated strategies for managing AI-related challenges.

Under the “Awareness” pillar, the report calls for stronger oversight from corporate boards and senior executives, along with improved monitoring of emerging risks and more frequent stress testing to understand how AI failures could affect financial markets.

“Guardrails” focus on strengthening data governance, ensuring human oversight for high-impact automated decisions and conducting more rigorous due diligence on third-party technology providers that supply AI tools or infrastructure.

The “Innovation” pillar emphasizes that institutions should not delay adopting AI simply out of caution. Instead, the report encourages investment in AI talent and modern data infrastructure while expanding the use of AI in areas such as cybersecurity, fraud detection, suspicious transaction monitoring and consumer protection.

The “Learning” pillar is another key element of the framework. The report highlights the need for stronger AI literacy across the financial ecosystem, including regulators, institutions and consumers. This includes understanding how AI systems can generate errors or “hallucinations,” as well as recognizing emerging threats such as AI-powered scams.

The forum also identified a growing shortage of specialized talent that combines advanced technical knowledge of artificial intelligence with expertise in financial regulation and operations.

While many Canadian financial institutions have already adopted the EDGE principles developed during the first phase of the forum, the accelerating adoption of AI tools has broadened the list of risks regulators and firms must consider.

One of the most pressing concerns is the growing use of AI by cyber criminals and fraudsters. The report warns that artificial intelligence is making social engineering attacks more convincing, allowing criminals to generate deepfakes, synthetic identities and cloned voices with limited personal information.

These techniques are increasingly targeting vulnerable points such as customer call centres, IT support systems and remote hiring processes where identity verification can be difficult.

Another major concern is concentration risk within the AI supply chain. Financial institutions are becoming increasingly dependent on a small number of technology providers for cloud computing infrastructure, AI models, software platforms and data services.

Limited visibility into extended vendor relationships, including fourth-party and fifth-party providers, could create systemic vulnerabilities if a key technology provider experiences a failure or security breach.

Despite these risks, the report stresses that artificial intelligence also represents a major economic opportunity for the financial sector. Financial services are already among the largest adopters of AI technologies globally, and wider use could improve productivity, strengthen fraud detection systems, enhance regulatory compliance and deliver more personalized financial advice to customers.

However, the report also warns that widespread AI adoption could introduce new financial stability risks. These include correlated market behaviour when multiple trading systems rely on the same datasets, faster operational shocks if automated systems fail simultaneously, and potential stress on credit markets if AI-driven automation disrupts employment patterns in the broader economy.

As artificial intelligence continues to reshape the global financial landscape, the AGILE framework is intended to guide institutions and regulators toward a more resilient, secure and responsible adoption of the technology.