Clayton Mitchell
Managing Principal
Alex Fuglaar
Manager, Consulting
Corey Minard
Senior Manager

Like any transformative technology, artificial intelligence (AI) can introduce significant new risks as it reshapes the banking industry. For bank directors and executive teams, identifying potentially valuable uses of AI and mitigating emerging risks with strong governance will be essential to their ultimate success with this powerful technology.

Benefits and Risks
In addition to having broad implications for banks’ overall risk management frameworks — including such categories as data and privacy risk, model risk, third-party security risk, legal and regulatory risk, and use and process risk — individual AI applications present their own specific risks. Common AI use cases include:

  • Document processing. Generative AI can read and write documents as well as ingest and manage data for loans and other customer interactions. Automating these tasks can reduce employee workloads and improve efficiency, but the risks include overreliance on AI decision-making without adequate human oversight, as well as inadvertent use of others’ intellectual property without permission.
  • Customer contact. Online AI chatbots can mimic human responses, assisting with loan applications, payment scheduling and other routine tasks while also leading customers to the most appropriate products. Risks include potential manipulation by unscrupulous actors who input fraudulent information as well as systems that pull information from incorrect sources, leading customers or employees to act based on inaccurate input.
  • Product pricing and decision-making. Predictive or statistical AI can apply dynamic, risk-based product pricing and improve underwriting decision-making, considering both individual and macroeconomic factors. As in other applications, overreliance on automated decision-making without proper testing, monitoring and validation adds significant risk.
  • Code conversion. Many banks’ critical technology systems are still coded in legacy languages that do not integrate well with many third-party applications now used in financial services. AI can convert legacy code into modern languages that integrate more readily with those applications, an important benefit as the number of IT personnel conversant with legacy languages continues to decline. A key risk is that the AI system can introduce errors into the converted code.
  • Fraud and anomaly detection. AI’s ability to analyze massive amounts of data and identify atypical patterns can be valuable in various risk management, compliance and legal applications such as fraud detection and Bank Secrecy Act and anti-money laundering compliance. Associated risks include overreliance on closed-box systems, in which responsible officers are unable to explain to regulators how their systems operate or how they guard against inappropriate, illogical or unethical use of customer data.
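
Fraud and anomaly detection is the most readily illustrated of these use cases. The sketch below is a minimal, hypothetical example of flagging outlier transactions with an isolation forest; the feature names, synthetic data and assumed 1% anomaly rate are illustrative only and do not represent a production fraud or BSA/AML monitoring model.

```python
# Minimal sketch of transaction anomaly flagging with an isolation forest.
# Feature names, synthetic data and the 1% contamination rate are assumptions
# for illustration, not a production fraud or BSA/AML monitoring model.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
transactions = pd.DataFrame({
    "amount": rng.lognormal(mean=4.0, sigma=1.0, size=1_000),
    "hour_of_day": rng.integers(0, 24, size=1_000),
    "days_since_last_txn": rng.exponential(scale=3.0, size=1_000),
})

# Train on historical activity; contamination is the assumed share of anomalies.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(transactions)

# predict() returns -1 for outliers and 1 for normal observations.
transactions["flag"] = model.predict(transactions)
for_review = transactions[transactions["flag"] == -1]
print(f"{len(for_review)} transactions routed to analyst review")
```

The explicit review step at the end reflects the oversight theme that runs through these use cases: the model surfaces candidates, but trained analysts make the final call.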

Governance To Mitigate Risks
To mitigate the risks associated with AI and confirm that AI-driven models are transparent, explainable and regularly tested against regulatory requirements, bank boards and executive teams should implement robust AI governance frameworks that include:

  • Comprehensive documentation and model governance. Documentation should cover model purpose, data sources, training methodologies, testing processes, performance metrics and decision-making logic. It should establish clear model versioning, audit trails and change management protocols as well as comply with all relevant regulatory expectations and industry-specific governance frameworks.
  • Bias and explainability testing. Regular bias detection assessments are needed to confirm that AI models do not disproportionately affect specific populations or business decisions. Techniques such as Shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME), used for improving AI model transparency and explainability, can help organizations interpret and justify AI-driven decisions (a brief SHAP sketch follows this list).
  • Independent model testing and audits. Internal audit teams and third-party evaluators should conduct independent AI risk assessments to verify model integrity, detect drift and assess compliance with governance policies. Model testing should include stress testing, adversarial attack resilience testing and back testing against historical data.
  • Data governance and security controls. AI models should include robust data governance frameworks that define data lineage, access controls, encryption protocols and audit logs. Data input validation and quality control processes are needed to prevent model drift and confirm the integrity of AI-driven decisions.
  • Human oversight and decision accountability. Banks should implement review and override mechanisms that allow human specialists to intervene, test or adjust AI-generated outcomes when necessary. Ethical AI review committees or AI governance boards should continually assess AI-driven decisions for unintended consequences and ethical considerations.
  • Continuous monitoring and performance management. AI model performance-tracking frameworks should monitor false positives, false negatives, model accuracy, computational efficiency and bias fluctuations over time. AI models should be regularly tested against evolving business conditions, regulatory changes and new data sources. Real-time feedback loops allow AI models to adapt dynamically while maintaining accuracy and effectiveness.
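
To make the continuous monitoring item above more concrete, the following sketch shows one common drift check, a population stability index (PSI) that compares a score distribution at model build time with recent production data. The bucket count, synthetic score distributions and 0.2 alert threshold are assumptions for illustration; actual thresholds and escalation paths belong in the bank’s model risk policy.

```python
# Minimal sketch of drift monitoring with a population stability index (PSI).
# The synthetic score distributions and the 0.2 alert threshold are illustrative
# assumptions; real thresholds belong in the bank's model risk policy.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a feature's distribution at training time vs. in production."""
    cuts = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))

    def bucket_shares(values):
        # Assign each value to a bucket; out-of-range values fall in the end buckets.
        idx = np.clip(np.searchsorted(cuts, values, side="right") - 1, 0, bins - 1)
        return np.bincount(idx, minlength=bins) / len(values)

    expected_pct = np.clip(bucket_shares(expected), 1e-6, None)
    actual_pct = np.clip(bucket_shares(actual), 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(1)
scores_at_build = rng.normal(650, 50, size=10_000)      # distribution seen in training
scores_this_quarter = rng.normal(670, 55, size=2_000)   # recent production scores

psi = population_stability_index(scores_at_build, scores_this_quarter)
if psi > 0.2:  # a common rule of thumb for a significant shift
    print(f"PSI {psi:.3f}: escalate for model revalidation")
else:
    print(f"PSI {psi:.3f}: distribution stable")
```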
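The SHAP technique named in the bias and explainability testing item can be illustrated in a similarly compact way. The sketch below assumes the open-source shap package and a simple gradient boosting classifier trained on hypothetical credit attributes; the feature names and synthetic labels are placeholders rather than a real underwriting model.

```python
# Minimal sketch of per-decision explainability with SHAP (Shapley additive
# explanations). Feature names, synthetic labels and the model itself are
# illustrative assumptions, not an actual underwriting model.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
X = pd.DataFrame({
    "debt_to_income": rng.uniform(0.0, 0.6, size=500),
    "credit_utilization": rng.uniform(0.0, 1.0, size=500),
    "months_on_book": rng.integers(1, 120, size=500),
})
# Synthetic approve (1) / decline (0) labels for illustration only.
y = (X["debt_to_income"] + X["credit_utilization"] < 0.9).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer attributes each individual decision to the input features,
# giving reviewers a per-applicant rationale they can document and challenge.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])
for feature, contribution in zip(X.columns, shap_values[0]):
    print(f"{feature}: {contribution:+.3f}")
```

Positive contributions push the illustrative model toward approval and negative ones toward decline, the kind of decision-level rationale that reviewers and validators can document and challenge.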
WRITTEN BY

Clayton Mitchell

Managing Principal

Clayton Mitchell is a managing principal in Crowe LLP’s risk consulting practice, specializing in regulatory compliance consulting services for capital markets as well as financial services in the United Kingdom. He has nearly 15 years of experience with on-site regulatory compliance engagements, reviews and audits for domestic and international financial institutions and for global payment and financial technology companies.

Mr. Mitchell has experience conducting compliance audits and establishing anti-money laundering test plans as a full-time member of an internal audit team at a financial institution.

WRITTEN BY

Alex Fuglaar

Manager, Consulting

Alex Fuglaar is a manager on the Cybersecurity & Process Optimization and Robotics team, with over nine years of experience delivering projects focused on privacy and regulatory compliance, cybersecurity assessments, and applied artificial intelligence.

WRITTEN BY

Corey Minard

Senior Manager

Corey Minard is a seasoned financial services senior manager at Crowe, where he serves as one of the firm’s Financial Crime Model Validation SMEs and as Crowe’s AI Testing and Validation Offering Leader. His primary focus is guiding clients in the testing and validation of AI models to assess the reliability, transparency, bias, and accuracy of AI systems across a range of industries. As the AI testing and validation leader, he supports clients in managing the new and emerging risk and compliance obligations related to the development, deployment, and use of AI technology.