
Third-Party Regulation in the Age of AI & Cyber: What Leaders Really Need to Know

Blog
July 2, 2025

Guest author: Nita Kohli, CEO & Founder of Kohli Advisors

In today’s globally interconnected world, businesses no longer operate in silos. They increasingly depend on a growing ecosystem of partners, platforms, and technologies to innovate, scale, and deliver value. With these interdependencies, however, come additional forms of risk that must be managed. Third-party failures, AI missteps, and cybersecurity gaps aren’t just hypothetical; they’re boardroom issues and real threats that can disrupt operations, damage reputations, and invite regulatory consequences.

Over the last decade, regulators around the globe have stepped up with new frameworks and guidance. Rules are being introduced to tighten oversight of third-party partnerships, improve operational resilience, and strengthen how AI systems are governed. For leaders in finance and tech, this isn’t just about keeping up with compliance; it’s about building smarter, more resilient organizations, and that makes it a strategic imperative and a theme I focus on.

This article focuses on four major regulations shaping this space: the EU AI Act, the U.S. Interagency Guidance, Canada’s Guideline B-10, and the EU Digital Operational Resilience Act (DORA). While each has its own angle and approach, together they reveal a clear direction for the future of risk management, especially as we begin to see convergence in the intent and purpose behind the regulations.

EU AI Act: Guardrails for Innovation

Proposed in April 2021 and adopted in 2024, the EU AI Act is the first major attempt to create a comprehensive legal framework for AI, with phased implementation from 2024. It carries global weight, especially for financial institutions that use AI to make decisions on credit, fraud detection, KYC, or risk scoring.

At its core, the Act uses a risk-based approach. Systems deemed “high-risk”, like those that impact consumers’ rights or financial outcomes, are subject to strict rules and requirements. These include thorough risk assessments, documentation, human oversight, and requirements that models be transparent and trained on reliable data. Institutions will also need to register high-risk AI systems in an EU-wide database.
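To make the risk-based approach concrete, here is a minimal, hypothetical sketch of how a firm might triage its AI inventory into tiers and derive the resulting compliance workload. The category names, use cases, and task list are illustrative assumptions, not the Act’s actual taxonomy or any official API:

```python
# Hypothetical sketch: triaging an AI inventory under a risk-based scheme.
# Use-case labels and compliance tasks are illustrative, not official terms.
from dataclasses import dataclass

# Use cases this (fictional) firm treats as high-risk under the Act
HIGH_RISK_USES = {"credit_scoring", "fraud_detection", "kyc", "risk_scoring"}

@dataclass
class AISystem:
    name: str
    use_case: str
    provider: str  # internal team or third-party vendor

def classify(system: AISystem) -> str:
    """Return the risk tier that drives the compliance workload."""
    return "high" if system.use_case in HIGH_RISK_USES else "minimal"

def compliance_tasks(system: AISystem) -> list[str]:
    """High-risk systems pick up the heavier obligations described above."""
    tasks = ["inventory_entry"]
    if classify(system) == "high":
        tasks += ["risk_assessment", "documentation",
                  "human_oversight_plan", "eu_database_registration"]
    return tasks

inventory = [
    AISystem("loan-decisioning", "credit_scoring", "vendor-a"),
    AISystem("chat-faq-bot", "customer_support", "internal"),
]
for s in inventory:
    print(s.name, classify(s), compliance_tasks(s))
```

Note that the vendor-provided `loan-decisioning` system lands in the high-risk tier just like an internal one would, which is exactly why vendor AI must sit inside the same inventory.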

For CIOs, Chief Data Officers, and risk managers, this regulation sends a key message: AI governance isn’t a nice-to-have, it’s a foundational requirement. Since the rules apply to AI provided by third parties, financial firms will need to look beyond the boundaries of their organizations. To understand their supply chains, they must make assessing and monitoring AI vendors part of standard due diligence.

U.S. Interagency Guidance: A Framework with Fresh Momentum

Building on earlier agency-specific guidance and finalized jointly by the Federal Reserve, FDIC, and OCC in June 2023, the U.S. Interagency Guidance lays out how financial institutions should manage third-party risk throughout the entire vendor lifecycle, from initial due diligence and contracts to ongoing monitoring.

With today’s cloud-heavy, API-driven infrastructure, this guidance is more relevant than ever. It emphasizes clear contracts, continued oversight, and ownership of risk, even when key functions are outsourced. A key update in Appendix J pushes institutions to conduct joint testing with critical third parties.

For CTOs and vendor managers, this means contracts need to do more than define SLAs. They must include performance metrics, escalation procedures, and audit rights. Meanwhile, regulators now expect continuous monitoring of vendor performance, not just an annual check-in.
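The shift from annual review to continuous monitoring can be sketched as turning contract SLAs into machine-checked thresholds. The SLA fields, metric names, and vendor below are illustrative assumptions; in practice these would be fed from a metrics pipeline on a schedule:

```python
# Hypothetical sketch: SLA terms from a vendor contract expressed as
# thresholds that a monitoring job evaluates continuously, not once a year.
from dataclasses import dataclass

@dataclass
class SLA:
    max_p95_latency_ms: float  # contract: 95th-percentile latency ceiling
    min_uptime_pct: float      # contract: minimum monthly uptime

def evaluate(vendor: str, sla: SLA, metrics: dict) -> list[str]:
    """Return breaches that should trigger the contract's escalation path."""
    breaches = []
    if metrics["p95_latency_ms"] > sla.max_p95_latency_ms:
        breaches.append(
            f"{vendor}: p95 latency {metrics['p95_latency_ms']}ms over SLA")
    if metrics["uptime_pct"] < sla.min_uptime_pct:
        breaches.append(
            f"{vendor}: uptime {metrics['uptime_pct']}% under SLA")
    return breaches

# Example run against one reporting window of (fictional) vendor metrics
print(evaluate("payments-api", SLA(max_p95_latency_ms=250, min_uptime_pct=99.9),
               {"p95_latency_ms": 310, "uptime_pct": 99.95}))
```

The design point is that the same thresholds written into the contract become the alerting rules, so escalation rights are exercised on evidence rather than on the memory of an annual review.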

Canada’s Guideline B-10: Elevating Third-Party Risk to a Governance Level

Canada’s Guideline B-10, issued by OSFI, takes a policy-driven approach, emphasizing that third-party risk isn’t just an operational concern but a board-level issue.

The guideline requires financial institutions to develop formal outsourcing policies approved by the board. These must include criteria for vendor selection, thresholds for acceptable risk, and a framework for ongoing monitoring. It also mandates access and audit rights, plus regular reviews of outsourcing arrangements.

What stands out is the strategic tone. Guideline B-10 encourages boards and executives to treat third-party risk management as a critical aspect of corporate governance, not just a back-office task. In my opinion, these guidelines build upon and integrate existing BCBS standards, including the revised Principles for the Sound Management of Operational Risk, published in March 2021.

DORA: Making Cyber Resilience a Strategic Priority

The EU’s Digital Operational Resilience Act (DORA) is part of a broader push to modernize financial sector regulation. It aims to ensure that institutions are proactive and prepared to deal with cyber incidents and ICT disruptions, not just react to them. Furthermore, it moves beyond traditional risk management to require operational resilience by design and expands requirements to ICT providers supporting the financial industry.

DORA sets requirements for robust ICT risk management, incident reporting, resilience testing, and oversight of third-party tech providers, especially cloud vendors. It demands proof that firms can withstand, recover from, and learn from disruptions, whether internal or from an external partner.

For tech leaders and security teams, DORA is a call to integrate cyber resilience into every layer of architecture and operations. For vendor managers, it means applying to third parties the same standards used internally, something financial institutions have needed for a long time.

Key Themes Emerging Across All Four Frameworks

Despite differences in scope and geography, a few shared expectations emerge:

  • Accountability Is Central: Accountability cannot be outsourced, and even less so with the adoption of AI. Whether it’s AI, outsourcing, or cyber defense, regulators expect institutions to retain full accountability even when the activity is performed by a third party.
  • Resilience Requires Integration: Siloed approaches no longer work. Risk, compliance, procurement, and tech teams must collaborate to build resilience into the foundation across organizations; it’s a multi-player game.
  • Governance Must Scale with Complexity: Board involvement, executive ownership, and real-time visibility are all essential. As our digital ecosystems grow more complex, so must governance. 

If your organization is subject to one or more of these regulations, it's essential to recognize their shared principles and avoid siloed compliance efforts. Instead of approaching each in isolation, focus on the underlying intent—align your strategy with the outcomes these frameworks aim to achieve, not just with ticking regulatory boxes.

Focus Areas for Leadership

  • Dependency Mapping: Understand where your third-party relationships sit, how data flows, and where AI plays a role.
  • Embed Risk into Design: Evaluate risks and compliance issues before solutions go live, not post implementation.
  • Align Governance & Execution: Make sure board-level strategies are reflected in day-to-day execution—and vice versa.
  • Test for Real-World Scenarios: Move beyond checklists. Simulate plausible disruptions like vendor downtime or AI model drift to see how your systems and people respond.
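Dependency mapping and real-world testing meet in a simple question: if this vendor goes down, what else fails? Below is a minimal, hypothetical sketch of that walk over a toy dependency map; the service and vendor names are invented for illustration:

```python
# Hypothetical sketch: a tiny dependency map and a "what fails if X fails"
# walk, the kind of question a tabletop resilience exercise should answer.

# service -> direct dependencies (internal services or third-party vendors)
deps = {
    "online-banking": ["auth-service", "core-ledger"],
    "auth-service": ["vendor-idp"],
    "core-ledger": ["cloud-db-vendor"],
}

def impacted_by(failed: str) -> set[str]:
    """All services that directly or transitively depend on the failed component."""
    hit: set[str] = set()
    changed = True
    while changed:  # propagate failure until the impacted set stabilizes
        changed = False
        for svc, ds in deps.items():
            if svc not in hit and (failed in ds or hit & set(ds)):
                hit.add(svc)
                changed = True
    return hit

# A single identity-provider outage knocks out auth and everything above it
print(impacted_by("vendor-idp"))
```

Even a toy map like this makes concentration risk visible: a vendor that appears once in the table can still take down most of the customer-facing stack.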

Closing Thoughts

Gone are the days when informal vendor arrangements or experimental AI deployments were acceptable. Regulators expect more, and so do customers. From experience, institutions that embrace this challenge proactively, treating it as a strategic opportunity instead of a compliance burden, stand to gain more than just reduced risk. They’ll gain trust, agility, and a competitive edge in a rapidly evolving landscape.
