
Responsible AI FAQs

January 2026 | AI Governance · Deep Dive

When is an AI system considered high risk?

An AI system is classified as high risk when it meets the definitions for Sensitive Use or Restricted Use, or when it has the potential for significant adverse impacts on people, organizations, or society.

High-risk classification commonly applies to systems that generate outputs directly affecting the allocation of resources or opportunities in essential domains, including:

  • Finance and Insurance
  • Education and Employment
  • Healthcare and Social Welfare
  • Housing
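As a rough illustration only (the domain names and the `is_high_risk` helper below are hypothetical, not drawn from any specific regulation or internal policy), the classification rule above can be sketched as a simple predicate:

```python
# Hypothetical sketch: a system is high risk if it meets Sensitive Use or
# Restricted Use definitions, or if it allocates resources or opportunities
# in an essential domain.
ESSENTIAL_DOMAINS = {
    "finance", "insurance", "education", "employment",
    "healthcare", "social_welfare", "housing",
}

def is_high_risk(domains: set[str], allocates_resources: bool,
                 sensitive_use: bool = False,
                 restricted_use: bool = False) -> bool:
    """Return True when the system meets the high-risk criteria above."""
    if sensitive_use or restricted_use:
        return True
    return allocates_resources and bool(domains & ESSENTIAL_DOMAINS)
```

For example, a tenant-screening model (`{"housing"}`, allocating opportunities) would classify as high risk, while a cosmetic recommendation feature outside these domains would not.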

What must exist before deploying a high-risk AI system?

Before deployment, multiple governance artifacts and technical controls must be in place:

  1. Impact Assessment: Completed early and reviewed by compliance.
  2. Sensitive Use Reporting: Systems meeting Sensitive Use criteria must be reported to the Office of Responsible AI (ORAI).
  3. Responsible Release Criteria (RRC): Defined metrics for performance, thresholds, and error tolerances.
  4. Human Oversight Plan: Documented mechanisms to override or interrupt the system.
  5. Transparency Note: Documentation demonstrating the system is fit for purpose.
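The five artifacts above function as a deployment gate: release is blocked until each required item exists. A minimal sketch of such a gate (the artifact keys and `deployment_gate` function are illustrative assumptions, not a real compliance API) might look like:

```python
# Hypothetical pre-deployment gate: returns the artifacts still missing.
REQUIRED_ARTIFACTS = [
    "impact_assessment",
    "sensitive_use_report",        # required only when Sensitive Use criteria are met
    "responsible_release_criteria",
    "human_oversight_plan",
    "transparency_note",
]

def deployment_gate(completed: set[str], sensitive_use: bool) -> list[str]:
    """Return the sorted list of missing artifacts; an empty list means clear to deploy."""
    required = set(REQUIRED_ARTIFACTS)
    if not sensitive_use:
        required.discard("sensitive_use_report")
    return sorted(required - completed)
```

In practice a CI/CD pipeline could call such a check and refuse to promote a model while the returned list is non-empty.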

Who is accountable for Responsible AI failures?

Accountability rests with the individuals and teams who design, deploy, and operate the systems.

  • Human Control: AI must not be the final authority in decisions affecting people.
  • Operational Accountability: Stakeholders must be clearly identified for post-deployment troubleshooting.
  • Traceability: MLOps lineage must track model versions, approvals, and changes.
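The traceability requirement is often met with an append-only lineage log tying each model version to its approver and change summary. A minimal sketch (the `LineageRecord` structure and field names are assumptions for illustration, not a specific MLOps tool's schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class LineageRecord:
    """One immutable entry in the model lineage trail."""
    model_version: str
    approved_by: str
    change_summary: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Append-only log: every deployed version carries a traceable approval trail.
lineage_log: list[LineageRecord] = []
lineage_log.append(
    LineageRecord("v2.1.0", "compliance-team", "retrained on Q4 data")
)
```

Freezing the dataclass and only appending (never mutating) keeps the trail auditable after an incident.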

What comes first: retraining or incident escalation?

Incident Escalation.

When failures occur:

  1. Safety First: Immediate escalation to ORAI. Execute rollback plans and disable features to prevent harm.
  2. Remediation: Technical responses such as retraining come only after the immediate safety actions are complete.
  3. Mitigation: Consult with reviewers/ORAI if evaluation targets are missed.
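The ordering above can be sketched as a simple handler that always runs containment before remediation (the function and action names are hypothetical, used here only to make the sequencing explicit):

```python
# Hypothetical incident handler: safety actions (escalate, roll back,
# disable) always execute before any remediation such as retraining.
def handle_incident(escalate, rollback, disable_feature, schedule_retraining):
    actions_taken = []
    # 1. Safety first: escalate to ORAI and contain the harm immediately.
    escalate()
    actions_taken.append("escalated_to_orai")
    rollback()
    actions_taken.append("rolled_back")
    disable_feature()
    actions_taken.append("feature_disabled")
    # 2. Remediation only once the system is safe.
    schedule_retraining()
    actions_taken.append("retraining_scheduled")
    return actions_taken
```

The key property is structural: retraining cannot be reached until the three containment steps have run.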

Analogy: Think of it like a commercial elevator. If sensors detect an imbalance, the priority is to trigger the alarm and stop the car (Incident Response), not to re-calibrate the sensors (Retraining) while people are still inside.