AI Risk Assessment: Enterprise Guide
Why You Need a Risk Matrix
Not all AI systems carry the same risk. A chatbot recommending restaurants is fundamentally different from a model deciding who gets a loan. Treating them the same wastes resources on low-risk systems and under-protects high-risk ones.
A risk matrix maps two dimensions: probability of harm and severity of impact. This gives you four quadrants and a clear basis for resource allocation.
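To make the two dimensions concrete, here is a minimal sketch of the quadrant lookup. The threshold labels ("low"/"high") and the quadrant names are illustrative assumptions, not terms defined by any regulation:

```python
def risk_quadrant(probability: str, severity: str) -> str:
    """Map a probability/severity pair onto one of four quadrants.

    Both inputs are expected to be "low" or "high"; the quadrant
    labels are illustrative, not a regulatory vocabulary.
    """
    quadrants = {
        ("low", "low"): "monitor only",
        ("low", "high"): "mitigate and document",
        ("high", "low"): "reduce likelihood",
        ("high", "high"): "priority remediation",
    }
    return quadrants[(probability, severity)]

print(risk_quadrant("high", "high"))  # -> "priority remediation"
```

In practice most teams use a finer scale (e.g. a 5x5 grid), but the principle is the same: the quadrant, not intuition, drives how much control effort each system gets.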
EU AI Act Risk Tiers
The EU AI Act provides the most comprehensive regulatory framework for AI risk classification. Its obligations phase in: the bans on unacceptable-risk practices took effect in February 2025, and most remaining requirements, including those for high-risk systems, apply to companies operating in the EU from August 2026:
- Unacceptable risk: Banned outright. Social scoring, real-time biometric surveillance in public spaces, manipulation of vulnerable groups.
- High risk: Requires conformity assessments, human oversight, and technical documentation. Includes recruitment tools, credit scoring, law enforcement, and critical infrastructure.
- Limited risk: Transparency obligations. Users must know they're interacting with AI. Chatbots, deepfake generators, emotion recognition.
- Minimal risk: No specific obligations. Spam filters, video game AI, basic recommendations.
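One way to carry these tiers through the rest of the process is to encode them directly, so every inventoried system stores its tier and the headline obligation that follows from it. The obligation strings below are paraphrases of the bullet points above, not legal text:

```python
from enum import Enum

class EUAIActTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Headline obligation per tier, paraphrasing the list above.
TIER_OBLIGATIONS = {
    EUAIActTier.UNACCEPTABLE: "prohibited -- do not deploy",
    EUAIActTier.HIGH: "conformity assessment, human oversight, technical documentation",
    EUAIActTier.LIMITED: "transparency: users must know they are interacting with AI",
    EUAIActTier.MINIMAL: "no specific obligations",
}

print(TIER_OBLIGATIONS[EUAIActTier.HIGH])
```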
The Assessment Process
Step 1: System inventory. List every AI system, including vendor tools and embedded AI in SaaS products. Most companies miss 30-40% of their AI footprint on the first pass.
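A lightweight inventory record can be as simple as the sketch below; the field names are assumptions chosen to make the point that vendor tools and embedded AI belong in the same register as in-house models:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemRecord:
    """One row in the AI system inventory (field names are illustrative)."""
    name: str
    owner: str                      # accountable team or person
    vendor: Optional[str] = None    # None for systems built in-house
    embedded_in_saas: bool = False  # AI features inside a SaaS product count too

inventory = [
    AISystemRecord("resume-screener", owner="HR Ops",
                   vendor="ExampleVendor", embedded_in_saas=True),
    AISystemRecord("churn-model", owner="Data Science"),
]
print(len(inventory), "systems inventoried")
```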
Step 2: Impact analysis. For each system, answer: What decisions does it influence? Who is affected? What happens when it fails? Can the affected person appeal?
Step 3: Classify. Map each system to a risk tier. When in doubt, classify higher. Reclassifying downward is easier than explaining to regulators why you under-classified.
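Steps 2 and 3 can be wired together: capture the impact answers as data, then map them to a tier, defaulting upward when the answers point to harm or missing recourse. The decision rules below are illustrative assumptions for a first-pass triage, not a reading of the Act:

```python
from dataclasses import dataclass

@dataclass
class ImpactProfile:
    """Answers to the Step 2 questions for one system."""
    decisions_influenced: str          # e.g. "loan approval", "content ranking"
    affects_individuals_rights: bool   # employment, credit, legal status, etc.
    failure_causes_material_harm: bool
    affected_person_can_appeal: bool
    interacts_directly_with_users: bool

def classify(profile: ImpactProfile) -> str:
    """Illustrative tiering rules; when in doubt, classify higher."""
    if profile.affects_individuals_rights or profile.failure_causes_material_harm:
        return "high"
    if not profile.affected_person_can_appeal:
        return "high"      # conservative default when there is no recourse
    if profile.interacts_directly_with_users:
        return "limited"
    return "minimal"

print(classify(ImpactProfile("credit scoring", True, True, True, False)))  # -> "high"
```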
Step 4: Define controls. Each tier gets minimum requirements. High-risk systems need documentation, testing, monitoring, and human oversight. Document your rationale for each classification.
Step 5: Review schedule. High-risk systems: quarterly review. Limited risk: semi-annual. Minimal risk: annual. Any system change triggers a re-assessment.
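Steps 4 and 5 reduce to a per-tier lookup. The sketch below assumes the control lists and review intervals from the steps above are your internal policy; adjust both to your own requirements:

```python
from datetime import date, timedelta

# Minimum controls and review interval per tier (mirrors the steps above).
TIER_POLICY = {
    "high":    {"controls": ["documentation", "testing", "monitoring", "human oversight"],
                "review_every_days": 90},   # quarterly
    "limited": {"controls": ["transparency notice"],
                "review_every_days": 182},  # semi-annual
    "minimal": {"controls": [],
                "review_every_days": 365},  # annual
}

def next_review(tier: str, last_review: date) -> date:
    """Compute the next scheduled review; any system change resets this clock."""
    return last_review + timedelta(days=TIER_POLICY[tier]["review_every_days"])

print(next_review("high", date(2025, 1, 15)))  # -> 2025-04-15
```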
Common Mistakes
Ignoring vendor AI is the most common gap. If you use a SaaS platform with embedded AI for hiring, that's your risk — not the vendor's. The second mistake: static assessments. Risk profiles change as models retrain, data shifts, and regulations evolve.
Combine risk assessment with bias detection and proper model documentation for complete coverage. More at the AI Governance hub.


