Is my AI high-risk under Annex III? A decision tree for EU SMB SaaS
The EU AI Act divides AI into four categories: unacceptable, high-risk, limited, and minimal. Most SMB SaaS falls into minimal or limited risk, which means a low compliance burden. But if your AI touches recruitment, credit, education, healthcare, biometrics, critical infrastructure, law enforcement, migration, or courts, it is HIGH-RISK: full compliance burden, with penalties up to €15M or 3% of global turnover.
The decision tree below will help you determine your tier in 5 minutes.
In short: the EU AI Act defines four risk tiers. 02.08.2026 is the enforcement deadline for high-risk (Annex III) systems. Over 60% of EU SMBs haven't started compliance yet. The penalty for high-risk violations is €15M or 3% of global turnover. Most SMB SaaS is minimal or limited risk (light burden); HR-Tech, FinTech credit, EdTech, HealthTech, and biometrics products are high-risk under Annex III, with full Articles 9-15 obligations.
Why this is urgent
02.08.2026: enforcement deadline of the EU AI Act for high-risk (Annex III) systems.
Over 60% of EU SMBs haven't started compliance (Conversantech, SoftwareSeni, Center for Data Innovation surveys 2025-2026: "fewer than 30% have taken any steps").
Penalty for a high-risk violation: €15M or 3% of global turnover (Article 99 EU AI Act).
The Digital Omnibus of 13.05.2026 may push the deadline to 02.12.2027, but that's not certain (the 28.04.2026 trilogue FAILED without agreement). Hoping for a deferral is not a compliance strategy.
4 EU AI Act risk categories
🔴 Tier 0: UNACCEPTABLE RISK — banned
Banned AI systems (Article 5):
- Social scoring by governments / public authorities
- Real-time biometric ID in public spaces (with narrow law enforcement exceptions)
- Emotion recognition in workplace / education
- Predictive policing based solely on profiling
- Untargeted facial scraping from internet / CCTV
- Subliminal manipulation AI
- Exploitation of vulnerabilities (children, disability, socioeconomic)
Action: STOP immediately. Cease deployment. Penalty: €35M / 7% global turnover.
🟠 Tier 1: HIGH RISK — Annex III, full compliance burden
8 areas (decision tree below).
Compliance requirements:
- Risk management system (Article 9)
- Data governance + bias testing (Article 10)
- Technical documentation (Annex IV)
- Logging / audit trail (Article 12)
- Transparency notices (Article 13)
- Human oversight mechanism (Article 14)
- Accuracy + robustness testing (Article 15)
- CE marking
- EU database registration (Annex VIII)
- Post-market monitoring
Cost: self-assessment €9,500-14,500 plus 4-6 weeks of internal time, plus €5-9k for legal review.
Penalty: €15M / 3% global turnover.
🟡 Tier 2: LIMITED RISK — transparency only
- Chatbots interacting with humans
- AI-generated content (deepfakes)
- Emotion recognition (informational only, NOT workplace/education)
- Biometric categorization (non-prohibited contexts)
Compliance: Transparency disclosure — user must know they're interacting with AI / content is AI-generated.
Cost: ~€500-2,000 (mostly UI text + terms-of-service updates).
🟢 Tier 3: MINIMAL RISK — no obligations
- Spam filters
- Basic recommender systems
- AI in video games
- Marketing personalization
- Inventory management AI
- Internal AI tools with no Annex III touchpoints and no user-facing interaction (a customer-facing chatbot is limited risk, see Tier 2)
Compliance: Voluntary code of conduct (optional). NOT mandatory.
Cost: €0.
Decision tree — Is your AI high-risk?
START
Q1: Is your AI BANNED (Tier 0)?
├─ Social scoring by government?
├─ Real-time biometric ID in public space (outside law-enforcement exceptions)?
├─ Emotion recognition in workplace / education?
├─ Predictive policing solely on profiling?
├─ Untargeted facial scraping?
├─ Subliminal manipulation?
└─ Exploitation of vulnerabilities (children, etc.)?
│
├─ YES (any) → 🔴 BANNED — cease EU operations immediately
└─ NO → Q2
Q2: Does your AI fall into ANY Annex III area?
1. BIOMETRIC ID & CATEGORIZATION
(excluding verification for personal use)
└─ Examples: facial recognition, voice ID, gait analysis,
emotion detection in surveillance contexts
2. CRITICAL INFRASTRUCTURE
(energy, transport, water, gas, electricity)
└─ Examples: grid load AI, traffic management AI,
water quality monitoring AI
3. EDUCATION & VOCATIONAL TRAINING
└─ Examples: admissions screening AI, automated grading,
plagiarism detection, candidate evaluation,
proctoring AI, learning analytics affecting access
4. EMPLOYMENT & WORKER MANAGEMENT
└─ Examples: CV screening, candidate ranking, interview AI,
performance evaluation, productivity monitoring,
promotion decisions, dismissal recommendations
5. ESSENTIAL PRIVATE/PUBLIC SERVICES
(credit, insurance, healthcare, public benefits)
└─ Examples: credit scoring, insurance pricing,
healthcare diagnosis/treatment AI,
welfare benefit eligibility, emergency triage
6. LAW ENFORCEMENT
└─ Examples: risk assessment for individuals,
polygraph alternatives, evidence evaluation,
profiling for criminal investigation
7. MIGRATION, ASYLUM & BORDER CONTROL
└─ Examples: asylum claim assessment AI,
border risk scoring, document verification
8. ADMINISTRATION OF JUSTICE
└─ Examples: judicial decision support,
legal research AI with autonomy,
democratic process AI
│
├─ YES (any) → 🟠 HIGH-RISK Annex III
│ Full Articles 9-15 obligations + conformity assessment
└─ NO → Q3
Q3: Is your AI a CHATBOT / GENERATIVE / EMOTION RECOGNITION?
│
├─ YES → 🟡 LIMITED RISK — transparency disclosure (Article 50)
└─ NO → 🟢 MINIMAL RISK — voluntary code of conduct only
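The Q1-Q3 flow above can be sketched as a small classifier. This is an illustrative sketch, not legal advice: the label names below are our own shorthand, not terms defined in the Act.

```python
# Illustrative sketch of the Q1-Q3 decision tree; label names are our own
# shorthand, not terms from the Act, and this is not legal advice.

BANNED_PRACTICES = {            # Q1: Article 5 prohibitions (Tier 0)
    "social_scoring", "realtime_public_biometric_id",
    "workplace_emotion_recognition", "predictive_policing_profiling",
    "untargeted_facial_scraping", "subliminal_manipulation",
    "vulnerability_exploitation",
}

ANNEX_III_AREAS = {             # Q2: the eight Annex III areas (Tier 1)
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

TRANSPARENCY_TRIGGERS = {       # Q3: Article 50 transparency (Tier 2)
    "chatbot", "generative_content", "emotion_recognition",
}

def classify(practices: set, domains: set, features: set) -> str:
    """Walk Q1 -> Q2 -> Q3; return the risk tier for one AI system."""
    if practices & BANNED_PRACTICES:
        return "UNACCEPTABLE"   # cease EU deployment
    if domains & ANNEX_III_AREAS:
        return "HIGH"           # full Articles 9-15 obligations
    if features & TRANSPARENCY_TRIGGERS:
        return "LIMITED"        # disclosure only
    return "MINIMAL"            # voluntary code of conduct

# e.g. a CV-screening product:
print(classify(set(), {"employment"}, set()))   # HIGH
```

Run each of your systems through `classify` once; order matters, because a banned practice trumps an Annex III domain, which trumps a transparency trigger.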
Common SaaS use cases — quick classification
| Use case | Annex III? | Tier |
|---|---|---|
| CV screening AI for HR-Tech | #4 employment | 🟠 HIGH |
| Credit scoring for lending | #5 essential services | 🟠 HIGH |
| AI tutoring with grade impact | #3 education | 🟠 HIGH |
| Medical diagnostic AI | #5 healthcare | 🟠 HIGH |
| Insurance risk pricing AI | #5 essential services | 🟠 HIGH |
| Customer support chatbot | No | 🟡 LIMITED (Art. 50) |
| Marketing email personalization | No | 🟢 MINIMAL |
| Product recommendation engine | No | 🟢 MINIMAL |
| Spam filter | No | 🟢 MINIMAL |
| Code generation assistant | No | 🟢 MINIMAL |
| Sales lead scoring | No (B2B) | 🟢 MINIMAL |
| AI deepfake / image generation | No | 🟡 LIMITED (Art. 50) |
| Sentiment analysis (customer) | No | 🟢 MINIMAL |
| Fraud detection (banking) | Edge case (#5?) | 🟠 HIGH (likely) |
| Workplace productivity monitoring | #4 employment | 🟠 HIGH |
Concrete example: TalentAI Inc. (US HR-Tech expanding to EU)
A fictional US Series A startup: 80 employees, headquartered in San Francisco, and it just signed its first 3 EU enterprise customers (DAX firms in Germany). Their product:
System 1: AI Resume Screener
ML model that ranks candidates based on resume + job description. Decisions affect interview shortlist (recruiter sees only top 20% from AI ranking).
Classification: Annex III #4 employment → 🟠 HIGH-RISK
Why: the AI affects an employment decision (who gets interviewed). Even if a human (the recruiter) reviews the shortlist, the AI gates access; that counts as "AI substantially influencing an employment decision".
System 2: AI Interview Transcript Analyzer
Analyzes interview recording, extracts key competencies, generates summary report for hiring manager.
Classification: Annex III #4 → 🟠 HIGH-RISK
Why: Same employment domain, same potential influence on decisions. Even if hiring manager makes final call, AI summary affects perception.
System 3: Marketing Chatbot
Pre-sales chatbot on website, answers questions about pricing, features. Generates leads for sales team.
Classification: NOT Annex III → 🟡 LIMITED RISK (Article 50)
Why: Customer-facing chatbot needs transparency disclosure ("you're chatting with AI"), but no Annex III obligations.
What this classification means for TalentAI
2 of 3 systems are HIGH-RISK Annex III. Compliance burden:
- Articles 9-15 implementation (8-16 weeks of focused work)
- Annex VI conformity assessment (4-12 weeks process)
- EU database registration before deployment
- EU authorised representative appointment (they have no EU office)
- CE marking
- Post-market monitoring
Estimated effort: 6-12 weeks internal team work + €5-15k legal review. Total $30-80k all-in.
Vs the cost of non-compliance: up to €15M per violation under Article 99(4). The SME cap in Article 99(6) takes the lower of the flat amount and the turnover percentage, so €15M is the absolute ceiling (and 3% of turnover caps it lower for small firms). Class-action exposure comes on top.
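The Article 99 arithmetic is easy to make concrete. A minimal sketch, assuming you know annual worldwide turnover; the function name is ours:

```python
# Article 99(4): for high-risk violations, up to EUR 15M or 3% of worldwide
# annual turnover, whichever is HIGHER. Article 99(6): for SMEs, take
# whichever is LOWER. Illustrative arithmetic only, not legal advice.

FLAT_CAP_EUR = 15_000_000

def max_fine_eur(annual_turnover_eur: int, is_sme: bool) -> int:
    """Upper bound of the fine for one high-risk violation, in EUR."""
    pct_cap = annual_turnover_eur * 3 // 100    # 3% of turnover
    if is_sme:
        return min(FLAT_CAP_EUR, pct_cap)       # Art. 99(6): lower of the two
    return max(FLAT_CAP_EUR, pct_cap)           # Art. 99(4): higher of the two

# A Series A SMB with EUR 10M turnover: the cap is 3% = EUR 300k, not EUR 15M.
print(max_fine_eur(10_000_000, is_sme=True))    # 300000
```

The lower-of rule is why €15M is an absolute ceiling for SMEs; for most SMBs the binding cap is 3% of turnover, which is still existential money.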
5 most common classification mistakes
Trap 1: "We have a chatbot = high-risk"
Wrong. Chatbots are LIMITED RISK (Article 50 transparency only), unless your chatbot makes decisions about employment, credit, healthcare, etc.; then it falls into Annex III via that domain. A generic customer support bot is limited risk.
Trap 2: "Lead scoring = HR = high-risk"
Wrong. Lead scoring of B2B prospects (which company to call) is NOT employment AI. Annex III #4 is about evaluating natural persons in employment context (job applicants, employees). Lead scoring is sales AI = MINIMAL RISK.
Trap 3: "Marketing personalization = profiling = high-risk"
Wrong. Marketing personalization (showing different content based on browsing) is MINIMAL RISK. GDPR Article 22 (automated decision-making with legal effects) is a different framework — and personalization doesn't trigger Article 22 unless decisions have "legal or significant effect" on the individual.
Trap 4: "Healthcare AI = automatically high-risk"
Partially wrong. Annex III #5 covers healthcare AI for "essential services" — diagnosis, treatment recommendation, patient triage. But healthcare-adjacent AI (appointment scheduling, billing, hospital inventory) is NOT high-risk. Distinguish patient-affecting from operational AI.
Trap 5: "EU AI Act only applies to EU companies"
Wrong — and most dangerous misconception for US founders. Article 2 of EU AI Act applies extraterritorially to "providers and deployers... located outside the EU where the output of the AI system is used in the Union". Same as GDPR.
If your US-based AI processes data of EU residents, or its output reaches EU users, you're in scope. Non-EU companies were caught unaware by GDPR in 2018; expect the same pattern with the AI Act in 2026-2027.
See our full guide for US SaaS expanding to the EU.
What to do now (concrete actions)
1. Inventory your AI systems (15 min)
List every AI feature in your product. For each: what it does, what data it uses, who consumes the output. Most companies are surprised how much "AI" is hidden in features they didn't think of.
2. Per-system decision tree (30 min)
For each AI system, run through the decision tree above. Output: classification (Tier 0/1/2/3) per system.
3. Gap analysis (1-2h per high-risk system)
For each high-risk system, map its current state against the Articles 9-15 requirements. See our companion pieces on Article 10 data governance and Article 14 human oversight for the details.
4. Remediation roadmap
Prioritized action plan with effort estimates. Aim for 6-12 weeks total compliance work for HR-Tech-style products, longer for FinTech credit / HealthTech.
5. Legal review
EU AI Act-specialized lawyer reviews documentation before formal conformity assessment. €5-15k typical for SMB scope.
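Steps 1-3 above fit in a spreadsheet, or in a few lines of code. A hypothetical sketch (system names, fields, and statuses are invented for illustration):

```python
# Hypothetical AI-system inventory (step 1) plus an Articles 9-15 gap
# analysis (step 3) for the high-risk systems. All names are illustrative.

ART_9_15 = {
    "Art 9":  "risk management system",
    "Art 10": "data governance + bias testing",
    "Art 11": "technical documentation (Annex IV)",
    "Art 12": "logging / audit trail",
    "Art 13": "transparency notices",
    "Art 14": "human oversight mechanism",
    "Art 15": "accuracy + robustness testing",
}

inventory = [
    {"name": "resume_screener", "tier": "HIGH",    "done": {"Art 12"}},
    {"name": "support_chatbot", "tier": "LIMITED", "done": set()},
]

def gap_report(system: dict) -> list:
    """List the Articles 9-15 requirements a high-risk system still misses."""
    if system["tier"] != "HIGH":
        return []                     # only Annex III systems carry Art. 9-15
    return [f"{art}: {desc}" for art, desc in ART_9_15.items()
            if art not in system["done"]]

for s in inventory:
    print(s["name"], len(gap_report(s)), "open gaps")
```

The output of this exercise (one gap list per high-risk system) is exactly what feeds the remediation roadmap in step 4 and the legal review in step 5.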
Skip the manual process — €799 audit (founding tier)
We run focused EU AI Act audits for SMB SaaS. In 5 days you get: full Annex III classification, system inventory, gap analysis Articles 9-15, prioritized roadmap, PDF report + Loom walkthrough. 30-day money-back guarantee.
Order audit →
Sources & references
- Annex III high-risk areas (full text)
- Article 6 — Risk classification
- Article 99 — Penalties
- Article 10 data governance — 7 mistakes
- Article 14 human oversight — 7 requirements
- GPAI obligations for EU SMB SaaS
- EU AI Act for US SaaS expanding to EU