Is my AI high-risk under Annex III? A decision tree for EU SMB SaaS
The EU AI Act divides AI into 4 categories: unacceptable / high-risk / limited / minimal. Most SMB SaaS is minimal or limited = low compliance burden. But if your AI touches recruitment, credit, education, healthcare, biometrics, critical infrastructure, law enforcement, migration, or the courts = HIGH-RISK = full compliance burden + €15M / 3% turnover penalty.
The decision tree below will help you work it out in 5 minutes.
Why this is urgent
02.08.2026 = the EU AI Act enforcement deadline for high-risk systems (Annex III).
Over 60% of EU SMBs have not started compliance work (multiple compliance surveys, 2025-2026: Conversantech, SoftwareSeni; in late 2025 the Center for Data Innovation reported "fewer than 30% have taken any steps").
Penalty for a high-risk violation: €15M or 3% of global turnover (Article 99 EU AI Act).
The Digital Omnibus of 13.05.2026 may push the deadline back to 02.12.2027, but that is not certain (the 28.04.2026 trilogue ended without agreement). Hoping for a deferral ≠ a compliance strategy.
The 4 EU AI Act risk categories
🔴 Tier 0: UNACCEPTABLE RISK — banned
Banned AI systems (Article 5):
- Social scoring by governments / public authorities
- Real-time biometric ID in public spaces (outside law enforcement exceptions)
- Emotion recognition in the workplace / education
- Predictive policing based solely on profiling
- Untargeted facial scraping from the internet / CCTV
- Subliminal manipulation AI
- Exploitation of vulnerabilities (children, disability, socioeconomic situation)
Action: STOP immediately. Cease deployment. Penalty: €35M / 7% global turnover.
🟠 Tier 1: HIGH RISK — Annex III, full compliance burden
8 areas (see the decision tree below).
Compliance requirements:
- Risk management system
- Data governance + bias testing
- Technical documentation (Annex IV)
- Logging / audit trail
- Transparency notices
- Human oversight mechanism
- Accuracy + robustness testing
- CE marking
- EU database registration (Annex VIII)
- Post-market monitoring
Cost: Self-assessment €9,500-14,500 + 4-6 weeks internal time. Plus €5-9k legal review.
Penalty: €15M / 3% global turnover.
🟡 Tier 2: LIMITED RISK — transparency only
- Chatbots interacting with humans
- AI-generated content (deepfakes)
- Emotion recognition (informational only, NOT workplace/education)
- Biometric categorization (non-prohibited contexts)
Compliance: Transparency disclosure — user must know they're interacting with AI / content is AI-generated.
Cost: ~€500-2,000 (mostly UI text + terms & conditions updates).
🟢 Tier 3: MINIMAL RISK — no obligations
- Spam filters
- Basic recommender systems
- AI in video games
- Marketing personalization
- Inventory management AI
- General-purpose chatbots without Annex III categories
Compliance: Voluntary code of conduct (optional). NOT mandatory.
Cost: €0.
Decision tree: is your AI high-risk?
START
Q1: Is the AI BANNED (Tier 0)?
├─ Social scoring by gov?
├─ Real-time biometric ID in public spaces (non-LE)?
├─ Emotion recognition in the workplace / education?
├─ Predictive policing solely on profiling?
├─ Untargeted facial scraping?
├─ Subliminal manipulation?
└─ Exploitation of vulnerabilities?
YES → 🔴 UNACCEPTABLE. STOP. Cease deployment. €35M penalty.
NO → Q2
Q2: Does the AI touch decisions in one of the 8 Annex III areas?
1. BIOMETRICS
- Categorization by sensitive attributes (race, religion, political views)
- Emotion recognition (outside prohibited contexts)
2. CRITICAL INFRASTRUCTURE
- Road traffic, water, gas, electricity, digital infrastructure
3. EDUCATION
- Admission decisions
- Exam scoring / grading
- Behavior monitoring during exams
4. EMPLOYMENT (← most common in SaaS!)
- Recruitment / CV screening
- Promotion / termination decisions
- Performance evaluation
- Worker monitoring with consequences
5. ESSENTIAL SERVICES
- Creditworthiness / credit scoring
- Insurance pricing
- Public benefits eligibility
- Emergency dispatching
- Healthcare access decisions
6. LAW ENFORCEMENT
- Recidivism risk assessment
- Polygraph / lie detection
- Evidence reliability
- Profiling for criminal offenses
7. MIGRATION & BORDER
- Visa decisions
- Asylum claims
- Border lie detection
8. JUSTICE & DEMOCRACY
- AI legal research for judges
- Election influence / political ad targeting
- Voting machines
YES → 🟠 HIGH RISK. Full compliance burden. €15M penalty.
NO → Q3
Q3: Does the AI interact with users in a user-facing way?
- Chatbot conversing with humans?
- User receives AI-generated content (deepfake, marketing copy)?
- Emotion recognition (informational, NOT workplace)?
- Biometric categorization (non-prohibited)?
YES → 🟡 LIMITED RISK. Transparency disclosure required.
NO → 🟢 MINIMAL RISK. No mandatory compliance.
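The three questions above can be sketched as a tiny classifier. This is an informal illustration, not legal logic: the tier names, flag names, and area labels are our own shorthand, not official EU AI Act identifiers.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the Q1-Q3 decision tree above.
# All labels below are informal shorthand, not official AI Act terms.

BANNED_PRACTICES = {
    "social_scoring", "realtime_biometric_id_public",
    "emotion_recognition_workplace_education",
    "predictive_policing_profiling", "untargeted_facial_scraping",
    "subliminal_manipulation", "vulnerability_exploitation",
}

ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration_border", "justice_democracy",
}

@dataclass
class AISystem:
    name: str
    practices: set = field(default_factory=set)        # Q1 input
    annex_iii_areas: set = field(default_factory=set)  # Q2 input
    user_facing: bool = False                          # Q3 input

def classify(system: AISystem) -> str:
    if system.practices & BANNED_PRACTICES:
        return "UNACCEPTABLE"   # Q1: banned practice -> Tier 0
    if system.annex_iii_areas & ANNEX_III_AREAS:
        return "HIGH_RISK"      # Q2: Annex III area -> Tier 1
    if system.user_facing:
        return "LIMITED"        # Q3: user-facing -> Tier 2
    return "MINIMAL"            # Tier 3

# Example: an HR ATS resume screener (Annex III #4, employment)
ats = AISystem("resume_screener", annex_iii_areas={"employment"})
print(classify(ats))  # HIGH_RISK
```

The order of the checks matters: a banned practice dominates everything, and an Annex III match dominates user-facing transparency, mirroring the tree's top-down flow.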
Common SaaS use cases — quick classification
The most common cases for SMB SaaS in the EU:
| Use case | Tier | Reasoning |
|---|---|---|
| Customer support chatbot (GPT/Claude) | 🟡 Limited | Transparency: "you're chatting with AI" required |
| Marketing copy generation (ChatGPT/Jasper) | 🟢 Minimal | No human decision impact |
| Lead scoring / sales AI ranking | 🟠 High* | *if scores affect employment decisions of clients |
| HR ATS resume screening | 🟠 HIGH | Annex III #4 employment — direct match |
| AI interview transcript analyzer | 🟠 HIGH | Annex III #4 worker evaluation |
| Credit scoring (B2C bank app) | 🟠 HIGH | Annex III #5 essential services |
| Recommendation engine (e-commerce) | 🟢 Minimal | No human decision per Annex III |
| AI medical imaging analysis | 🟠 HIGH | Annex III #5 essential services + healthcare |
| AI document classification (legal) | 🟢 / 🟠 | Minimal if internal, HIGH if used by judiciary |
| AI fraud detection (banking) | 🟠 HIGH | Annex III #5 essential services |
| AI-generated images / deepfakes | 🟡 Limited | Transparency: "AI-generated" label required |
| Sentiment analysis social media | 🟢 Minimal | No regulatory trigger |
| Spam filter / antivirus | 🟢 Minimal | No regulatory trigger |
| AI-powered code review | 🟢 Minimal | No human decision per Annex III |
A concrete example: TalentAI Sp. z o.o.
A Polish HR-tech startup, 18 employees, Series A, offering an AI ATS for recruiters.
3 AI systems:
System 1: AI Resume Screener
- Filters CVs before human review
- The recruiter sees the top 30%
Q1: Banned? NO.
Q2: Annex III #4 employment? YES: "screening or filtering applications, evaluating candidates" = direct match.
→ 🟠 HIGH RISK
System 2: AI Interview Transcript Analyzer
- Analyzes recruitment interview transcripts
- Gives the recruiter insights
Q1: Banned? NO.
Q2: Annex III #4 worker evaluation? YES: influences the recruiter's recommendation.
→ 🟠 HIGH RISK
System 3: Marketing Chatbot
- Customer-facing chatbot on the TalentAI website
- Answers questions about the product
Q1: Banned? NO.
Q2: Annex III? NO (no employment/credit/healthcare decisions).
Q3: User interacts with AI? YES → transparency disclosure required.
→ 🟡 LIMITED RISK
What this classification means for TalentAI
2 high-risk systems = TalentAI HAS significant compliance gaps. Before 02.08.2026 it must deliver:
- Risk management system documentation (16h legal + 8h internal)
- Technical documentation package (24h)
- Transparency notices (candidates + clients) (8h)
- Bias testing + remediation (16h + €1-2k specialist tool)
- Logging retention extended to 6 months (12h dev)
- Human oversight procedure (8h)
- CE marking (€3-5k legal + 4-8 weeks process)
Total estimated effort: 80-120h dev + €5-9k external costs + 3 months of intensive work.
Marketing chatbot (Limited): add a clear "You are chatting with an AI assistant" notice + an always-visible "Talk to a human" button = ~5h of work.
The most common classification pitfalls
Pitfall 1: "We have a chatbot = high-risk"
FALSE. A customer support chatbot with no Annex III decision impact = limited risk, not high-risk. A transparency notice is enough.
Pitfall 2: "Lead scoring = HR = high-risk"
Often false. Lead scoring for a CRM (qualifying inbound prospects) ≠ an employment decision. Annex III #4 covers screening JOB candidates, not lead qualification in sales.
Edge case: if your lead scoring AI is sold to clients who use it for recruitment = THEY are in Annex III #4, and YOU are the provider of the AI system → you also fall under Annex III.
Pitfall 3: "Marketing personalization = profiling = high-risk"
FALSE. Marketing personalization (recommendations) = minimal risk, even if it uses profiling, unless the profiling touches decisions in Annex III areas.
Pitfall 4: "Healthcare AI = automatically high-risk"
Mostly TRUE, with nuances. AI medical imaging analysis for diagnostics = high-risk (Annex III #5). BUT an AI booking system for a clinic = minimal. Decision impact decides, NOT the industry.
Pitfall 5: "The EU AI Act only applies to EU companies"
FALSE. It applies to any company that sells or uses AI in the EU, regardless of HQ. A US SaaS with EU clients = in scope. A Polish SaaS selling only in PL = in scope (Poland = EU).
What to do now, concretely
1. Inventory your AI systems (15 min)
List every AI used in your company:
- Customer-facing chatboty
- Internal AI tools (HR, productivity)
- Content generation
- Recommender systems
- Risk scoring / fraud detection
- Document processing
- Search / RAG systems
- Decision-making automation
2. Per-system decision tree (30 min)
Run each system through the decision tree above:
- Q1: Banned?
- Q2: Annex III area?
- Q3: User-facing?
Result: a risk category per system.
3. Gap analysis (1-2h per high-risk system)
For each high-risk system, check whether you have:
- Risk management documentation
- Data governance / bias testing
- Technical documentation
- Logging / audit trail
- Transparency notices
- Human oversight
- Accuracy testing records
- CE marking application
- Post-market monitoring
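The gap analysis above can be tracked with a trivial checklist script per high-risk system. The requirement labels mirror the list above and are informal shorthand, not regulatory identifiers.

```python
# Informal gap-analysis sketch: the labels mirror the checklist above
# and are shorthand, not official EU AI Act article names.

HIGH_RISK_REQUIREMENTS = [
    "risk_management_documentation",
    "data_governance_bias_testing",
    "technical_documentation",
    "logging_audit_trail",
    "transparency_notices",
    "human_oversight",
    "accuracy_testing_records",
    "ce_marking_application",
    "post_market_monitoring",
]

def gap_analysis(done: set) -> list:
    """Return the requirements still missing, in checklist order."""
    return [req for req in HIGH_RISK_REQUIREMENTS if req not in done]

# Example: only logging and transparency are in place so far
missing = gap_analysis({"logging_audit_trail", "transparency_notices"})
print(f"{len(missing)} of {len(HIGH_RISK_REQUIREMENTS)} requirements missing")
# 7 of 9 requirements missing
```

Run it once per high-risk system; the resulting lists feed directly into the prioritized roadmap in the next step.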
4. Remediation roadmap
Prioritized:
- Critical (deploy-blocker): violations risking immediate enforcement
- High (compliance-blocker): requirements due by 02.08.2026
- Medium (best practice): voluntary improvements
- Low (nice-to-have): future-proofing
5. Legal review
Every high-risk classification + roadmap must be reviewed by qualified EU AI Act counsel before implementation. You are NOT a lawyer (and neither am I).
Shortcut the process: the €799 audit
You can do this yourself in 4-6 weeks of internal time at a €9,500-14,500 cost (DIY self-assessment).
Or: a €799 audit, 4h of work on my side, clarity in 5 days.
Audit now, €799 →
Sources & references
- EU AI Act official text — artificialintelligenceact.eu
- Article 5 — Prohibited AI
- Article 6 — High-risk classification
- Article 50 — Transparency obligations
- Article 99 — Penalties
- Annex III — High-risk AI areas
- Annex IV — Technical documentation requirements
- Annex VIII — Database registration
- European Commission AI Act page
Disclaimer: This article is informational and does NOT replace legal advice. Classifying a specific AI system under the EU AI Act requires review by qualified EU AI Act counsel. The author (Piotr Reder / aiactaudit.pl) is not a lawyer.
Penalty details per Article 99 EU AI Act:
- €35M / 7% global turnover (unacceptable risk)
- €15M / 3% global turnover (high-risk)
- €7.5M / 1% global turnover (incorrect information)