// EU AI ACT · DECISION TREE

Is my AI high-risk under Annex III? A decision tree for EU SMB SaaS

Date: 2026-05-03 · Author: Piotr Reder, aiactaudit.pl · Reading time: 8 min
TL;DR

The EU AI Act divides AI into 4 categories: unacceptable / high-risk / limited / minimal. Most SaaS for SMBs is minimal or limited = low compliance burden. But if your AI touches recruitment, credit, education, healthcare, biometrics, critical infrastructure, law enforcement, migration, or courts = HIGH-RISK = full compliance burden + a €15M / 3% turnover penalty.

The decision tree below will help you work it out in 5 minutes.

Why this is urgent

02.08.2026 = the EU AI Act enforcement deadline for high-risk systems (Annex III).

Over 60% of EU SMBs have not started compliance work (multiple compliance surveys from 2025-2026: Conversantech, SoftwareSeni; in late 2025 the Center for Data Innovation reported that "fewer than 30% have taken any steps").

Penalty for a high-risk violation: €15M or 3% of global turnover (Article 99 EU AI Act).

The Digital Omnibus of 13.05.2026 may push the deadline back to 02.12.2027, but that is not certain (the 28.04.2026 trilogue ended without agreement). Hoping for a deferral ≠ a compliance strategy.

The 4 risk categories of the EU AI Act

🔴 Tier 0: UNACCEPTABLE RISK — banned

Banned AI systems (Article 5): social scoring by governments, real-time remote biometric ID in public spaces, emotion recognition in the workplace and education, predictive policing based solely on profiling, untargeted facial scraping, subliminal manipulation, and exploitation of vulnerabilities (full list under Q1 of the decision tree below).

Action: STOP immediately. Cease deployment. Penalty: €35M / 7% global turnover.

🟠 Tier 1: HIGH RISK — Annex III, full compliance burden

8 areas (see the decision tree below).

Compliance requirements: risk management system, data governance, technical documentation, logging / record-keeping, human oversight, accuracy / robustness / cybersecurity, conformity assessment, and registration in the EU database (Articles 9-17, 43, 49).

Cost: Self-assessment €9,500-14,500 + 4-6 weeks internal time. Plus €5-9k legal review.

Penalty: €15M / 3% global turnover.

🟡 Tier 2: LIMITED RISK — transparency only

Compliance: Transparency disclosure — user must know they're interacting with AI / content is AI-generated.

Cost: ~€500-2,000 (mostly UI text + terms & conditions updates).

🟢 Tier 3: MINIMAL RISK — no obligations

Compliance: Voluntary code of conduct (optional). NOT mandatory.

Cost: €0.

Decision tree: is your AI high-risk?

START

Q1: Is the AI BANNED (Tier 0)?
    ├─ Social scoring by gov?
    ├─ Real-time remote biometric ID in public spaces (for law enforcement, narrow exceptions)?
    ├─ Emotion recognition in the workplace / education?
    ├─ Predictive policing solely on profiling?
    ├─ Untargeted facial scraping?
    ├─ Subliminal manipulation?
    └─ Exploitation of vulnerabilities?

    YES → 🔴 UNACCEPTABLE. STOP. Cease deployment. €35M penalty.
    NO  → Q2

Q2: Does the AI make or affect decisions in 1 of the 8 Annex III areas?

    1. BIOMETRICS
       - Categorization by sensitive attributes (race, religion, political views)
       - Emotion recognition (outside prohibited contexts)

    2. CRITICAL INFRASTRUCTURE
       - Road traffic, water, gas, electricity, digital infrastructure

    3. EDUCATION
       - Admission decisions
       - Exam scoring / grading
       - Behavior monitoring during exams

    4. EMPLOYMENT (← most common in SaaS!)
       - Recruitment / CV screening
       - Promotion / termination decisions
       - Performance evaluation
       - Worker monitoring with consequences

    5. ESSENTIAL SERVICES
       - Creditworthiness / credit scoring
       - Insurance pricing
       - Public benefits eligibility
       - Emergency dispatching
       - Healthcare access decisions

    6. LAW ENFORCEMENT
       - Recidivism risk assessment
       - Polygraph / lie detection
       - Evidence reliability
       - Profiling for criminal offenses

    7. MIGRATION & BORDER
       - Visa decisions
       - Asylum claims
       - Border lie detection

    8. JUSTICE & DEMOCRACY
       - AI legal research for judges
       - Election influence / political ad targeting
       - Voting machines

    YES → 🟠 HIGH RISK. Full compliance burden. €15M penalty.
    NO  → Q3

Q3: Does the AI interact with users in a user-facing way?
    - Chatbot conversing with humans?
    - User receives AI-generated content (deepfake, marketing copy)?
    - Emotion recognition (informational, NOT workplace)?
    - Biometric categorization (non-prohibited)?

    YES → 🟡 LIMITED RISK. Transparency disclosure required.
    NO  → 🟢 MINIMAL RISK. No mandatory compliance.
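
If you keep an internal AI inventory, the same three questions can be encoded in a few lines. Below is a minimal, illustrative Python sketch; the function name, its boolean inputs and the returned labels are my own shorthand for the tree above, not a legal test.

# Illustrative sketch of the decision tree above. The tier labels follow the
# EU AI Act; the function and its inputs are hypothetical, not a legal test.

def classify_ai_system(banned_practice: bool,
                       annex_iii_area: bool,
                       user_facing: bool) -> str:
    """Return the risk tier for one AI system, following Q1 -> Q2 -> Q3."""
    if banned_practice:        # Q1: Article 5 prohibited practice
        return "UNACCEPTABLE (Tier 0): cease deployment"
    if annex_iii_area:         # Q2: decision in one of the 8 Annex III areas
        return "HIGH RISK (Tier 1): full compliance burden"
    if user_facing:            # Q3: chatbot, AI-generated content, etc.
        return "LIMITED RISK (Tier 2): transparency disclosure"
    return "MINIMAL RISK (Tier 3): no mandatory obligations"

# Example: an HR resume screener (Annex III #4 employment)
print(classify_ai_system(banned_practice=False,
                         annex_iii_area=True,
                         user_facing=True))
# -> HIGH RISK (Tier 1): full compliance burden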

Common SaaS use cases — quick classification

The most common cases for SMB SaaS in the EU:

Use case | Tier | Reasoning
Customer support chatbot (GPT/Claude) | 🟡 Limited | Transparency: "you're chatting with AI" required
Marketing copy generation (ChatGPT/Jasper) | 🟢 Minimal | No human decision impact
Lead scoring / sales AI ranking | 🟠 High* | *if scores affect clients' employment decisions
HR ATS resume screening | 🟠 HIGH | Annex III #4 employment: direct match
AI interview transcript analyzer | 🟠 HIGH | Annex III #4 worker evaluation
Credit scoring (B2C bank app) | 🟠 HIGH | Annex III #5 essential services
Recommendation engine (e-commerce) | 🟢 Minimal | No human decision per Annex III
AI medical imaging analysis | 🟠 HIGH | Annex III #5 essential services + healthcare
AI document classification (legal) | 🟢 / 🟠 | Minimal if internal, HIGH if used by the judiciary
AI fraud detection (banking) | 🟠 HIGH | Annex III #5 essential services
AI-generated images / deepfakes | 🟡 Limited | Transparency: "AI-generated" label required
Sentiment analysis (social media) | 🟢 Minimal | No regulatory trigger
Spam filter / antivirus | 🟢 Minimal | No regulatory trigger
AI-powered code review | 🟢 Minimal | No human decision per Annex III

Pattern: wherever AI touches decisions about people (employment, credit, education, benefits, law, healthcare) = high-risk. Everything else = usually minimal / limited.

A concrete example: TalentAI Sp. z o.o.

A Polish HR-tech startup, 18 employees, Series A, offering an AI ATS for recruiters.

3 AI systems:

System 1: AI Resume Screener

Q1: Banned? NO.
Q2: Annex III #4 employment? YES: "screening or filtering applications, evaluating candidates" = direct match.
🟠 HIGH RISK

System 2: AI Interview Transcript Analyzer

Q1: Banned? NO.
Q2: Annex III #4 worker evaluation? YES: it feeds into the recruiter's recommendation.
🟠 HIGH RISK

System 3: Marketing Chatbot

Q1: Banned? NO.
Q2: Annex III? NO (no employment / credit / healthcare decisions).
Q3: Does the user interact with the AI? YES → transparency disclosure required.
🟡 LIMITED RISK

What this classification means for TalentAI

2 high-risk systems = TalentAI HAS significant compliance gaps. Before 02.08.2026 it must close the high-risk requirements listed above for both systems: risk management, data governance, technical documentation, logging, human oversight, conformity assessment, and EU database registration.

Total estimated effort: 80-120h of dev work + €5-9k in external costs + 3 months of intensive work.

Marketing chatbot (Limited): add a clear "You're chatting with an AI assistant" notice + an always-visible "Talk to a human" button = ~5h of work.
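
To picture how small that fix is, here is a hypothetical Python sketch of the idea: every bot reply ships with the AI disclosure plus a human-handoff action. The wording, function name and dict fields are illustrative assumptions, not prescribed by the Act.

# Hypothetical sketch: attach the transparency notice and a human-handoff
# option to every chatbot reply. Strings and structure are illustrative only.

AI_DISCLOSURE = "You're chatting with an AI assistant."

def wrap_bot_reply(reply_text: str) -> dict:
    """Bundle one bot reply with the AI disclosure and an escalation action."""
    return {
        "disclosure": AI_DISCLOSURE,     # shown persistently in the chat UI
        "message": reply_text,
        "actions": ["Talk to a human"],  # always-visible escalation button
    }

print(wrap_bot_reply("Our pricing starts at EUR 49/month."))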

The most common classification pitfalls

Pitfall 1: "We have a chatbot = high-risk"

FALSE. A customer support chatbot with no Annex III decision impact = limited risk, not high-risk. A transparency notice is enough.

Pitfall 2: "Lead scoring = HR = high-risk"

Often false. Lead scoring in a CRM (qualifying inbound prospects) ≠ an employment decision. Annex III #4 covers screening candidates for JOBS, not lead qualification in sales.

Edge case: if you sell your lead scoring AI to clients who use it for recruitment = THEY are in Annex III #4, and YOU are the provider of the AI system → you fall under Annex III too.

Pitfall 3: "Marketing personalization = profiling = high-risk"

FALSE. Marketing personalization (recommendations) = minimal risk, even if it uses profiling, unless the profile feeds decisions in Annex III areas.

Pitfall 4: "Healthcare AI = automatically high-risk"

Mostly TRUE, but with nuances. AI medical imaging analysis for diagnostics = high-risk (Annex III #5). BUT an AI booking system for a clinic = minimal. The decision impact decides, NOT the industry.

Pitfall 5: "The EU AI Act only applies to EU companies"

FALSE. It applies to every company that sells or uses AI in the EU, regardless of HQ. A US SaaS with EU customers = in scope. A Polish SaaS selling only in PL = in scope (Poland = EU).

What to do now, concretely

1. Inventory your AI systems (15 min)

List every AI system used in your company, whether customer-facing, internal, or a third-party API; a minimal inventory sketch follows below.
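
A minimal sketch of such an inventory in Python; the field names (name, vendor, purpose, affects_people_decisions) are my own suggestion, not a required format.

# Hypothetical AI inventory: one entry per system, fields are a suggestion.
ai_inventory = [
    {"name": "Support chatbot", "vendor": "OpenAI API",
     "purpose": "customer support", "affects_people_decisions": False},
    {"name": "Resume screener", "vendor": "in-house model",
     "purpose": "CV screening for clients", "affects_people_decisions": True},
    {"name": "Marketing copy generator", "vendor": "Claude API",
     "purpose": "internal content drafts", "affects_people_decisions": False},
]

for system in ai_inventory:
    print(system["name"], "-", system["purpose"])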

2. Per-system decision tree (30 min)

Run each system through the decision tree above: Q1 (banned?), Q2 (Annex III area?), Q3 (user-facing?).

Result: a risk category per system.
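
A sketch of this step, condensing the three questions from the tree into a loop over the inventory; the field names and example answers are assumptions for illustration only.

# Hypothetical: each inventory entry records its Q1/Q2/Q3 answers, and the
# tier is derived exactly as in the decision tree above.

systems = [
    {"name": "Resume screener", "banned": False, "annex_iii": True,  "user_facing": True},
    {"name": "Support chatbot", "banned": False, "annex_iii": False, "user_facing": True},
    {"name": "Spam filter",     "banned": False, "annex_iii": False, "user_facing": False},
]

def tier(s: dict) -> str:
    if s["banned"]:
        return "UNACCEPTABLE"
    if s["annex_iii"]:
        return "HIGH RISK"
    if s["user_facing"]:
        return "LIMITED RISK"
    return "MINIMAL RISK"

for s in systems:
    print(f'{s["name"]}: {tier(s)}')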

3. Gap analysis (1-2h per high-risk system)

For each high-risk system, check whether you have: a risk management system, data governance documentation, technical documentation, logging, a human oversight procedure, accuracy / robustness testing, a conformity assessment, and EU database registration (a checklist sketch follows below).
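
A checklist can be as simple as a list of required artifacts and a diff against what you already have. A hypothetical Python sketch (the artifact names mirror the high-risk requirements listed earlier; the structure itself is just a suggestion):

# Hypothetical gap-analysis checklist for one high-risk system. The items
# mirror the main high-risk obligations; the data structure is a suggestion.

REQUIRED_ARTIFACTS = [
    "risk management system",
    "data governance documentation",
    "technical documentation",
    "logging / record-keeping",
    "human oversight procedure",
    "accuracy & robustness testing",
    "conformity assessment",
    "EU database registration",
]

def gap_analysis(have: set) -> list:
    """Return the artifacts still missing for one high-risk system."""
    return [item for item in REQUIRED_ARTIFACTS if item not in have]

# Example: a system that so far only has technical documentation and logging
print(gap_analysis({"technical documentation", "logging / record-keeping"}))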

4. Roadmap fix

Prioritized: high-risk systems first (largest penalty exposure and longest lead time), then limited-risk transparency fixes; minimal-risk systems need nothing.

5. Legal review

Every high-risk classification + roadmap should be reviewed by qualified EU AI Act counsel before implementation. You are NOT a lawyer (and neither am I).

Shorten this process: the €799 audit

You can do this yourself in 4-6 weeks of internal time + €9,500-14,500 in cost (DIY self-assessment).

Or: a €799 audit, 4h of work on my side, clarity in 5 days.

Audit now €799 →

Sources & references

Disclaimer: This article is informational and does NOT replace legal advice. Classifying a specific AI system under the EU AI Act requires review by qualified EU AI Act counsel. The author (Piotr Reder / aiactaudit.pl) is not a lawyer.

Penalty information per Article 99 EU AI Act: up to €35M or 7% of global turnover for prohibited practices (Tier 0); up to €15M or 3% for violations of high-risk obligations.