If you're a Provider — what's specific to your role
For: provider
Tier: free+
Time: ~12 min
Why you'd do this
The Provider role (Art. 3(3)) carries the heaviest obligation load under the EU AI Act — roughly 175 actionable obligations across ~30 articles. Your role-filter view in the dashboard hides the rest, but the remaining work is still substantial. This chapter walks through the 6 workflows that consume the most setup time, in the order you typically meet them.
Before you start
- Read the Concept Primer chapter first — it defines the role and explains how risk classification gates which articles apply to you
- Have your AI system's intended-use documentation handy — most Provider obligations cite back to it (Art. 13, Art. 17, Art. 72)
- If you're a non-EU Provider placing your AI on the EU market, you ALSO need an Authorised Representative (Art. 22) — see that persona addendum; the AR carries delegated obligations on your behalf
Step 1
Workflow 1 — Risk classification (Art. 5, 6, 50)
Before any other Provider obligation kicks in, you must classify your AI system's risk band. The classification determines which articles apply at all:
- Prohibited (Art. 5) → cease and desist; no compliance work to do, the system can't be marketed in the EU
- High-risk (Art. 6 + Annex III) → full Art. 8-27 + 43 stack (~150 obligations of your 175)
- Limited-risk (Art. 50) → transparency obligations only (~10 obligations: inform users, label deepfakes, etc.)
- Minimal-risk → no specific obligations
If you're unsure which band you fall into, the dashboard's Risk Classification Guide wizard walks you through Annex III categories. See chapter 14 (Risk classification setup) for the step-by-step UI flow.
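The band-to-obligations gating above can be sketched as a simple lookup. This is an illustrative model only, not ComplianceLint's actual engine; the RiskBand and OBLIGATION_SCOPE names are hypothetical, and the article groups mirror the list above.

```python
from enum import Enum

class RiskBand(Enum):
    PROHIBITED = "prohibited"  # Art. 5: cannot be marketed in the EU
    HIGH = "high"              # Art. 6 + Annex III
    LIMITED = "limited"        # Art. 50 transparency only
    MINIMAL = "minimal"        # no specific obligations

# Hypothetical mapping from risk band to the article groups that apply,
# matching the bullet list in this workflow.
OBLIGATION_SCOPE = {
    RiskBand.PROHIBITED: [],            # cease and desist; nothing to comply with
    RiskBand.HIGH: ["Art. 8-27", "Art. 43"],
    RiskBand.LIMITED: ["Art. 50"],
    RiskBand.MINIMAL: [],
}

def applicable_articles(band: RiskBand) -> list[str]:
    """Return the article groups a Provider must work through for a band."""
    return OBLIGATION_SCOPE[band]
```

The point of the sketch: everything downstream of Step 1 is conditional on this one lookup, which is why classification has to come first.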
Step 2
Workflow 2 — Technical documentation (Art. 11 + Annex IV)
Annex IV lists 9 sections of mandatory technical documentation: general description, design specs, training-data methodology, validation procedures, risk-mgmt evidence, change log, post-market monitoring plan, instructions for use, and the EU declaration of conformity itself.
ComplianceLint's Compliance All-in-One Pack export (Business+) produces a regulator-ready bundle that covers Annex IV — see chapter 23. For free/starter/pro tiers, the per-article PDFs cover the same content but you assemble them yourself.
Note: the single-document path (Art. 11(2)) is a simplification available only when your system is NOT an Annex I product (medical device, machinery, etc.). The Profiling Wizard (Starter+) auto-derives this distinction.
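If you're on a tier where you assemble the per-article PDFs yourself, a checklist against the 9 Annex IV sections keeps the bundle complete. A minimal sketch (section names paraphrased from the list above; the helper is hypothetical, not a ComplianceLint feature):

```python
# The 9 Annex IV sections named in this workflow, paraphrased.
ANNEX_IV_SECTIONS = [
    "general description",
    "design specs",
    "training-data methodology",
    "validation procedures",
    "risk-management evidence",
    "change log",
    "post-market monitoring plan",
    "instructions for use",
    "EU declaration of conformity",
]

def missing_sections(assembled: set[str]) -> list[str]:
    """Return Annex IV sections not yet present in a self-assembled bundle."""
    return [s for s in ANNEX_IV_SECTIONS if s not in assembled]
```

Running missing_sections against the set of documents you've already produced tells you what's left before the bundle is regulator-ready.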
Step 3
Workflow 3 — Quality management system (Art. 17)
Providers must establish a documented QMS covering: change control, data management, post-market monitoring procedures, examination and review processes, accountability assignment, and the entire AI lifecycle from design through retirement.
QMS evidence in ComplianceLint is collected through the Human Gates questionnaire for Art. 17 (Pro+ tier). The questionnaire walks through all 13 sub-clauses; your answers persist across scans, so you don't need to re-enter them each time.
If your organisation is an SME (per Art. 99(6)), the QMS may be implemented in a simplified manner — the dashboard's Org size setting (chapter 15) feeds this into the per-article applicability engine.
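The simplified-QMS branch can be illustrated as a gate on org size. The thresholds below are an assumption for illustration (they follow the general EU SME definition of fewer than 250 employees and turnover up to EUR 50m); check Art. 99(6) and the dashboard's Org size setting for the authoritative gating.

```python
def qms_mode(employee_count: int, annual_turnover_eur: float) -> str:
    """Hypothetical sketch: pick full vs simplified QMS based on SME status.

    Thresholds are illustrative assumptions (general EU SME definition),
    not the values ComplianceLint's applicability engine uses.
    """
    is_sme = employee_count < 250 and annual_turnover_eur <= 50_000_000
    return "simplified" if is_sme else "full"
```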
Step 4
Workflow 4 — Conformity assessment (Art. 43)
Before placing a high-risk AI on the EU market, Providers must complete a conformity assessment. Two routes exist depending on Annex III sub-category:
- Self-assessment based on internal control (Annex VI) — available for most Annex III categories where harmonised standards exist or have been applied in full
- Third-party (notified body) assessment (Annex VII) — required for biometrics (Art. 6(1) + Annex III §1) and any case where harmonised standards weren't fully applied
The dashboard's per-article view for Art. 43 surfaces which route applies based on your declared Annex III sub-category. ComplianceLint generates the EU Declaration of Conformity (Pro+ via Human Gates → Declaration PDF) and the Technical Documentation (same path → Technical Doc PDF) — both are regulator-facing artifacts.
Note: notified body certification itself is OUTSIDE ComplianceLint's scope. We produce the dossier; you submit it to the body.
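The route selection described above reduces to two conditions: biometrics forces the notified-body route, and so does any gap in applying harmonised standards. A minimal sketch (function and parameter names are hypothetical, not part of the dashboard):

```python
def conformity_route(is_biometrics: bool,
                     harmonised_standards_fully_applied: bool) -> str:
    """Pick the Art. 43 assessment route per the rules in this workflow.

    Annex VII (notified body) applies for biometrics (Annex III §1) or
    whenever harmonised standards weren't applied in full; otherwise
    self-assessment under internal control (Annex VI) is available.
    """
    if is_biometrics or not harmonised_standards_fully_applied:
        return "Annex VII (notified body)"
    return "Annex VI (internal control)"
```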
Step 5
Workflow 5 — Post-market monitoring (Art. 72) + serious-incident reporting (Art. 73)
Once your AI is on the market, you must operate a documented post-market monitoring system: collect performance data over the system's lifetime, analyse it, and feed findings back into your QMS. Serious incidents (Art. 73) — those leading to death, serious health harm, infrastructure disruption, a fundamental-rights breach, or property damage — must be reported to the market-surveillance authority within 15 days of your becoming aware of them (Art. 73(2)), or within 2 days for widespread or fatal incidents.
ComplianceLint's role here is procedural — the Human Gates questionnaires for Art. 72 and Art. 73 collect your monitoring plan and incident-response procedures. Actual incident reporting happens through national-authority portals, not ComplianceLint.
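The reporting clock described above is easy to get wrong under pressure, so it's worth encoding. A sketch of the deadline arithmetic using only the two deadlines this chapter cites (the function name is hypothetical):

```python
from datetime import date, timedelta

def reporting_deadline(aware_on: date, widespread_or_fatal: bool) -> date:
    """Deadline for notifying the market-surveillance authority.

    15 days from awareness in the general case (Art. 73(2)), 2 days for
    widespread or fatal incidents, per the deadlines cited above.
    """
    days = 2 if widespread_or_fatal else 15
    return aware_on + timedelta(days=days)
```

For example, awareness on 1 January in the general case gives a 16 January deadline; a widespread incident gives 3 January.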
Step 6
Workflow 6 — GPAI obligations (Art. 51-55) — only if you're a GPAI provider
If your AI system is a General-Purpose AI model (Art. 3(63)) — broadly: a model capable of competently performing a wide range of distinct tasks, not narrowly scoped — you have an additional 12 obligations stacked on top of the high-risk set. The Profiling Wizard's is_gpai flag (Starter+) gates these on/off.
Key GPAI-specific obligations:
- Art. 51 — Classification thresholds: declare whether your model crosses the systemic-risk FLOPs threshold (10²⁵ FLOPs training compute)
- Art. 53 — Provider obligations: technical documentation covering architecture + training data summaries; copyright policy compliance; downstream-provider documentation
- Art. 54 — Authorised representative: GPAI providers established outside the EU MUST appoint one (the obligation is stricter than for non-GPAI Providers)
- Art. 55 — Systemic-risk additional obligations: model evaluation, adversarial testing, incident tracking, cybersecurity protection — all enforceable from 2025-08-02
If you're not a GPAI provider, set is_gpai = false in the Profiling Wizard — the dashboard auto-marks all 12 GPAI obligations as NOT_APPLICABLE and they disappear from your view.
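The Art. 51 systemic-risk gate is a single numeric threshold, so it can be sketched directly. This is an illustration of the declaration logic, not a ComplianceLint API; the constant and function names are hypothetical.

```python
# Art. 51 systemic-risk threshold cited above: 10^25 FLOPs of training compute.
SYSTEMIC_RISK_FLOPS = 1e25

def crosses_systemic_threshold(training_flops: float) -> bool:
    """True if the model's training compute triggers the Art. 55 stack."""
    return training_flops >= SYSTEMIC_RISK_FLOPS
```

A model trained with 3e25 FLOPs would trigger the Art. 55 obligations (evaluation, adversarial testing, incident tracking, cybersecurity); one at 1e24 would not.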
What can go wrong
- Dashboard shows ~244 obligations for your repo even though you set risk_classification = limited-risk — You're on the Free tier; applicability customization (auto-NA based on profile signals) is gated to Starter+ (the applicabilityCustomization flag). Free shows the worst-case universal view. Either upgrade or manually mark NA via cl_update_finding action="rebut".
- You're a non-EU Provider and unsure which obligations transfer to your Authorised Representative — Per Art. 22(3) + Art. 25, the AR is responsible for: keeping the EU declaration of conformity and technical documentation available to authorities (10 years), cooperating with authorities, and terminating the mandate if the Provider acts contrary to the AI Act. Your CORE design, risk, and post-market obligations stay with you as Provider. The AR is your EU contact point, not a co-Provider.
Related
- concept-primer
- risk-classification-setup
- applicability-customization
- human-gates
- compliance-all-in-one-pack
- persona-authorised-representative
Last updated: 2026-04-30