If you're a Deployer — what's specific to your role
For: deployer
Tier: free+
Time: ~7 min
Why you'd do this
Deployer (Art. 3(4)) is the role for organisations using an AI system under their own authority in the course of a professional activity — typically buying a vendor's API or integrating a model into a product. You inherit lighter obligations than the Provider (~24 actionable items vs ~175), but several are non-trivial: human oversight, FRIA, log retention, and informing natural persons subjected to the system.
Before you start
- Confirm with your Provider that the AI system is high-risk under Art. 6 + Annex III — most Deployer obligations only kick in for high-risk systems (Art. 50 transparency obligations apply more broadly)
- Have the Provider's instructions for use (Art. 13) on hand — most of your Art. 26 obligations cite back to it
Step 1
Workflow 1 — Use the AI system per the Provider's instructions (Art. 26(1))
The bedrock Deployer obligation: operate the AI system according to its accompanying instructions (Art. 13 instructions for use). Reasonable adaptations are allowed, but they must be documented as deviations.
ComplianceLint's Art. 26 Human Gates questionnaire captures: which instructions you received, your operational policies that mirror them, your training material for staff who use the system, and any documented deviations.
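If you keep that deviation record in machine-readable form, a minimal sketch might look like the following (the DeviationRecord structure and its field names are illustrative assumptions, not a ComplianceLint or Art. 26 format):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DeviationRecord:
    """One documented deviation from the Provider's Art. 13 instructions for use."""
    instruction_ref: str   # section of the Provider's instructions being deviated from
    description: str       # what you do differently in operation
    rationale: str         # why the adaptation is reasonable
    approved_by: str       # accountable role that signed off
    recorded_on: date = field(default_factory=date.today)

deviations = [
    DeviationRecord(
        instruction_ref="Instructions for use, section 4.2 (input pre-processing)",
        description="Inputs are batched nightly instead of scored in real time",
        rationale="Decisions are only actioned on the next business day",
        approved_by="Head of Operations",
    ),
]
```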
Step 2
Workflow 2 — Human oversight (Art. 14 + Art. 26(2))
Art. 14 lays out the Provider's obligation to design FOR human oversight; Art. 26(2) is the matching Deployer obligation to actually ASSIGN named humans with the necessary competence and authority to perform that oversight in operations.
Concrete deliverables you maintain:
- Named oversight roles (with backup) — who can override outputs?
- Their competence record (training, certification, refresher cadence)
- Operating procedures: when does an output trigger human review before action?
- Escalation path when oversight personnel disagree with the AI
ComplianceLint's Art. 26 questionnaire collects all four.
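One way to hold those four deliverables together is a small oversight register. The sketch below is illustrative only; the OversightRole class and its fields are assumptions, not the questionnaire's schema:

```python
from dataclasses import dataclass

@dataclass
class OversightRole:
    """Named human oversight assignment under Art. 26(2)."""
    name: str                 # named person responsible for oversight
    backup: str               # named backup
    can_override: bool        # authority to override or disregard outputs
    competence_evidence: str  # training / certification record and refresher cadence
    review_trigger: str       # when an output requires human review before action
    escalation_path: str      # who decides when oversight personnel disagree with the AI

register = [
    OversightRole(
        name="A. Reviewer",
        backup="B. Reviewer",
        can_override=True,
        competence_evidence="Vendor training 2026-01, annual refresher",
        review_trigger="Any adverse decision affecting a natural person",
        escalation_path="Compliance lead, then DPO",
    ),
]
```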
Step 3
Workflow 3 — Fundamental Rights Impact Assessment (Art. 27)
FRIA is mandatory for Deployers that are public bodies, private operators providing public services, or any Deployer using high-risk systems for credit-scoring (Annex III §5(b)) or life/health insurance pricing (Annex III §5(c)).
The assessment must describe: the deployment's purpose; the time frame and frequency of use; the categories of natural persons potentially affected; identification of foreseeable risks of harm; human oversight measures; risk-mitigation steps including specific complaint and redress mechanisms.
ComplianceLint's Art. 27 Human Gates questionnaire walks all six sections. Output is a stand-alone PDF that you submit to your national market-surveillance authority via their portal — ComplianceLint does not submit on your behalf.
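If you want to draft the six sections before opening the questionnaire, a plain dictionary is enough to keep them in one place; the keys below simply mirror the Art. 27 elements listed above and are not a ComplianceLint schema:

```python
fria_draft = {
    "purpose": "Automated creditworthiness scoring for consumer loans",
    "time_frame_and_frequency": "Continuous use, ~500 decisions per day",
    "affected_persons": ["Loan applicants", "Co-signers"],
    "foreseeable_risks": ["Discriminatory refusal rates for protected groups"],
    "human_oversight_measures": "All refusals reviewed by a credit officer",
    "mitigation_and_redress": "Quarterly bias audit; complaint form and appeal route",
}

# Sanity check before export: every Art. 27 section has content
assert all(fria_draft.values()), "Every FRIA section needs content"
```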
Step 4
Workflow 4 — Automated log retention (Art. 26(6))
Deployers must keep the AI system's automatically generated logs for at least 6 months (longer if national or EU law requires). Logs cover the events the Provider's Art. 12 design produced — you don't add new logging, but you must store and protect what the system already emits.
ComplianceLint doesn't retain your AI's runtime logs (we never see them). The Art. 26 questionnaire records WHERE you store them, WHO has access, WHAT the retention policy is, and how you'd produce them on request to an authority.
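The six-month floor is easy to verify mechanically. A minimal sketch, assuming your log store exposes each object's creation date (the 183-day approximation of six months is an assumption; national law may require a longer period):

```python
from datetime import date, timedelta

MIN_RETENTION = timedelta(days=183)  # at least six months under Art. 26(6)

def earliest_deletable(created_on: date) -> date:
    """Earliest date a log object may be purged without breaching the retention floor."""
    return created_on + MIN_RETENTION

# Example: a lifecycle / clean-up job should skip anything still inside the window
log_created = date(2026, 1, 15)
if date.today() < earliest_deletable(log_created):
    print("Retain: object is still inside the mandatory retention window")
```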
Step 5
Workflow 5 — Inform affected natural persons (Art. 50)
Even outside high-risk classification, certain transparency obligations attach to Deployers:
- Chatbots / conversational AI (Art. 50(1)): users must be informed they're interacting with an AI unless this is obvious from context
- Emotion-recognition / biometric categorisation (Art. 50(3)): subjects must be informed of the operation
- Deepfakes (Art. 50(4)): visibly disclose that content is artificially generated/manipulated, unless used for art / criticism / parody
- AI-generated public-interest text (Art. 50(4) second clause): disclose unless reviewed by a human and an editorial entity takes responsibility
These are surface-level UX obligations — usually a banner or watermark. ComplianceLint's Art. 50 Human Gates questionnaire collects screenshots / URL of where the disclosure appears.
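For a chatbot, the Art. 50(1) notice can be as simple as a fixed line attached to the first response. A minimal sketch, assuming your own respond() / call_model() integration (both names are hypothetical, not a vendor or ComplianceLint API):

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def call_model(user_message: str) -> str:
    """Placeholder for your existing model integration."""
    return "…model output…"

def respond(user_message: str, history: list[str]) -> str:
    """Prepend the Art. 50(1) notice on the first turn of a conversation."""
    answer = call_model(user_message)
    if not history:  # first turn: show the disclosure
        return f"{AI_DISCLOSURE}\n\n{answer}"
    return answer
```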
Step 6
Other Deployer-relevant obligations
- Art. 4 — AI literacy: ensure staff using the AI have a sufficient level of AI literacy. Practical: training records for any role that interacts with the system. Applies to Provider AND Deployer.
- Art. 26(11) — DPIA interaction: where a DPIA under GDPR Art. 35 is required, use the FRIA output to inform it. Most high-risk deployments trigger both.
- Art. 86 — Right to explanation: affected natural persons have a right to a clear and meaningful explanation of the role of the AI system in the decision-making procedure. Your operational procedures need to support producing this on request.
- Art. 99 — Penalty regime: SMEs (per Art. 99(6)) get the lower-of-the-two penalty calculation, i.e. min(fixed cap, revenue × %); set your org size in Compliance Profile (chapter 15) to have the dashboard reflect this. A worked sketch follows this list.
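A worked sketch of the lower-of-the-two rule (the cap figures used here are illustrative, not the amounts that necessarily apply to your infringement tier):

```python
def sme_cap(fixed_cap_eur: float, annual_turnover_eur: float, turnover_pct: float) -> float:
    """Art. 99(6): for SMEs the applicable maximum is the LOWER of the fixed amount
    and the turnover-based amount (other operators face the higher of the two)."""
    return min(fixed_cap_eur, annual_turnover_eur * turnover_pct)

# Illustrative figures only: a 2 M EUR turnover SME against a hypothetical
# 15 M EUR / 3 % tier
print(sme_cap(15_000_000, 2_000_000, 0.03))  # -> 60000.0 (turnover-based amount is lower)
```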
What can go wrong
- You're using a high-risk AI from a non-EU Provider — who handles the EU compliance contact? — The non-EU Provider must designate an Authorised Representative (Art. 22) before placing the system on the EU market. As Deployer you can request the AR's contact details from the Provider and should have them on file.
- Your AI system isn't on Annex III but you still see Art. 50 obligations applying — Art. 50 transparency obligations apply to all AI systems with the specific surface (chatbot / emotion / biometric / deepfake / public-interest text) regardless of high-risk status. The Profiling Wizard's generates_synthetic_content flag (Starter+) refines the deepfake-specific subset.
Last updated: 2026-04-30