MCP commands reference — the 17 cl_* tools
For: all
Tier: free+
Time: ~7 min
Why you'd do this
ComplianceLint is MCP-only — every interaction with the scanner side happens through one of 17 cl_* tools surfaced by the MCP server. This chapter is reference material; bookmark it once you have a feel for the day-one path (Quick Start) and consult it per tool as needed.
Before you start
- MCP server connected (any MCP host: VS Code Claude Code, Cursor, etc.)
- Tools surface in your IDE's tool palette under names matching the cl_* prefix
Step 1
Connection & meta (4 tools)
These tools establish or check the link between your local IDE and your SaaS account.
| Tool | What it does | Notes |
|---|---|---|
| cl_connect | Opens your browser to sign in with GitHub or Google. API key auto-saved to .compliancelintrc after sign-in. | No email arg required (legacy email param deprecated). Pass switch_account=true to re-authenticate as a different account. |
| cl_disconnect | Removes API key + connection config from .compliancelintrc. | Local scan data in .compliancelint/ is preserved (use cl_delete to remove that). |
| cl_version | Returns scanner version + tool count + checks for updates. | Lightweight — safe to call at any time. |
| cl_report_bug | Bundles your recent scanner logs + environment info into a Markdown file at ~/compliancelint-bugreport-{timestamp}.md. | Privacy-scrubbed (home paths collapsed to ~, emails/IPs redacted, no source code). You attach the file manually to a GitHub issue or email — nothing auto-uploads. |
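Every cl_* invocation is, on the wire, a standard MCP tools/call request; only the tool name and its arguments are ComplianceLint-specific. A minimal sketch of the JSON-RPC envelope an MCP host sends (the helper name build_tool_call is ours, not part of any SDK):

```python
import json

def build_tool_call(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Build a standard MCP JSON-RPC 2.0 tools/call request envelope."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Re-authenticate as a different account (see the cl_connect notes above).
payload = build_tool_call("cl_connect", {"switch_account": True})
```

Your MCP host builds and sends this for you — the sketch is only to show where the tool name and arguments from the tables in this chapter end up.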
Step 2
Scanning (4 tools)
The core analysis loop: run cl_analyze_project first to give the AI the project skeleton, then either cl_scan (full findings for one or more specific articles) or cl_scan_all (a summary across all articles). cl_explain gives a plain-language reading of an article.
| Tool | What it does | Important detail |
|---|---|---|
| cl_analyze_project | Returns project metadata (directory tree, manifests, source samples) as a starting point for scanning. | The response is a skeleton — only ~5 files × 2KB sampled. The AI MUST then Grep + Read across the full codebase before filling compliance_answers. Filling answers from samples alone produces unreliable verdicts (the docstring is emphatic about this). |
| cl_scan | Scans one or more articles from a regulation, returns full findings. | articles param accepts "all" (default), single number "12", comma list "9,12,14", or JSON array "[9, 12, 14]". Requires ai_provider (your full model id) for audit traceability. |
| cl_scan_all | Scans all articles, returns a summary (one row per article + top findings). | For per-article detail use cl_scan(articles="N"). Per-article timeout of 30s. |
| cl_explain | Plain-language explanation of a single article: requirement summary, what's automatable vs needs human judgment, the ComplianceLint checklist, cross-references. | Read-only — no scan side-effects. |
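The four accepted shapes of the articles parameter can be illustrated with a small parser. This is a sketch of the acceptance rules described above, not the scanner's actual implementation:

```python
import json

def parse_articles(raw: str):
    """Illustrative parser for cl_scan's `articles` parameter.

    Accepts "all" (default), a single number "12", a comma list
    "9,12,14", or a JSON array "[9, 12, 14]".
    """
    raw = raw.strip()
    if raw.lower() == "all":
        return "all"
    if raw.startswith("["):
        return [int(n) for n in json.loads(raw)]
    return [int(part) for part in raw.split(",")]

parse_articles("12")          # [12]
parse_articles("9,12,14")     # [9, 12, 14]
parse_articles("[9, 12, 14]") # [9, 12, 14]
```

All three non-"all" shapes normalise to the same list of article numbers, which is why they are interchangeable in calls.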
Step 3
Action & guidance (4 tools)
Once findings exist, these tools help with remediation, questionnaire navigation, regulation deadlines, and the interim compliance checklist.
| Tool | What it does | Important detail |
|---|---|---|
| cl_action_plan | Generates a prioritised human action plan for an article (or all articles). Items requiring human judgment are flagged. | Requires a prior scan (cl_scan or cl_scan_all) — needs project context loaded. |
| cl_action_guide | Tells you where to complete a Human Gate questionnaire for a specific obligation. | Does NOT return questionnaire content or accept answers — those live on the dashboard. The tool only points you there. |
| cl_check_updates | Returns the enforcement-date status of EU AI Act milestones (Art. 5 / 4 already in force, Art. 51-55 already in force, Art. 6-49 enforceable from 2026-08-02, etc.) plus standards-track status. | Not a content-diff tool — it does not detect whether obligation JSONs have moved since your last scan. |
| cl_interim_standard | Returns the ComplianceLint compliance checklist for a specific article — fills the gap where official CEN-CENELEC standards don't yet exist. | Marked as non-official; will be replaced when official standards land. |
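The enforcement-date classification cl_check_updates performs reduces to a date comparison per milestone. A sketch of that logic, using the one date the tool description above states (Art. 6-49 enforceable from 2026-08-02); the function name is ours:

```python
from datetime import date

def milestone_status(enforceable_from: date, today: date) -> str:
    """Classify a single enforcement milestone relative to 'today'."""
    if today >= enforceable_from:
        return "in force"
    return f"enforceable in {(enforceable_from - today).days} days"

# Art. 6-49 become enforceable on 2026-08-02 (per cl_check_updates above).
milestone_status(date(2026, 8, 2), today=date(2026, 4, 30))  # "enforceable in 94 days"
```

Remember the caveat in the table: this is a calendar check only — it says nothing about whether obligation JSONs changed since your last scan.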
Step 4
Findings management (4 tools)
These tools mutate the local findings DB (.compliancelint/local/). Operations are idempotent and persist only locally — until you run cl_sync, the dashboard knows nothing about your changes.
| Tool | What it does | Important detail |
|---|---|---|
| cl_update_finding | Update one finding: provide evidence, mark NA, rebut, or acknowledge. | The AI must VERIFY the evidence specifically satisfies the obligation before calling this — vague evidence gets rejected. Bad evidence → false COMPLIANT → legal liability. |
| cl_update_finding_batch | Same as above but for many findings in one call. Two modes: (a) per-obligation explicit list, or (b) article-level evidence (one file applies to all open findings in that article). | Single user approval covers the entire batch — recommended over loop-calling cl_update_finding. |
| cl_verify_evidence | Loads compliance-evidence.json from the project root and returns verification instructions for each evidence item per its storage_kind (text → judge inline; repo_file → Read bytes; git_path → Read at line; url_reference → WebFetch). | Tool returns instructions; the AI client does the actual verification. |
| cl_delete | Three-target removal with very different blast radii. local (default, reversible): removes scan cache only, keeps git-committed evidence + .compliancelintrc. dashboard: removes the SaaS row + cascaded findings; on-disk preserved. all (IRREVERSIBLE): removes everything including git-committed evidence + .compliancelintrc; requires the exact phrase "I understand this is irreversible". | If user intent is ambiguous about target, the tool aborts and returns will_delete / will_keep lists for the user to disambiguate. |
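The three blast radii of cl_delete and its guard rails can be sketched as a dispatch table plus two abort conditions. This mirrors the behaviour described in the row above (the exact field names in the abort response are our illustration):

```python
CONFIRM_PHRASE = "I understand this is irreversible"

SCOPES = {
    "local": {  # reversible: scan cache only
        "will_delete": ["scan cache (.compliancelint/local/)"],
        "will_keep": ["git-committed evidence", ".compliancelintrc"],
    },
    "dashboard": {  # SaaS row + cascaded findings; on-disk state preserved
        "will_delete": ["dashboard row", "cascaded findings"],
        "will_keep": ["everything on disk"],
    },
    "all": {  # IRREVERSIBLE
        "will_delete": ["scan cache", "git-committed evidence",
                        ".compliancelintrc", "dashboard row"],
        "will_keep": [],
    },
}

def delete_request(target: str = "local", confirm: str = "") -> dict:
    if target not in SCOPES:
        # Ambiguous intent: abort and surface the lists so the user can choose.
        return {"status": "aborted", "reason": "ambiguous target", "options": SCOPES}
    if target == "all" and confirm != CONFIRM_PHRASE:
        return {"status": "aborted", "reason": "confirmation phrase required"}
    return {"status": "ok", **SCOPES[target]}
```

Note that "all" never proceeds without the exact confirmation phrase, and an unrecognised target fails closed rather than guessing.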
Step 5
Sync (1 tool)
The bridge between local repo state and the SaaS dashboard.
| Tool | What it does | Important detail |
|---|---|---|
| cl_sync | Reads .compliancelint/local/state.json and uploads to the dashboard. Requires a prior cl_connect. | Only findings JSON is sent — source code never leaves the machine. Each invocation gets a fresh request_id echoed back in headers + error envelopes for end-to-end log correlation between scanner-side and SaaS-side. |
Evidence file uploads (Pro+ via dashboard) are a separate path: files are relayed through the SaaS only as a transient pipe and land in your repo's .compliancelint/evidence/ directory. The SaaS does not retain the bytes. (See chapter 20 — Evidence file upload to repo.)
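The request_id correlation described above can be sketched as follows — a fresh id is minted per invocation and travels in both the headers and the body, so scanner-side and SaaS-side logs can be joined on one key (the header name X-Request-Id is our assumption, not documented wire format):

```python
import uuid

def sync_payload(findings: dict) -> tuple[dict, dict]:
    """Build the (headers, body) pair for one cl_sync invocation.

    Only findings JSON is ever in the body -- source code never
    leaves the machine. The SaaS echoes the request_id back in
    response headers and error envelopes for log correlation.
    """
    request_id = str(uuid.uuid4())
    headers = {"X-Request-Id": request_id}  # hypothetical header name
    body = {"request_id": request_id, "findings": findings}
    return headers, body

headers, body = sync_payload({"article_12": {"status": "acknowledged"}})
```

Because the id is fresh per call, two retries of the same sync are distinguishable in both sets of logs.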
What can go wrong
- A cl_* tool isn't visible in your MCP host's tool palette — Check that the MCP server is running (the host typically shows a connection indicator). For VS Code Claude Code, the panel header shows the server status. If unresponsive, restart the host or re-add the MCP server entry.
- cl_sync fails with unauthorized or 403 — The API key in .compliancelintrc is stale or revoked. Run cl_connect(switch_account=true) to re-authenticate.
- cl_scan or cl_scan_all takes much longer than expected — Scan time scales with the AI-side cl_analyze_project work, which scales with project size and AI provider latency. cl_scan_all has a 30-second per-article timeout to bound the worst case.
- After running cl_scan, the dashboard still shows old findings — You haven't run cl_sync yet. The local scan DB and dashboard DB are independent until cl_sync pushes the diff.
Related
Last updated: 2026-04-30