Guide

Building a 30/60/90 Plan for EU AI Act Vendor Compliance

A 30/60/90 plan converts the EU AI Act third-party risk obligation into a sequenced operational programme. Days 1-30 build the AI vendor inventory and scope. Days 30-60 run full assessments on the highest-exposure vendors. Days 60-90 remediate gaps, formalise governance, and stand up ongoing monitoring. This plan is calibrated for DACH-regulated risk teams running ahead of the 2 August 2026 deployer-obligation deadline.


1. Why 90 days, and why this sequence

The EU AI Act's deployer obligations under Article 26 apply from 2 August 2026 for Annex III high-risk systems. The systemic-risk GPAI provider obligations under Article 55 have applied since 2 August 2025. National market-surveillance authorities have been stood up in most Member States, and BSI and BaFin in Germany are signalling examination focus on AI vendor portfolios.

A risk team starting today has roughly one quarter to convert from ad-hoc assessment to a defensible programme. 90 days is not arbitrary — it is the working horizon for management-board sign-off cycles, the typical refresh cadence for vendor risk reviews, and the practical limit of what a small risk team can sustain before the work calcifies.

The sequence — inventory → assessment → remediation/governance — mirrors the way regulators read a compliance file. Asking "show me your AI vendor inventory" is the first question; "show me your assessment of top-exposure vendors" is the second; "show me your remediation log and ongoing monitoring" is the third. A risk team that produces these three artefacts in this order has a story.

For the underlying framework, see the EU AI Act Third-Party Risk pillar.


2. Days 1-30: Inventory and scoping

The first month produces three artefacts: the AI vendor inventory, the role/tier mapping, and the prioritised assessment list.

Week 1: Inventory data sources

Pull AI usage signals from across the organisation:

| Source | Signal |
|---|---|
| Procurement contract repository | Vendors with "AI", "ML", "model", "neural", "GPT", "LLM" keywords |
| Cloud-cost reporting | OpenAI, Anthropic, Google AI, Bedrock, Azure OpenAI, Vertex AI, Hugging Face |
| Engineering tooling | API endpoint allowlists, package dependencies (transformers, langchain, openai SDK) |
| Procurement spend data | Vendors above materiality threshold with AI-feature roadmap |
| Business-unit interviews | Self-declared AI use by department heads |
| DPIA / DPA register | Processors flagged for AI/ML processing |
| Help-desk tickets | "AI" / "Copilot" / "ChatGPT" / "RAG" mentions |

Cross-reference. Each AI vendor should appear in at least two sources before being treated as confirmed. Single-source flags need verification.
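The two-source rule can be applied mechanically once the signals are pulled. A minimal sketch, assuming inventory signals are collected as (vendor, source) pairs; vendor names and source labels here are illustrative, not real findings:

```python
from collections import defaultdict

# Hypothetical inventory signals: (vendor, source) pairs pulled from the
# data sources in the table above. Names are illustrative only.
signals = [
    ("Acme AI GmbH", "procurement_contracts"),
    ("Acme AI GmbH", "cloud_cost_report"),
    ("Nimbus Analytics", "helpdesk_tickets"),
    ("Vertex-based CRM", "engineering_allowlist"),
    ("Vertex-based CRM", "dpia_register"),
]

def split_by_confirmation(signals):
    """Apply the two-source rule: a vendor is 'confirmed' only if it
    appears in at least two distinct sources; single-source flags go
    to a manual verification queue."""
    sources = defaultdict(set)
    for vendor, source in signals:
        sources[vendor].add(source)
    confirmed = sorted(v for v, s in sources.items() if len(s) >= 2)
    to_verify = sorted(v for v, s in sources.items() if len(s) == 1)
    return confirmed, to_verify

confirmed, to_verify = split_by_confirmation(signals)
# confirmed -> ['Acme AI GmbH', 'Vertex-based CRM']
# to_verify -> ['Nimbus Analytics']
```

The same structure scales to thousands of signal rows exported from procurement or cloud-cost tooling; only the ingestion changes.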

Week 2: Role and tier provisional mapping

For each confirmed AI vendor, capture: the vendor's provisional role under the Act (provider, deployer, importer, distributor), a provisional risk tier (prohibited, high-risk, Article 50 transparency, minimal), the relevant Annex III category if any, and any GPAI or systemic-risk GPAI dependency.

Provisional means "best estimate from documentation review pending full assessment". Document the reasoning and the open questions per vendor.

Week 3: Prioritisation matrix

Score each vendor on two axes:

| Risk axis | Spend / criticality axis |
|---|---|
| AI Act tier (high-risk → Art. 50 → minimal) | Annual spend |
| GPAI / systemic-risk GPAI | Number of users / customers affected |
| Annex III category sensitivity | Substitutability (low → high concentration risk) |
| Public-data / inference-only / training-data exposure | Integration depth (gateway → embedded) |

Top-10 = top decile by combined score. This is the assessment cohort for Days 30-60.
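The two-axis scoring can be sketched as a small function. The weights, tier points, and spend cap below are illustrative assumptions, not part of the framework; what matters is that risk outweighs spend and that an Annex III high-risk flag overrides the spend ranking entirely:

```python
def priority_score(vendor):
    """Combine the two axes into one score (weights are assumptions).
    Returns (score, must_assess); must_assess implements the explicit
    Annex III override from the failure-modes section."""
    tier_points = {"high-risk": 3, "art50": 2, "minimal": 1}
    risk = tier_points[vendor["tier"]]
    if vendor.get("systemic_gpai"):
        risk += 2  # systemic-risk GPAI dependency raises the risk axis
    spend = min(vendor["annual_spend_eur"] / 100_000, 3)  # cap spend influence
    score = risk * 2 + spend  # risk deliberately weighted above spend
    must_assess = vendor["tier"] == "high-risk"
    return score, must_assess

vendors = [
    {"name": "LowSpendHiRisk", "tier": "high-risk", "annual_spend_eur": 8_000},
    {"name": "BigSpendMinimal", "tier": "minimal", "annual_spend_eur": 900_000},
]
# Rank: Annex III override first, then combined score.
ranked = sorted(vendors, key=lambda v: priority_score(v)[::-1], reverse=True)
# LowSpendHiRisk ranks first despite roughly 1% of the spend.
```

This is exactly the bias correction named in the failure-modes section: without the override, the €900k minimal-risk vendor would crowd out the €8k Annex III vendor.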

Week 4: Assessment scoping and resource allocation

For each top-10 vendor: assign an assessment owner, set a target completion date inside the Days 30-60 window, and issue the documentation request (DPA, certifications, sub-processor list, system card) so that evidence arrives before the assessment starts.

Output of Days 1-30: AI vendor inventory (typically 30-150 vendors for a mid-sized DACH organisation), role/tier provisional mapping, top-10 prioritised assessment list with owners and timelines.

For procurement-side decisions during this phase, see the Annex III procurement cluster.


3. Days 30-60: Top-10 assessment

The second month converts the top-10 list into completed 13-dimension assessments. Each assessment produces a scorecard, an EU AI Act classification with citation, a documentation gap log, and a remediation list.

The 13-dimension assessment per vendor

| # | Dimension | Output |
|---|---|---|
| 1 | Legal entity & beneficial ownership | Registry data, ownership chain, sanctions screening result |
| 2 | Data processing scope | DPA scope, sub-processor consent mechanism, retention |
| 3 | Security certifications | SOC 2 type/scope/period; ISO 27001 scope/SoA; BSI C5 type — see BSI C5 cluster |
| 4 | Sub-processor chain | Named 4th-parties, concentration |
| 5 | Data residency | EU-only / cross-border, transfer mechanism |
| 6 | Incident & breach history | Public disclosures, regulator actions, response times |
| 7 | Sanctions / PEP / adverse media | OFAC / EU / UN / UK lists; named individuals |
| 8 | Business continuity | RTO/RPO, DR test cadence, substitutability |
| 9 | Technical attack surface | DNS, TLS, exposed assets, certificate hygiene |
| 10 | AI use disclosure | Model type, training data, GPAI dependency, system card |
| 11 | EU AI Act risk tier | Article-cited classification |
| 12 | Documentation completeness | DPA, AUP, sub-processor list, SBOM, DPIA, system card, Annex IV summary |
| 13 | DORA / NIS2 / CSDDD applicability | Sector mapping |

Each dimension produces complete / partial / missing / contradictory + cited evidence. A finding without a quotation, document reference, or probe ID is not a finding.
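The evidence rule can be enforced at the data-model level so that uncited findings cannot enter the scorecard. A minimal sketch, assuming findings are recorded as status plus a list of evidence items (quotes, document references, probe IDs); the class and field names are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

VALID_STATUSES = {"complete", "partial", "missing", "contradictory"}

@dataclass
class Finding:
    dimension: int                                 # 1-13, per the table above
    status: str                                    # one of VALID_STATUSES
    evidence: list = field(default_factory=list)   # quotes, doc refs, probe IDs

def is_valid_finding(f: Finding) -> bool:
    """Hard rule from the text: a finding without a quotation, document
    reference, or probe ID is not a finding."""
    return f.status in VALID_STATUSES and len(f.evidence) > 0

ok = Finding(3, "partial", ["SOC 2 Type II report, p. 14: EU region out of scope"])
bad = Finding(3, "partial", [])  # self-attestation only, no citation
# is_valid_finding(ok) -> True; is_valid_finding(bad) -> False
```

Rejecting `bad` at ingest time is the cheapest mitigation for the "self-attestation treated as evidence" failure mode described later.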

Pacing and concentration analysis

A 4-week phase: 3-4 assessments per week across the first three weeks covers the 10 vendors, with the final week reserved for cross-vendor concentration analysis. PartnerScope Pro at €299/vendor completes the cohort at €2,990; Enterprise (€4,900/quarter for 15 vendors) absorbs the cohort with 5 vendors of headroom.

Cross-vendor concentration

After all 10 individual assessments, run two concentration analyses: upstream GPAI concentration (how many of the ten depend on the same foundation-model provider) and sub-processor concentration (shared 4th-parties and hosting infrastructure across the cohort).

See the GPAI deployer cluster for the GPAI concentration analysis structure.

Output of Days 30-60: ten completed 13-dimension scorecards with article-cited EU AI Act classifications, per-vendor documentation gap logs, a consolidated remediation list, and the two cross-vendor concentration analyses.


4. Days 60-90: Remediation, governance, ongoing monitoring

The third month converts findings into actions and the actions into a sustained programme.

Remediation execution

For each vendor finding rated Critical or High:

| Action | Owner | Deadline |
|---|---|---|
| Issue remediation request to vendor with cited gap | Risk owner | Day 60 + 5 |
| Vendor response (acceptance, push-back, alternative) | Vendor | Day 60 + 20 |
| Risk decision: accept / require change / replace | Risk committee | Day 60 + 25 |
| Contract amendment if required | Procurement / legal | Day 60 + 30 (target) |
| Sign-off and documentation in risk register | First / second line | Day 90 - 5 |

A Critical finding without a remediation deadline within 90 days is a residual risk that must be approved at the management-body level under DORA Art. 28 and equivalent governance. See the DORA + EU AI Act cluster for the governance touchpoints.
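The escalation logic above reduces to a simple gate on plan-day offsets. A sketch under stated assumptions: the Day 75 gate comes from the failure-modes section, the Day 90 residual-risk rule from this paragraph, and the function shape itself is an assumed implementation, not part of the source framework:

```python
from datetime import date, timedelta

def escalation_status(severity, today, plan_start):
    """Classify an open Critical/High finding by plan day.
    Day >= 75 -> escalate to the risk committee (gate from the
    failure-modes section); Day >= 90 -> residual risk needing
    management-body approval (DORA Art. 28 and equivalent)."""
    day = (today - plan_start).days
    if severity not in ("Critical", "High"):
        return "monitor"
    if day >= 90:
        return "residual-risk: management-body approval required"
    if day >= 75:
        return "escalate: risk-committee gate"
    return "in-remediation"

start = date(2026, 1, 5)  # hypothetical plan start
print(escalation_status("Critical", start + timedelta(days=80), start))
# -> escalate: risk-committee gate
```

Wiring this into the risk register means no Critical finding can silently drift past Day 90 without an explicit management-body decision on the record.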

Governance formalisation

In parallel with remediation, formalise: the AI vendor risk policy for board approval, intake and triage procedures for newly discovered AI vendors, the escalation path to the risk committee, and the quarterly reporting line to the management body.

Ongoing monitoring stand-up

Beyond Day 90, the programme moves into monitoring mode: quarterly re-scoring of the full inventory, continuous sanctions and adverse-media watch on confirmed vendors, event-driven re-assessment triggers (new sub-processor, security incident, contract renewal), and quarterly board reporting.

Output of Days 60-90: closed remediation log for top-decile, board-approved policy, governance procedures live, monitoring cadence in operation.


5. Common failure modes

The 30/60/90 plan fails predictably when:

| Failure | Symptom | Mitigation |
|---|---|---|
| Inventory is incomplete | Vendors surface during Day 60 review that should have been on Day 7 | Two-source verification rule; engineering / cloud-cost integration |
| Top-10 ranking is biased toward spend, not risk | High-risk Annex III vendors with low spend get overlooked | Two-axis matrix; explicit Annex III flag overrides spend |
| Vendor self-attestation is treated as evidence | Findings without cited documents | Hard rule: every finding has a quote, document reference, or probe ID |
| Remediation deadlines slip without escalation | Day 90 governance sign-off pending | Day 75 escalation gate to risk committee |
| Programme dies after Day 90 | Quarterly cadence not stood up | Calendar-locked Q+1 review meeting and quarterly board reporting |

A single-quarter sprint that does not transition into ongoing monitoring is wasted; regulators read the quarterly cadence as the maturity signal.


6. Frequently asked questions

Can a risk team smaller than 5 people run this plan? Yes, with two compromises: outsource the assessment phase (PartnerScope Pro at €299/vendor or Enterprise at €4,900/quarter for 15 vendors) and reduce the assessment cohort to top-5 instead of top-10. The structure stays.

What is the right size for the AI vendor inventory before assessment? Most DACH mid-sized organisations land between 30 and 150 confirmed AI vendors after a thorough inventory. Anything below 20 typically reflects an incomplete inventory; anything above 200 typically reflects double-counting from non-AI products that ship AI features.

How do I justify the 13-dimension scope to executive management? Map each dimension to a regulator: SOC 2 → auditor; BSI C5 → BSI/BaFin; DPA → BfDI/DPA; Annex III classification → market-surveillance authority; sub-processor chain → DORA register; concentration → DORA Art. 29; AI red-teaming → operational risk. The scope is not optional; it is the regulators' aggregated checklist.

How does this plan handle vendors discovered after Day 30? Treat new discoveries as exception cases routed to the same triage: add to inventory, score on two-axis matrix, classify in next quarterly cohort unless the score puts them in the current top-10 (then bump). The plan's resilience is in the standing process, not the one-time exercise.
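The bump-or-defer routing described above can be sketched as a small function. The cohort structure and field names are assumptions for illustration:

```python
def triage_new_vendor(new_vendor, cohort, cohort_size=10):
    """Route a vendor discovered after Day 30: join the current cohort
    only if its score beats the weakest current member; otherwise it
    waits for the next quarterly cohort. Illustrative sketch."""
    if len(cohort) < cohort_size:
        cohort.append(new_vendor)
        return "added to current cohort"
    weakest = min(cohort, key=lambda v: v["score"])
    if new_vendor["score"] > weakest["score"]:
        cohort.remove(weakest)
        cohort.append(new_vendor)
        return f"bumped {weakest['name']} to next quarter"
    return "scheduled for next quarterly cohort"

cohort = [{"name": "A", "score": 9}, {"name": "B", "score": 4}]
print(triage_new_vendor({"name": "C", "score": 7}, cohort, cohort_size=2))
# -> bumped B to next quarter
```

Because the routing reuses the same two-axis score, a late discovery never needs a parallel process; it simply enters the standing triage.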

Can PartnerScope deliver the assessment cohort within Days 30-60? Yes for cohorts up to 15-20 vendors. Pro tier assessments deliver in 5-10 working days each; Enterprise onboarding for a 15-vendor portfolio takes 2-3 weeks plus assessment time. €4,900/quarter Enterprise + €2,500 onboarding fits the budget envelope of a typical mid-market risk function.


CTA

Run a free 60-second EU AI Act Snapshot at partnerscope.eu to seed your AI vendor inventory and surface the highest-exposure vendor in your stack. Or read the complete pillar guide.

Try PartnerScope

Run a free 60-second EU AI Act Snapshot — classifies your vendor's AI under the Act and produces a starter scorecard before any commitment.