Complete Guide
EU AI Act Third-Party Risk: A Complete Guide for DACH-Regulated Organizations
TL;DR. The EU AI Act applies to deployers, not just providers. The prohibitions have applied since February 2025 and the GPAI and governance rules since August 2025; from August 2026, the high-risk obligations under Annex III bite. Any AI in your vendor stack passes obligations down to you. Standard TPRM tools, built before AI was a risk category, miss this exposure entirely. This guide explains how to classify vendor AI under the Act, what documents to actually verify, where GPAI obligations propagate downstream, and how a 13-dimension assessment framework maps the work to your existing risk-management process.
1. What the EU AI Act actually requires — providers vs. deployers
The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024 and phases in through August 2027, with transitional arrangements running to 2030. The Act distinguishes several roles, and the obligations attached to each are not symmetric:
- Provider (Art. 3(3)) — develops or commissions an AI system or GPAI model and places it on the EU market under its own name
- Deployer (Art. 3(4)) — uses an AI system under its authority in a professional capacity
- Importer (Art. 3(6)) — places on the EU market an AI system originating from a non-EU provider
- Distributor (Art. 3(7)) — makes an AI system available on the EU market without affecting its properties
- Authorized representative (Art. 3(5)) — designated EU representative for non-EU providers
A single organization frequently occupies multiple roles. A bank that licenses a fraud-detection model from a SaaS vendor, retrains it on its own data, and offers fraud-as-a-service to subsidiaries is simultaneously a deployer (of the original system), a provider (of its retrained variant — see Art. 25(1)(c)), and arguably an importer if the vendor is non-EU.
What changes for deployers under the Act:
| Obligation | Article | Triggers when |
|---|---|---|
| Use system per provider's instructions for use | 26(1) | Always for high-risk AI |
| Assign human oversight to competent persons | 26(2) | High-risk |
| Ensure input data is relevant and representative | 26(4) | High-risk |
| Monitor operation, retain logs ≥ 6 months | 26(5)–(6) | High-risk |
| Inform workers' representatives + workers | 26(7) | Workplace high-risk |
| Conduct fundamental rights impact assessment | 27 | Annex III public-sector + essential services |
| Cooperate with competent authorities | 26(12) | Always |
| Transparency for limited-risk systems | 50 | Limited risk (chatbots, deepfakes, emotion recognition, biometric categorisation) |
The deployer obligations under Art. 26 are not optional. They apply regardless of how diligent the provider was, and they require the deployer to operate the high-risk AI in a way that complies with the Act — meaning the deployer needs to know what kind of system they actually have.
What providers owe their downstream deployers
Providers of high-risk AI must give deployers everything they need to comply (Arts. 13–14). This includes:
- Detailed instructions for use (Art. 13(3))
- Information on accuracy, robustness, and cybersecurity (Art. 13(3)(b)(ii))
- Specifications for input data (Art. 13(3)(b)(vi))
- Description of human oversight measures the deployer must implement (Art. 14)
- Performance limits under specific conditions
If a vendor cannot provide any of these on request, that vendor is not Act-compliant — and using their system as a deployer puts you out of compliance too.
2. The deployer's third-party AI problem
In a typical DACH-regulated organization (bank, insurer, healthcare network, energy utility), the majority of AI, often 60–80% of systems in use, is not built in-house. It comes from:
- SaaS vendors who shipped AI features into existing products (fraud detection, document classification, chatbots, RAG-based knowledge tools)
- Cloud providers offering AI primitives (Bedrock, Azure OpenAI, Vertex AI) wrapped into customer-built workflows
- Specialist AI vendors (translation, summarization, code generation)
- General-purpose AI models accessed via API (GPT, Claude, Gemini, open-source models on Hugging Face)
- Embedded models inside non-AI vendors (CRM, HR, ERP)
Each of these can trigger AI Act obligations. Each has a different role in the regulatory chain. Each has documentation that may or may not exist, may or may not be valid, may or may not cover what the vendor claims.
The deployer's core problem is straightforward: before you can comply with Art. 26 obligations, you need to know what classification your vendor's AI falls under, and you need that classification to be defensible to your regulator and your auditor.
The compounding effect of GPAI
The Act introduces a separate set of obligations for general-purpose AI models (Art. 51–55). A GPAI model is one that displays significant generality, can perform a wide range of distinct tasks, and can be integrated into a variety of downstream systems (Art. 3(63)).
When your vendor's product uses a GPAI model underneath — and most modern AI vendors do — three things happen:
- The GPAI provider has obligations (technical documentation, copyright compliance, training data summary)
- The GPAI provider must make information available to your vendor
- Your vendor inherits responsibility for the system as integrated, with the GPAI as a sub-component
If the GPAI is classified as systemic-risk GPAI (training compute > 10²⁵ FLOPs as of writing — Art. 51(2)), additional obligations apply: model evaluation, adversarial testing, incident reporting, cybersecurity protections (Art. 55).
This means your vendor risk now has three axes:
- Risk tier of the system as deployed (Art. 5 / 6 / 50 / minimal)
- GPAI status of any models underneath
- Systemic-risk GPAI status
Standard TPRM frameworks track none of these.
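If your vendor register needs to carry these axes, a single structured record per vendor system is enough to start. A minimal sketch in Python (field names and enum values are illustrative, not terms prescribed by the Act):

```python
# Minimal sketch of a per-system vendor risk record covering the three axes above.
# All names are illustrative; they are not terms defined by the Act.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "Art. 5"
    HIGH_RISK = "Art. 6 / Annex I or III"
    LIMITED = "Art. 50"
    MINIMAL = "minimal"

@dataclass
class VendorAISystem:
    vendor: str
    system: str
    risk_tier: RiskTier
    uses_gpai: bool                    # a general-purpose model integrated underneath?
    gpai_model: str | None = None      # the foundation model the vendor discloses, if any
    gpai_systemic_risk: bool = False   # Art. 51(2) threshold met, per provider notification

# Example: a vendor chatbot built on a disclosed GPAI model
helpdesk_bot = VendorAISystem(
    vendor="Example SaaS GmbH",
    system="Helpdesk Copilot",
    risk_tier=RiskTier.LIMITED,
    uses_gpai=True,
    gpai_model="<model per vendor disclosure>",
)
```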
3. Risk-tier classification for vendor AI
Every AI Act assessment of a vendor system answers two questions:
- What tier does this system fall into?
- What role(s) does the vendor have, and what role(s) do you have?
Tier 1: Prohibited (Art. 5) — fully active since February 2025
The Act prohibits specific AI practices outright. Practices banned under Art. 5 include:
- Subliminal techniques materially distorting behavior
- Exploitation of vulnerabilities of specific groups (age, disability, socio-economic situation)
- Social scoring by public authorities or on their behalf
- Biometric categorisation inferring race, political opinions, trade union membership, religion, sex life, sexual orientation
- Untargeted scraping of facial images for biometric databases
- Emotion recognition in workplace and educational institutions (with narrow exceptions)
- Predictive policing based solely on profiling
- Real-time remote biometric identification in publicly accessible spaces (with very narrow law-enforcement exceptions)
If a vendor's system performs any of these, the assessment ends there. You cannot deploy a prohibited system, period.
In assessing vendors, prohibitions matter even when the system's intended purpose is benign — because reasonably foreseeable misuse (Art. 9(2)(b)) extends the analysis. A vendor that ships a "workplace productivity insight" tool with emotion-inference features built in falls under Art. 5(1)(f) regardless of marketing language.
Tier 2: High-risk (Art. 6 + Annexes I, III) — Annex III bites August 2026, Annex I August 2027
A system is high-risk when:
- It is a safety component of a product covered by Annex I Union harmonisation legislation (machinery, medical devices, toys, automotive, civil aviation, etc.), AND
- That product is subject to third-party conformity assessment
OR
- It falls into one of eight Annex III categories:
  1. Biometric ID + categorisation
  2. Critical infrastructure (water, gas, electricity, traffic)
  3. Education + vocational training (admission, evaluation, monitoring)
  4. Employment (recruitment, performance evaluation, task allocation)
  5. Essential private and public services (credit scoring, life/health insurance pricing, emergency dispatch, public benefits eligibility)
  6. Law enforcement (risk assessment, polygraph, evidence reliability)
  7. Migration, asylum, border control
  8. Justice + democratic processes (case-law search, election-influencing systems)
For Annex III systems, the deployer obligations under Art. 26 fully apply. The deployer must understand which category, must implement human oversight, retain logs, conduct FRIA where required (Art. 27).
For Annex I systems, integration with sectoral conformity assessment (MDR for medical devices, Machinery Regulation, etc.) is required. The deployer's obligations may flow through sector-specific channels.
Tier 3: Limited risk — transparency obligations (Art. 50) — phased
Limited-risk systems require transparency disclosures:
- Chatbots / conversational AI — users must be informed they are interacting with an AI, except where obvious (Art. 50(1))
- Deepfakes / synthetic image, audio, video — must be labelled as artificially generated (Art. 50(4))
- Emotion recognition + biometric categorisation outside workplace/education — subjects must be informed (Art. 50(3))
Art. 50 obligations sit on both providers and deployers. A SaaS vendor providing a chatbot widget must enable disclosure; a deployer integrating that widget on their customer site must verify the disclosure is shown.
Tier 4: Minimal risk — voluntary
Everything outside the above is minimal risk. The Act encourages voluntary codes of conduct (Art. 95) but imposes no obligations.
GPAI — orthogonal axis (Art. 51–55) — provider obligations live August 2025
GPAI obligations apply to providers of general-purpose AI models. For deployers, the practical concern is:
- Verify your vendor identifies any GPAI underneath their system
- Verify whether the GPAI model appears, where applicable, on the Commission's published list of GPAI models with systemic risk (Art. 52(6))
- Verify documentation flow from GPAI provider → your vendor → you covers the integration
Deployers do not have direct GPAI obligations under Art. 51–55, but their vendor's compliance posture for the GPAI it uses materially affects whether the deployed system itself complies with Art. 6 high-risk requirements.
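The tiering logic of this section can be sketched as a single decision function. This is an illustration of the order of the questions only, not a rules engine; the inputs are placeholders for real legal analysis:

```python
# Sketch of the tier-classification order from this section; inputs are placeholders
# for legal analysis, not automated checks.
def classify_vendor_system(
    performs_art5_practice: bool,     # any Art. 5 practice, incl. reasonably foreseeable misuse
    annex_i_safety_component: bool,   # safety component of an Annex I product with third-party CA
    annex_iii_category: str | None,   # e.g. "employment", "credit scoring", or None
    art50_transparency_case: bool,    # chatbot, deepfake, emotion recognition, biometric categorisation
) -> str:
    if performs_art5_practice:
        return "Prohibited (Art. 5): do not deploy"
    if annex_i_safety_component or annex_iii_category is not None:
        return "High-risk (Art. 6): deployer duties under Arts. 26/27 apply"
    if art50_transparency_case:
        return "Limited risk (Art. 50): transparency duties apply"
    return "Minimal risk: voluntary codes only (Art. 95)"

# Example: a vendor CV-screening tool falls under Annex III category 4 (employment)
print(classify_vendor_system(False, False, "employment", False))
```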
4. Why standard questionnaires miss EU AI Act exposure
Most TPRM workflows in 2026 still rely on questionnaire-based assessment: a document of 40–80 questions covering data handling, security controls, sub-processors, breach history. The vendor self-attests, the risk team scores, the contract proceeds.
Three things questionnaires cannot do for AI Act assessment:
Questionnaires cannot classify
A question like "Does your AI system use personal data?" produces a yes/no answer that doesn't tell you whether the system is Annex III high-risk. Classification requires a structured analysis of:
- Intended purpose statement vs. actual capabilities
- Reasonably foreseeable misuse (which the vendor by definition won't volunteer)
- Annex III category mapping with edge-case judgment
- GPAI dependency disclosure with technical evidence
- Provider/deployer role allocation in your specific deployment
A vendor under no commercial pressure to disclose exposure won't disclose it. A questionnaire is not a forensic instrument.
Questionnaires cannot verify documents
When a vendor uploads a SOC 2 Type II report, a typical TPRM workflow stops at "received". But the SOC 2's Trust Services Criteria scope might exclude Confidentiality. The reporting period might have ended nine months ago with no bridge letter covering the gap. The CUEC list might require your team to implement controls you've never been told about.
The same applies to ISO 27001 (scope statement may carve out the AI subsystem), BSI C5 (Type 1 vs Type 2 distinction matters), DPAs (sub-processor consent mechanism may be silent or non-existent), SBOMs (may list components but not flag known vulnerabilities).
Verification means reading the documents — not just receiving them.
Questionnaires cannot test the actual system
If the vendor's AI system is a chatbot, the only way to know whether its safety guardrails work is to test them. Adversarial probes — prompt injection, jailbreak, data leakage, PII handling, multilingual edge cases — produce reproducible evidence of failures that no self-attestation can match.
This is why AI red-teaming has emerged as a third assessment layer alongside documentary review and security checks. It treats the vendor's AI like any other security control and tests whether it actually does what the marketing says.
5. A 13-dimension framework for AI vendor assessment
A complete TPRM assessment of an AI vendor under the EU AI Act needs to cover thirteen risk dimensions:
| # | Dimension | What gets verified |
|---|---|---|
| 1 | Legal entity & beneficial ownership | Registry data, ownership chain, sanctions exposure of ultimate beneficial owners |
| 2 | Data processing scope | DPA coverage under GDPR Art. 28, sub-processor consent mechanism, retention policies |
| 3 | Security certifications | ISO 27001, SOC 2 (Type I vs II), BSI C5 (Type 1 vs 2), expiry, scope vs. claim |
| 4 | Sub-processor chain | Named 4th-parties, concentration risk, jurisdictional spread |
| 5 | Data residency | EU-only vs. cross-border, transfer mechanisms (SCCs), TIA where required |
| 6 | Incident & breach history | Public disclosures, regulatory actions, settlement records, response times |
| 7 | Sanctions / PEP / adverse media | OFAC, EU, UK, UN lists; named individuals; recent press coverage |
| 8 | Business continuity | RTO/RPO commitments, DR test cadence, vendor substitutability |
| 9 | Technical attack surface | DNS, TLS configuration, exposed assets, certificate hygiene |
| 10 | AI use disclosure | Model type, training data origin, inference path, GPAI dependency, system card |
| 11 | EU AI Act risk tier | Annex I, Annex III, Art. 50, minimal — with reasoning citing Article numbers |
| 12 | Documentation completeness | DPA, AUP, sub-processor list, SBOM, DPIA where applicable, system card |
| 13 | DORA / NIS2 / CSDDD applicability | Sector-specific obligation mapping |
Each dimension produces a status (complete / partial / missing / contradictory) with cited evidence. A finding without a quotation, document reference, or probe ID is not a finding — it is an opinion.
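The evidence rule can be enforced mechanically. A minimal sketch of a finding record that refuses to exist without a citation (names and example values are illustrative):

```python
# Sketch of a per-dimension finding that cannot be created without cited evidence.
# Dimension numbers refer to the table above; names are illustrative.
from dataclasses import dataclass
from typing import Literal

Status = Literal["complete", "partial", "missing", "contradictory"]

@dataclass
class Finding:
    dimension: int      # 1-13, per the table above
    status: Status
    evidence: str       # quotation, document reference, or probe ID

    def __post_init__(self) -> None:
        if not self.evidence.strip():
            raise ValueError("A finding without cited evidence is an opinion, not a finding.")

# Example: dimension 12 (documentation completeness), sub-processor list missing
finding = Finding(
    dimension=12,
    status="missing",
    evidence="DPA v2.1, Annex 2: sub-processor list marked 'to be provided'",
)
```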
6. Documentary verification: what to actually check
The five documents that matter most for AI vendor TPRM:
Data Processing Agreement (DPA)
GDPR Art. 28 requires:
- Subject matter and duration of processing
- Nature and purpose of processing
- Type of personal data and categories of data subjects
- Obligations and rights of the controller
- Sub-processor authorization mechanism (Art. 28(2)) — this is where most DPAs are weakest
- Sub-processor list and notification of changes
- Data subject rights handling
- Audit rights of the controller
Read for: whether the AI use case is named in the subject matter; whether the sub-processor mechanism actually allows you to object; whether the DPA contemplates GPAI sub-processors specifically (most do not).
ISO 27001 certificate + Statement of Applicability
ISO 27001 certs come in many shapes. The certificate alone is meaningless — what matters is the scope statement and the SoA.
Read for:
- Does the scope explicitly include the AI system you are about to use?
- Is the certifying body recognized (UKAS, DAkkS for DACH, IAS)?
- Is the cert in surveillance audit cycle, or stale?
- Does the SoA include Annex A controls relevant to AI (A.5.30 on ICT readiness for business continuity, A.8.2 on privileged access rights, A.8.16 on monitoring activities)?
SOC 2 Type II report
Read for:
- Period covered (older than 12 months → useless)
- Trust Services Criteria scope: Security only, or also Availability / Processing Integrity / Confidentiality / Privacy?
- Exceptions and qualifications
- CUECs — what controls does the customer (you) need to implement for the SOC 2 to hold?
- Subservice organizations and their attestations
BSI C5 attestation
For DACH-specific deployments. C5:2020 is the current version. Read for:
- Type 1 (design) vs. Type 2 (operating effectiveness) — Type 2 is what you want
- Scope: services covered, geographic regions
- Auditor and audit period
- Findings and responses
SBOM (Software Bill of Materials)
For AI vendors, the SBOM should include:
- Foundation model identity and version
- Training data lineage where claimed
- Open-source components with versions
- Known vulnerabilities flagged
CycloneDX and SPDX are the dominant formats. A vendor unable to produce an SBOM in 2026 is a procurement red flag regardless of AI Act status.
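One cheap automated gate is to check whether the vendor's CycloneDX SBOM declares a machine-learning model component at all. A sketch, assuming a CycloneDX 1.5+ JSON file (the file name is hypothetical):

```python
# Sketch: flag CycloneDX SBOMs that declare no ML model component.
# Assumes CycloneDX 1.5+ JSON, where "machine-learning-model" is a component type;
# the file path is illustrative.
import json

def declares_ml_model(sbom_path: str) -> bool:
    with open(sbom_path, encoding="utf-8") as f:
        sbom = json.load(f)
    return any(
        component.get("type") == "machine-learning-model"
        for component in sbom.get("components", [])
    )

if not declares_ml_model("vendor_sbom.cdx.json"):
    print("SBOM lists no ML model component; ask which foundation model is actually in use.")
```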
7. AI red-teaming as procurement diligence
Red-teaming has migrated from research practice to standard procurement diligence. The work breaks down into eight categories:
Probe categories
- Prompt injection — direct, indirect (via RAG context), encoded (base64, unicode), multi-turn
- Jailbreak — persona, hypothetical, roleplay chains, DAN-style
- Data leakage — training-data extraction, system-prompt disclosure, RAG-document leak
- PII handling — synthetic PII tests, redaction quality, downstream propagation
- Toxicity and bias — protected-class probes (race, gender, age, disability, religion, nationality)
- Hallucination — fabricated citation tests, confidence calibration, made-up legal/medical/financial claims
- Tool / function abuse — when the vendor exposes tools, scope misuse
- Multilingual edge cases — DE, RU, AR, TR adversarial inputs (matters for DACH and tourism use cases)
Probe count by tier
A reasonable working baseline:
- Starter: 5 probes (one per top category)
- Pro: 5–10 probes
- Enterprise: 25+ probes including multilingual, multi-turn, and tool-misuse
Each probe produces a pass / partial / fail record with reproduction steps. A failure with severity (Critical / High / Medium / Low) and category becomes a finding in the assessment report.
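Rolled up, those records become the failure summary that feeds the assessment report. A minimal sketch with an illustrative probe log (the log format and entries are invented for illustration):

```python
# Sketch: roll probe outcomes up into a failure count by severity.
# The log format (probe_id, outcome, severity) and the entries are illustrative.
from collections import Counter

probe_log = [
    ("PI-001", "pass", None),
    ("PI-003", "fail", "High"),       # indirect prompt injection via RAG context
    ("JB-002", "partial", "Medium"),  # roleplay-chain jailbreak, partially blocked
    ("DL-001", "fail", "Critical"),   # system-prompt disclosure
]

failures_by_severity = Counter(
    severity for _, outcome, severity in probe_log if outcome != "pass"
)
print(failures_by_severity)  # Counter({'High': 1, 'Medium': 1, 'Critical': 1})
```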
Authorization and ethics
Red-teaming a vendor's production system requires written authorization. PartnerScope's standard contract clause grants authorization for the scope of the assessment. Without it, probing crosses into unauthorized testing.
8. Conformity assessment — what flows down to deployers
Conformity assessment under the Act applies to providers, not deployers, but the documentary outputs flow to deployers and form the basis of your own compliance.
For Annex III high-risk systems, two routes exist:
- Annex VI (internal control) — most Annex III systems; provider self-assesses
- Annex VII (notified body involved) — biometric ID systems + Annex I products
For your deployer assessment, the question is: does the vendor produce an Annex IV technical file?
Required sections include:
- General description of the system
- Detailed description of elements and development process
- Information on monitoring, functioning, control
- Risk management system (Art. 9)
- Lifecycle changes
- Harmonised standards applied
- EU declaration of conformity
- Post-market monitoring plan
A vendor that cannot produce or summarize this file for an Annex III high-risk system is not Act-compliant. Procurement should treat this as gating for high-risk deployments.
9. Timeline — when each obligation bites
| Date | What kicks in |
|---|---|
| 1 Aug 2024 | Act enters into force |
| 2 Feb 2025 | Prohibitions (Art. 5) and AI literacy (Art. 4) |
| 2 Aug 2025 | GPAI provider obligations (Arts. 51–55), governance bodies, penalties (Art. 99) |
| 2 Aug 2026 | Most other obligations, including Annex III high-risk for deployers (Art. 26, 27) |
| 2 Aug 2027 | Annex I high-risk systems (Art. 6(1)) |
| 2 Aug 2030 | Compliance deadline for high-risk systems placed on the market before Aug 2026 and intended for use by public authorities (Art. 111(2)) |
The practical implication for deployers: high-risk Annex III obligations under Art. 26 are live 2 August 2026. If a high-risk vendor system is in your stack on that date and you cannot demonstrate Art. 26 compliance, you are non-compliant.
10. Intersections with DORA, NIS2, and CSDDD
EU AI Act third-party risk assessment does not operate in isolation. Three other regulations create overlapping vendor-risk obligations:
DORA (Regulation 2022/2554) — financial entities
DORA Articles 28–30 require financial entities to manage ICT third-party risk. A register of all third-party arrangements is mandated. For AI vendors that are critical ICT third-party providers (CTPPs), oversight by the European Supervisory Authorities applies directly.
The 13-dimension assessment maps directly to DORA Article 28 requirements: concentration risk (dimension 4), exit strategy (dimension 8), incident chain (dimensions 6 and 13), sub-outsourcing (dimensions 2 and 4).
NIS2 (Directive 2022/2555)
NIS2 Article 21 requires essential and important entities to implement measures including supply chain security (21(2)(d)). Vendor AI sits within supply chain risk where the AI provides a service to the entity.
CSDDD (Directive 2024/1760)
CSDDD applies to large companies and requires due diligence across business chains for human rights and environmental impact. AI vendors whose use could affect protected rights (employment AI, biometric AI in Annex III categories) fall within the CSDDD assessment when the deployer is in scope.
11. Implementation: a 30/60/90-day plan for risk teams
First 30 days — inventory
- Build an AI vendor inventory: every SaaS, cloud service, embedded model, API access in current use
- For each, identify role (provider / deployer / importer), risk tier (best estimate), GPAI dependency
- Flag the high-risk Annex III candidates and the Art. 50 limited-risk candidates
- Identify the top 10 vendors by risk × spend for prioritized assessment (a minimal sketch of the inventory rows and shortlist follows this list)
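A minimal sketch of that inventory and the risk × spend shortlist, with illustrative rows and a crude tier weighting (not a prescribed methodology):

```python
# Sketch: the 30-day inventory as plain rows, sorted into the risk x spend shortlist.
# Keys, weights, and example rows are illustrative.
inventory = [
    {"vendor": "Example HR SaaS", "system": "CV screening", "our_role": "deployer",
     "tier_estimate": "Annex III", "gpai": "undisclosed", "spend_eur": 48_000},
    {"vendor": "Example Cloud AI", "system": "Document summarizer", "our_role": "deployer",
     "tier_estimate": "Art. 50", "gpai": "<disclosed foundation model>", "spend_eur": 12_000},
]

tier_weight = {"Art. 5": 4, "Annex III": 3, "Art. 50": 2, "minimal": 1, "unknown": 3}
shortlist = sorted(
    inventory,
    key=lambda row: tier_weight.get(row["tier_estimate"], 3) * row["spend_eur"],
    reverse=True,
)[:10]
print([row["vendor"] for row in shortlist])
```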
Days 30–60 — assessment
- Run full 13-dimension assessment on the top 10 vendors
- For each, produce an EU AI Act classification with cited reasoning
- Document Art. 13 / 14 / 26 information flow gaps
- Identify which vendors have not yet provided Annex IV-equivalent technical information
Days 60–90 — remediation + governance
- Issue remediation requests with deadlines for documentation gaps
- For high-risk vendors with material gaps, decide: accept residual risk (with sign-off) / require change / replace
- Establish ongoing monitoring for the AI vendor portfolio
- Integrate findings into existing risk register and DORA / NIS2 reporting
12. Frequently asked questions
Does the EU AI Act apply if my vendor is in the US?
Yes, in two ways. Under Art. 2(1)(c), the Act applies where output produced by the AI system is used in the EU. So a US vendor whose AI generates output consumed in the EU triggers Act obligations. Additionally, if the US vendor places the system on the EU market (sells or licenses it for use in the EU), Art. 2(1)(a) applies directly.
Can I rely on my vendor's self-assessment of AI Act compliance?
In principle yes; in practice no. Self-assessment is a starting point. As deployer, you carry obligations under Art. 26 that depend on the system's actual classification. If the vendor's self-assessment is wrong, your downstream obligations are misallocated, which puts you out of compliance even if you acted in good faith. Independent assessment and documentary verification are the standard of care.
What if my vendor uses ChatGPT, Claude, or Gemini under the hood?
Then the underlying model is a GPAI, and you need to know which one. The provider of the GPAI (OpenAI, Anthropic, Google) has obligations to your vendor under Art. 53, including technical documentation and copyright training-data summary. Your vendor must pass through the relevant information to you. If the GPAI is systemic-risk (currently anything trained with > 10²⁵ FLOPs), Art. 55 imposes additional obligations including model evaluation and incident reporting.
Does my SOC 2 already cover AI Act?
No. SOC 2 is an information security and trust criteria attestation. It does not classify systems under EU AI Act, does not evaluate Annex III applicability, and does not verify GPAI dependency or AI red-teaming. A SOC 2 Type II is a useful input to dimension 3 of your AI vendor assessment, but not a substitute.
How does PartnerScope's 13-dimension framework relate to ISO/IEC 42001?
ISO/IEC 42001 (AI Management System) is a management-system standard for organizations developing or using AI. It is complementary, not overlapping. ISO 42001 governs your internal AI governance posture; the 13-dimension assessment governs your view of vendor AI exposure. Many DACH organizations are pursuing ISO 42001 certification while running parallel AI vendor assessment.
What about AI red-teaming — is it required by the Act?
Not directly for deployers. Art. 55 requires systemic-risk GPAI providers to conduct adversarial testing. For deployers, red-teaming is best-practice diligence, not a regulatory requirement — but it produces evidence that your Art. 26(2) human oversight assignment is informed by actual system behavior, not vendor claims.
Where do I report a serious incident?
Providers report serious incidents under Art. 73. Deployers who identify a serious incident must immediately inform the provider and, where applicable, the importer or distributor, as well as the relevant market surveillance authority (Art. 73 for providers, Art. 26(5) for the deployer's role in the chain).
Do I need to register my vendor's AI in the EU database?
Providers of high-risk AI register their systems in the EU database (Art. 49; database established under Art. 71). Deployers generally do not register, with the exception of public-authority deployers of Annex III systems (Art. 49(3)), but the database is a verification tool: a high-risk Annex III system that your vendor cannot point to a database registration for is a flag.
What's the penalty exposure?
Up to €35M or 7% of global annual turnover, whichever is higher, for prohibited-practice violations (Art. 99(3)). Up to €15M or 3% for most other obligations, including deployer obligations under Art. 26 (Art. 99(4)). Up to €7.5M or 1% for incorrect, incomplete, or misleading information to authorities (Art. 99(5)). Penalties scale with deployer size and conduct.
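For undertakings, each ceiling is the higher of the fixed amount and the turnover percentage, so the percentage dominates for large groups. A quick illustration with invented figures:

```python
# Sketch: the Art. 99 fine ceiling for an undertaking is whichever is higher of the
# fixed cap and the turnover percentage. Turnover figure below is illustrative.
def fine_ceiling_eur(global_turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    return max(fixed_cap_eur, global_turnover_eur * pct)

# Prohibited-practice ceiling (Art. 99(3)) for a group with EUR 2bn global annual turnover
print(fine_ceiling_eur(2_000_000_000, 35_000_000, 0.07))  # 140000000.0
```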
How does PartnerScope help with this?
PartnerScope is a third-party AI risk assessment platform built for the EU AI Act. Every assessment produces a 13-dimension scorecard with EU AI Act classification, GPAI scoping, AI red-teaming evidence, and remediation roadmap — mapped to your deployer obligations under Art. 26 and adjacent regulations (DORA, NIS2, CSDDD). Pricing is transparent: Starter at €99 per vendor, Pro at €299 with red-teaming, Enterprise at €4,900 per quarter for 15-vendor portfolios with continuous monitoring. Operated by EKM Global Consulting GmbH in Baden-Baden, Germany.
13. References
This guide draws on:
- Regulation (EU) 2024/1689 (the EU AI Act)
- Regulation (EU) 2016/679 (GDPR)
- Regulation (EU) 2022/2554 (DORA)
- Directive (EU) 2022/2555 (NIS2)
- Directive (EU) 2024/1760 (CSDDD)
- ISO/IEC 27001:2022 — Information Security Management Systems
- ISO/IEC 42001:2023 — AI Management System
- ISO/IEC 23894:2023 — AI Risk Management
- BSI C5:2020 — Cloud Computing Compliance Criteria Catalogue
- ENISA — AI Cybersecurity Threat Landscape
- EU AI Office — implementing acts, GPAI Code of Practice, FAQs
Internal links — cluster pages
This pillar will link out to (and be linked from) the following cluster articles, all addressing more specific queries:
- "EU AI Act Annex III explained for procurement teams"
- "GPAI provider obligations and what they mean for your vendor stack"
- "DORA + EU AI Act: dual-regime vendor assessment for financial entities"
- "How to read a SOC 2 report when assessing AI vendors"
- "BSI C5 vs SOC 2 vs ISO 27001: which matters for AI vendor risk"
- "PartnerScope vs OneTrust: which TPRM platform handles EU AI Act?"
- "PartnerScope vs Vanta: AI vendor classification, side-by-side"
- "Building a 30/60/90 plan for EU AI Act vendor compliance"
CTA
Run a free 60-second EU AI Act Snapshot at partnerscope.eu — classifies your vendor's AI under Article 5, 6, Annex III, Art. 50, or minimal, plus GPAI status, before you finish reading this guide a second time.
Try PartnerScope
Run a free 60-second EU AI Act Snapshot — classifies your vendor's AI under the Act and produces a starter scorecard before any commitment.