
EU AI Act Third-Party Risk: A Complete Guide for DACH-Regulated Organizations


TL;DR. The EU AI Act applies to deployers, not just providers. The Art. 5 prohibitions have applied since February 2025; from August 2026, most remaining obligations go live, including the Annex III high-risk duties for deployers, with Annex I embedded high-risk following in August 2027. Anything AI in your vendor stack inherits obligations to you. Standard TPRM tools, built before AI was a risk category, miss this exposure entirely. This guide explains how to classify vendor AI under the Act, what documents to actually verify, where GPAI obligations propagate downstream, and how a 13-dimension assessment framework maps the work to your existing risk-management process.


1. What the EU AI Act actually requires — providers vs. deployers

The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024 and phases in over four years. The Act distinguishes several roles, and the obligations attached to each are not symmetric:

  - Provider: develops an AI system or GPAI model and places it on the market under its own name or trademark (Art. 3(3))
  - Deployer: uses an AI system under its own authority in the course of a professional activity (Art. 3(4))
  - Importer: EU-established entity placing on the market a system bearing the name of a non-EU provider (Art. 3(6))
  - Distributor: makes a system available on the EU market without being the provider or importer (Art. 3(7))

A single organization frequently occupies multiple roles. A bank that licenses a fraud-detection model from a SaaS vendor, retrains it on its own data, and offers fraud-as-a-service to subsidiaries is simultaneously a deployer (of the original system), a provider (of its retrained variant — see Art. 25(1)(c)), and arguably an importer if the vendor is non-EU.

What changes for deployers under the Act:

Obligation | Article | Triggers when
Use system per provider's instructions for use | 26(1) | Always for high-risk AI
Assign human oversight to competent persons | 26(2) | High-risk
Ensure input data is relevant and representative | 26(4) | High-risk
Monitor operation, retain logs ≥ 6 months | 26(5)–(6) | High-risk
Inform workers' representatives + workers | 26(7) | Workplace high-risk
Conduct fundamental rights impact assessment | 27 | Annex III public-sector + essential services
Cooperate with competent authorities | 26(11) | Always
Transparency for limited-risk systems | 50 | Limited risk (chatbots, deepfakes, emotion recognition, biometric categorisation)

The deployer obligations under Art. 26 are not optional. They apply regardless of how diligent the provider was, and they require the deployer to operate the high-risk AI in a way that complies with the Act — meaning the deployer needs to know what kind of system they actually have.
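The checklist character of Art. 26 lends itself to a simple tracking structure. A minimal sketch in Python, assuming an evidence register that your GRC tooling would populate; the dictionary layout and function names are invented for illustration:

```python
# Art. 26 deployer obligations, keyed by article (from the table above).
# The evidence register and this schema are illustrative, not from the Act.
DEPLOYER_OBLIGATIONS = {
    "26(1)": "Use system per provider's instructions for use",
    "26(2)": "Assign human oversight to competent persons",
    "26(4)": "Ensure input data is relevant and representative",
    "26(5)-(6)": "Monitor operation, retain logs >= 6 months",
    "26(7)": "Inform workers' representatives and workers",
    "27": "Fundamental rights impact assessment (where required)",
    "26(11)": "Cooperate with competent authorities",
}

def open_obligations(evidence: dict[str, bool]) -> list[str]:
    """Return the articles for which no compliance evidence is recorded."""
    return [art for art in DEPLOYER_OBLIGATIONS if not evidence.get(art, False)]

# Example: only instructions-for-use and oversight are evidenced so far.
gaps = open_obligations({"26(1)": True, "26(2)": True})
```

The point of the sketch: every high-risk vendor system should map to a record like this, with each gap owned and dated.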

What providers owe their downstream deployers

Providers of high-risk AI must give deployers everything they need to comply (Art. 13–14, 26(3)). This includes:

  - Instructions for use: intended purpose, levels of accuracy, robustness and cybersecurity, known limitations and foreseeable misuse (Art. 13(3))
  - Human oversight measures and how to implement them (Art. 14)
  - Input data specifications, where relevant
  - Expected lifetime, maintenance and update arrangements

If a vendor cannot provide any of these on request, that vendor is not Act-compliant — and using their system as a deployer puts you out of compliance too.


2. The deployer's third-party AI problem

In a typical DACH-regulated organization (bank, insurer, healthcare network, energy utility), 60–80% of AI is not built in-house. It comes from:

  - SaaS products with embedded AI features
  - API-based model providers integrated into internal tools
  - White-labeled or resold AI systems
  - Consultancy-built solutions on top of third-party models

Each of these can trigger AI Act obligations. Each has a different role in the regulatory chain. Each has documentation that may or may not exist, may or may not be valid, may or may not cover what the vendor claims.

The deployer's core problem is straightforward: before you can comply with Art. 26 obligations, you need to know what classification your vendor's AI falls under, and you need that classification to be defensible to your regulator and your auditor.

The compounding effect of GPAI

The Act introduces a separate set of obligations for general-purpose AI models (Art. 51–55). A GPAI model is one that displays significant generality, can perform a wide range of distinct tasks, and can be integrated into a variety of downstream systems (Art. 3(63)).

When your vendor's product uses a GPAI model underneath — and most modern AI vendors do — three things happen:

  1. The GPAI provider has obligations (technical documentation, copyright compliance, training data summary)
  2. The GPAI provider must make information available to your vendor
  3. Your vendor inherits responsibility for the system as integrated, with the GPAI as a sub-component

If the GPAI is classified as systemic-risk GPAI (training compute > 10²⁵ FLOPs as of writing — Art. 51(2)), additional obligations apply: model evaluation, adversarial testing, incident reporting, cybersecurity protections (Art. 55).

This means your vendor risk now has three axes:

  - Risk tier of the system as deployed (Art. 5 / 6 / 50 / minimal)
  - GPAI status of any models underneath
  - Systemic-risk GPAI status

Standard TPRM frameworks track none of these.
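A sketch of how the three axes could be captured in a vendor record; the schema and field names are this guide's own invention, not anything defined by the Act:

```python
from dataclasses import dataclass

# Illustrative record type for one vendor AI system. The tier labels and
# thresholds track the Act; the class itself is an assumption of this guide.
@dataclass(frozen=True)
class VendorAIRisk:
    tier: str              # "prohibited" | "high" | "limited" | "minimal"
    uses_gpai: bool        # any GPAI model underneath (Art. 3(63))
    systemic_gpai: bool    # training compute > 1e25 FLOPs (Art. 51(2))

    def deployable(self) -> bool:
        # Art. 5 systems can never be deployed; every other tier is a
        # question of meeting its obligations, not a hard stop.
        return self.tier != "prohibited"

chatbot = VendorAIRisk(tier="limited", uses_gpai=True, systemic_gpai=True)
```

Tracking all three fields per vendor is what standard TPRM records are missing.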


3. Risk-tier classification for vendor AI

Every AI Act assessment of a vendor system answers two questions:

  1. What tier does this system fall into?
  2. What role(s) does the vendor have, and what role(s) do you have?

Tier 1: Prohibited (Art. 5) — fully active since February 2025

The Act prohibits specific AI practices outright. Practices banned under Art. 5 include:

  - Subliminal or manipulative techniques that materially distort behaviour (Art. 5(1)(a))
  - Exploiting vulnerabilities of age, disability, or social/economic situation (Art. 5(1)(b))
  - Social scoring leading to detrimental or disproportionate treatment (Art. 5(1)(c))
  - Predicting criminal offences solely from profiling or personality traits (Art. 5(1)(d))
  - Untargeted scraping of facial images to build recognition databases (Art. 5(1)(e))
  - Emotion inference in workplaces and education institutions (Art. 5(1)(f))
  - Biometric categorisation to infer sensitive attributes (Art. 5(1)(g))
  - Real-time remote biometric identification in public spaces for law enforcement, with narrow exceptions (Art. 5(1)(h))

If a vendor's system performs any of these, the assessment ends there. You cannot deploy a prohibited system, period.

In assessing vendors, prohibitions matter even when the system's intended purpose is benign — because reasonably foreseeable misuse (Art. 9(2)(b)) extends the analysis. A vendor that ships a "workplace productivity insight" tool with emotion-inference features built in falls under Art. 5(1)(f) regardless of marketing language.

Tier 2: High-risk (Art. 6 + Annexes I, III) — Annex III bites August 2026, Annex I August 2027

A system is high-risk when:

  - it is a safety component of a product (or is itself a product) covered by Union harmonisation legislation listed in Annex I and subject to third-party conformity assessment (Art. 6(1))

OR

  - it falls within one of the use cases listed in Annex III: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, administration of justice (Art. 6(2))

For Annex III systems, the deployer obligations under Art. 26 fully apply. The deployer must understand which category, must implement human oversight, retain logs, conduct FRIA where required (Art. 27).

For Annex I systems, integration with sectoral conformity assessment (MDR for medical devices, Machinery Regulation, etc.) is required. The deployer's obligations may flow through sector-specific channels.

Tier 3: Limited risk — transparency obligations (Art. 50) — phased

Limited-risk systems require transparency disclosures:

  - Chatbots and other interactive systems must disclose that the user is interacting with AI (Art. 50(1))
  - Synthetic audio, image, video, and text content must be marked as artificially generated (Art. 50(2))
  - Emotion recognition and biometric categorisation systems must inform the persons exposed (Art. 50(3))
  - Deepfakes must be labelled as artificially generated or manipulated (Art. 50(4))

Art. 50 obligations sit on both providers and deployers. A SaaS vendor providing a chatbot widget must enable disclosure; a deployer integrating that widget on their customer site must verify the disclosure is shown.

Tier 4: Minimal risk — voluntary

Everything outside the above is minimal risk. The Act encourages voluntary codes of conduct (Art. 95) but imposes no obligations.

GPAI — orthogonal axis (Art. 51–55) — provider obligations live August 2025

GPAI obligations apply to providers of general-purpose AI models. For deployers, the practical concern is:

  1. Verify your vendor identifies any GPAI underneath their system
  2. Verify the GPAI model appears, where applicable, on the Commission's published list of GPAI models with systemic risk (Art. 52(6))
  3. Verify documentation flow from GPAI provider → your vendor → you covers the integration

Deployers do not have direct GPAI obligations under Art. 51–55, but their vendor's compliance posture for the GPAI it uses materially affects whether the deployed system itself complies with Art. 6 high-risk requirements.
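The three verification steps above can be sketched as a simple gap check; the disclosure field names are assumptions about what a vendor intake form might capture, not terms from the Act:

```python
# Gap check over a vendor's GPAI disclosure. The dict keys are invented
# field names for illustration; the three checks track the list above.
def gpai_chain_gaps(vendor_disclosure: dict) -> list[str]:
    gaps = []
    if not vendor_disclosure.get("gpai_models"):
        gaps.append("vendor does not identify underlying GPAI models")
    if not vendor_disclosure.get("gpai_provider_docs"):
        gaps.append("no Art. 53 documentation passed through from GPAI provider")
    if not vendor_disclosure.get("integration_covered"):
        gaps.append("documentation does not cover the integration as deployed")
    return gaps

# A complete disclosure produces no gaps; an empty one produces three.
clean = gpai_chain_gaps({"gpai_models": ["(redacted)"],
                         "gpai_provider_docs": True,
                         "integration_covered": True})
```

Any nonempty result feeds directly into dimension 10 of the assessment framework below.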


4. Why standard questionnaires miss EU AI Act exposure

Most TPRM workflows in 2026 still rely on questionnaire-based assessment: a document of 40–80 questions covering data handling, security controls, sub-processors, breach history. The vendor self-attests, the risk team scores, the contract proceeds.

Three things questionnaires cannot do for AI Act assessment:

Questionnaires cannot classify

A question like "Does your AI system use personal data?" produces a yes/no answer that doesn't tell you whether the system is Annex III high-risk. Classification requires a structured analysis of:

  - the system's intended purpose as documented, not as marketed
  - the deployment context mapped against the Annex III use-case categories
  - the roles in the chain (provider, deployer, importer) for this specific contract
  - any GPAI dependency underneath the product

A vendor under no commercial pressure to disclose exposure won't disclose it. A questionnaire is not a forensic instrument.

Questionnaires cannot verify documents

When a vendor uploads a SOC 2 Type II report, a typical TPRM workflow stops at "received". But the SOC 2's Trust Services Criteria scope might exclude Confidentiality. The reporting period might be 9 months old with no surveillance audit. The CUEC list might require your team to implement controls you've never been told about.

The same applies to ISO 27001 (scope statement may carve out the AI subsystem), BSI C5 (Type 1 vs Type 2 distinction matters), DPAs (sub-processor consent mechanism may be silent or non-existent), SBOMs (may list components but not flag known vulnerabilities).

Verification means reading the documents — not just receiving them.

Questionnaires cannot test the actual system

If the vendor's AI system is a chatbot, the only way to know whether its safety guardrails work is to test them. Adversarial probes — prompt injection, jailbreak, data leakage, PII handling, multilingual edge cases — produce reproducible evidence of failures that no self-attestation can match.

This is why AI red-teaming has emerged as a third assessment layer alongside documentary review and security checks. It treats the vendor's AI like any other security control and tests whether it actually does what the marketing says.


5. A 13-dimension framework for AI vendor assessment

A complete TPRM assessment of an AI vendor under EU AI Act needs to cover thirteen risk dimensions:

# | Dimension | What gets verified
1 | Legal entity & beneficial ownership | Registry data, ownership chain, sanctions exposure of ultimate beneficial owners
2 | Data processing scope | DPA coverage under GDPR Art. 28, sub-processor consent mechanism, retention policies
3 | Security certifications | ISO 27001, SOC 2 (Type I vs II), BSI C5 (Type 1 vs 2), expiry, scope vs. claim
4 | Sub-processor chain | Named 4th-parties, concentration risk, jurisdictional spread
5 | Data residency | EU-only vs. cross-border, transfer mechanisms (SCCs), TIA where required
6 | Incident & breach history | Public disclosures, regulatory actions, settlement records, response times
7 | Sanctions / PEP / adverse media | OFAC, EU, UK, UN lists; named individuals; recent press coverage
8 | Business continuity | RTO/RPO commitments, DR test cadence, vendor substitutability
9 | Technical attack surface | DNS, TLS configuration, exposed assets, certificate hygiene
10 | AI use disclosure | Model type, training data origin, inference path, GPAI dependency, system card
11 | EU AI Act risk tier | Annex I, Annex III, Art. 50, minimal — with reasoning citing Article numbers
12 | Documentation completeness | DPA, AUP, sub-processor list, SBOM, DPIA where applicable, system card
13 | DORA / NIS2 / CSDDD applicability | Sector-specific obligation mapping

Each dimension produces a status (complete / partial / missing / contradictory) with cited evidence. A finding without a quotation, document reference, or probe ID is not a finding — it is an opinion.
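The evidence rule can be made mechanical. A minimal sketch, assuming a findings store of your own design; the class and field names are illustrative:

```python
from dataclasses import dataclass, field

# Illustrative record: encodes the rule above that a finding without cited
# evidence is an opinion. Names are this guide's own, not the Act's.
STATUSES = {"complete", "partial", "missing", "contradictory"}

@dataclass
class Finding:
    dimension: int                 # 1-13, per the framework table
    status: str                    # one of STATUSES
    evidence: list[str] = field(default_factory=list)  # quotes, doc refs, probe IDs

    def is_finding(self) -> bool:
        """A finding needs a valid status AND at least one evidence citation."""
        return self.status in STATUSES and bool(self.evidence)
```

An unevidenced entry fails validation and stays out of the report until someone cites a document, quotation, or probe ID.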


6. Documentary verification: what to actually check

The four documents that matter most for AI vendor TPRM:

Data Processing Agreement (DPA)

GDPR Art. 28 requires:

  - processing only on documented instructions from the controller
  - confidentiality commitments for persons authorised to process
  - security measures per Art. 32
  - prior authorisation (general or specific) for sub-processors
  - assistance with data subject rights and breach notification
  - deletion or return of data at contract end
  - audit and inspection rights

Read for: whether the AI use case is named in the subject matter; whether the sub-processor mechanism actually allows you to object; whether the DPA contemplates GPAI sub-processors specifically (most do not).

ISO 27001 certificate + Statement of Applicability

ISO 27001 certs come in many shapes. The certificate alone is meaningless — what matters is the scope statement and the SoA.

Read for:

  - whether the scope statement covers the legal entity, locations, and service you actually buy
  - SoA exclusions that carve out the AI subsystem or relevant Annex A controls
  - certificate validity dates and the accreditation status of the issuing body

SOC 2 Type II report

Read for:

  - which Trust Services Criteria are in scope (Security alone is common; Confidentiality and Availability often are not)
  - the reporting period and how stale it is
  - auditor-noted exceptions and their disposition
  - the CUEC list: complementary user entity controls your team is expected to implement

BSI C5 attestation

For DACH-specific deployments. C5:2020 is the current version. Read for:

  - Type 1 (design only) vs Type 2 (operating effectiveness over a period)
  - which criteria sections of the catalogue are covered
  - assumed customer-side controls, analogous to SOC 2 CUECs

SBOM (Software Bill of Materials)

For AI vendors, the SBOM should include:

  - software components and versions, with known-vulnerability status
  - ML frameworks and inference runtimes
  - model components and weights, where the format supports them
  - dataset references used in training or fine-tuning, where disclosed

CycloneDX and SPDX are the dominant formats. A vendor unable to produce an SBOM in 2026 is a procurement red flag regardless of AI Act status.
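As a sketch of what machine-readable verification looks like, here is a minimal read of a CycloneDX JSON SBOM. The sample document is invented; "machine-learning-model" is the component type CycloneDX 1.5 introduced for models:

```python
import json

# Invented sample SBOM in CycloneDX JSON shape, for illustration only.
sample = json.loads("""{
  "bomFormat": "CycloneDX",
  "components": [
    {"type": "library", "name": "torch", "version": "2.3.0"},
    {"type": "machine-learning-model", "name": "fraud-scorer", "version": "4.1"}
  ]
}""")

# Pull out declared model components; an AI vendor's SBOM with zero
# machine-learning-model entries is itself a question to ask.
ml_components = [c["name"] for c in sample.get("components", [])
                 if c.get("type") == "machine-learning-model"]
```

The same traversal works on a real vendor SBOM file: load, filter by component type, reconcile against what the vendor claims runs in production.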


7. AI red-teaming as procurement diligence

Red-teaming has migrated from research practice to standard procurement diligence. The work breaks down into eight categories:

Probe categories

Probe count by tier

A reasonable working baseline:

Each probe produces a pass / partial / fail record with reproduction steps. A failure with severity (Critical / High / Medium / Low) and category becomes a finding in the assessment report.
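A probe record along these lines could look as follows; the probe-ID format and field names are invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional

SEVERITIES = ("Critical", "High", "Medium", "Low")

# Hypothetical probe record; the categories echo those named in this guide.
@dataclass
class ProbeResult:
    probe_id: str            # e.g. "PI-014" (ID scheme invented for illustration)
    category: str            # prompt-injection, jailbreak, data-leakage, ...
    outcome: str             # "pass" | "partial" | "fail"
    severity: Optional[str] = None   # expected when outcome is not "pass"

    def to_finding(self):
        """Failures and partials become report findings; passes do not."""
        if self.outcome == "pass":
            return None
        return {"probe": self.probe_id, "category": self.category,
                "severity": self.severity, "reproducible": True}
```

Keeping the reproduction steps with the probe ID is what lets a vendor retest after remediation and lets an auditor replay the evidence.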

Authorization and ethics

Red-teaming a vendor's production system requires written authorization. PartnerScope's standard contract clause grants authorization for the scope of the assessment. Without it, probing crosses into unauthorized testing.


8. Conformity assessment — what flows down to deployers

Conformity assessment under the Act applies to providers, not deployers, but the documentary outputs flow to deployers and form the basis of your own compliance.

For Annex III high-risk systems, two routes exist:

  - internal control (self-assessment) under Annex VI, the default for most Annex III categories
  - notified-body assessment under Annex VII, required for remote biometric identification where harmonised standards have not been applied (Art. 43)

For your deployer assessment, the question is: does the vendor produce an Annex IV technical file?

Required sections include:

  1. General description of the system
  2. Detailed description of elements and development process
  3. Information on monitoring, functioning, control
  4. Risk management system (Art. 9)
  5. Lifecycle changes
  6. Harmonised standards applied
  7. EU declaration of conformity
  8. Post-market monitoring plan

A vendor that cannot produce or summarize this file for an Annex III high-risk system is not Act-compliant. Procurement should treat this as gating for high-risk deployments.
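A simple completeness gate over the Annex IV sections might look like this; the section labels paraphrase the numbered list above, and the gating policy is this guide's suggestion, not a requirement of the Act:

```python
# Annex IV technical-file sections, paraphrased from the list above.
ANNEX_IV_SECTIONS = [
    "general description",
    "elements and development process",
    "monitoring, functioning, control",
    "risk management system",
    "lifecycle changes",
    "harmonised standards applied",
    "EU declaration of conformity",
    "post-market monitoring plan",
]

def gate_high_risk_vendor(provided_sections: set[str]) -> bool:
    """Procurement gate: True only if every Annex IV section is evidenced."""
    return all(s in provided_sections for s in ANNEX_IV_SECTIONS)
```

In practice the set would be built from the documentary-verification step, one entry per section the vendor can actually evidence.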


9. Timeline — when each obligation bites

Date | What kicks in
1 Aug 2024 | Act enters into force
2 Feb 2025 | Prohibitions (Art. 5) and AI literacy (Art. 4)
2 Aug 2025 | GPAI provider obligations, governance bodies, penalties (Art. 99)
2 Aug 2026 | Most other obligations, including Annex III high-risk for deployers (Art. 26, 27)
2 Aug 2027 | Annex I high-risk integration (Art. 6(1))
2 Aug 2030 | Legacy high-risk systems intended for use by public authorities must comply (Art. 111(2))

The practical implication for deployers: high-risk Annex III obligations under Art. 26 are live 2 August 2026. If a high-risk vendor system is in your stack on that date and you cannot demonstrate Art. 26 compliance, you are non-compliant.
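The milestones in the table translate directly into a countdown check. The dates below come from the table above; the code itself is an illustrative sketch:

```python
from datetime import date

# Phase-in milestones from the timeline table above.
MILESTONES = {
    date(2025, 2, 2): "Prohibitions (Art. 5), AI literacy (Art. 4)",
    date(2025, 8, 2): "GPAI provider obligations, penalties",
    date(2026, 8, 2): "Annex III high-risk deployer obligations (Art. 26, 27)",
    date(2027, 8, 2): "Annex I high-risk integration",
}

def upcoming(today: date) -> list[str]:
    """Milestones still ahead of a given date, in chronological order."""
    return [label for d, label in sorted(MILESTONES.items()) if d > today]
```

Wiring this into vendor-review scheduling makes "how long until this obligation bites" a query rather than a tribal-knowledge question.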


10. Intersections with DORA, NIS2, and CSDDD

EU AI Act third-party risk assessment does not operate in isolation. Three other regulations create overlapping vendor-risk obligations:

DORA (Regulation 2022/2554) — financial entities

DORA Articles 28–30 require financial entities to manage ICT third-party risk, including a mandated register of all third-party arrangements. For AI vendors that are critical ICT third-party providers (CTPPs), oversight by the European Supervisory Authorities applies directly.

The 13-dimension assessment maps directly to DORA Article 28 requirements: concentration risk (dimension 4), exit strategy (dimension 8), incident chain (dimensions 6 and 13), sub-outsourcing (dimensions 2 and 4).

NIS2 (Directive 2022/2555)

NIS2 Article 21 requires essential and important entities to implement measures including supply chain security (21(2)(d)). Vendor AI sits within supply chain risk where the AI provides a service to the entity.

CSDDD (Directive 2024/1760)

CSDDD applies to large companies and requires due diligence across business chains for human rights and environmental impact. AI vendors whose use could affect protected rights (employment AI, biometric AI in Annex III categories) fall within the CSDDD assessment when the deployer is in scope.


11. Implementation: a 30/60/90-day plan for risk teams

First 30 days — inventory

  - Pull every vendor contract with an AI component or AI-enabled feature
  - Record, per vendor: role under the Act, claimed risk tier, GPAI dependency
  - Flag anything touching Annex III categories or Art. 50 transparency duties

Days 30–60 — assessment

  - Run the 13-dimension assessment on flagged vendors, highest exposure first
  - Request and verify the core documents (DPA, ISO 27001 / SOC 2 / C5, sub-processor list, SBOM)
  - Red-team customer-facing AI systems where authorization permits

Days 60–90 — remediation + governance

  - Issue remediation requests with deadlines anchored to the August 2026 date
  - Amend contracts to mandate the Art. 13 / 26(3) information flow
  - Stand up a recurring review cadence and an escalation path to the risk committee


12. Frequently asked questions

Does the EU AI Act apply if my vendor is in the US?

Yes, in two ways. Under Art. 2(1)(c), the Act applies where output produced by the AI system is used in the EU. So a US vendor whose AI generates output consumed in the EU triggers Act obligations. Additionally, if the US vendor places the system on the EU market (sells or licenses it for use in the EU), Art. 2(1)(a) applies directly.

Can I rely on my vendor's self-assessment of AI Act compliance?

In principle yes; in practice no. Self-assessment is a starting point. As deployer, you carry obligations under Art. 26 that depend on the system's actual classification. If the vendor's self-assessment is wrong, your downstream obligations are misallocated, which puts you out of compliance even if you acted in good faith. Independent assessment and documentary verification are the standard of care.

What if my vendor uses ChatGPT, Claude, or Gemini under the hood?

Then the underlying model is a GPAI, and you need to know which one. The provider of the GPAI (OpenAI, Anthropic, Google) has obligations to your vendor under Art. 53, including technical documentation and copyright training-data summary. Your vendor must pass through the relevant information to you. If the GPAI is systemic-risk (currently anything trained with > 10²⁵ FLOPs), Art. 55 imposes additional obligations including model evaluation and incident reporting.

Does my SOC 2 already cover AI Act?

No. SOC 2 is an information security and trust criteria attestation. It does not classify systems under EU AI Act, does not evaluate Annex III applicability, and does not verify GPAI dependency or AI red-teaming. A SOC 2 Type II is a useful input to dimension 3 of your AI vendor assessment, but not a substitute.

How does PartnerScope's 13-dimension framework relate to ISO/IEC 42001?

ISO/IEC 42001 (AI Management System) is a management-system standard for organizations developing or using AI. It is complementary, not overlapping. ISO 42001 governs your internal AI governance posture; the 13-dimension assessment governs your view of vendor AI exposure. Many DACH organizations are pursuing ISO 42001 certification while running parallel AI vendor assessment.

What about AI red-teaming — is it required by the Act?

Not directly for deployers. Art. 55 requires systemic-risk GPAI providers to conduct adversarial testing. For deployers, red-teaming is best-practice diligence, not a regulatory requirement — but it produces evidence that your Art. 26(2) human oversight assignment is informed by actual system behavior, not vendor claims.

Where do I report a serious incident?

Providers report under Art. 73. Deployers must inform the provider, and where applicable the importer/distributor, immediately, and inform market surveillance authorities (Art. 73(1) for providers, Art. 26(5) for deployers' role in the chain).

Do I need to register my vendor's AI in the EU database?

Providers of high-risk AI register their systems in the EU database (Art. 71). Deployers generally do not register themselves (public bodies deploying Annex III systems are an exception, Art. 49(3)), but the database is a verification tool: a high-risk Annex III system that your vendor cannot point to a database registration for is a flag.

What's the penalty exposure?

Up to €35M or 7% of global annual turnover for prohibited-system violations (Art. 99(3)). Up to €15M or 3% for most other obligations, including deployer obligations under Art. 26 (Art. 99(4)). Up to €7.5M or 1% for incorrect, incomplete, or misleading information to authorities (Art. 99(5)). Penalties scale with deployer size and conduct.

How does PartnerScope help with this?

PartnerScope is a third-party AI risk assessment platform built for the EU AI Act. Every assessment produces a 13-dimension scorecard with EU AI Act classification, GPAI scoping, AI red-teaming evidence, and remediation roadmap — mapped to your deployer obligations under Art. 26 and adjacent regulations (DORA, NIS2, CSDDD). Pricing is transparent: Starter at €99 per vendor, Pro at €299 with red-teaming, Enterprise at €4,900 per quarter for 15-vendor portfolios with continuous monitoring. Operated by EKM Global Consulting GmbH in Baden-Baden, Germany.


13. References

This guide draws on:

  - Regulation (EU) 2024/1689 (EU AI Act), in particular Art. 5, 6, 13–14, 25–27, 50–55, 71–73, 99, 111, and Annexes I, III, IV
  - Regulation (EU) 2016/679 (GDPR), Art. 28
  - Regulation (EU) 2022/2554 (DORA), Art. 28–30
  - Directive (EU) 2022/2555 (NIS2), Art. 21
  - Directive (EU) 2024/1760 (CSDDD)



CTA

Run a free 60-second EU AI Act Snapshot at partnerscope.eu — classifies your vendor's AI under Article 5, 6, Annex III, Art. 50, or minimal, plus GPAI status, before you finish reading this guide a second time.
