Legal Framework

GDPR and AI: Can Businesses Send Personal Data to AI Tools?

Legal analysis of Art. 6, data processing agreements, third-country transfers and the EU AI Act

Using AI tools such as ChatGPT, Copilot or DeepL in business raises fundamental data protection questions. May personal data be transmitted to these services? Which legal basis applies? And what does the new EU AI Act mean in practice?

This article systematically analyses the legal framework — from the legal basis under Art. 6 GDPR to the question of data processing agreements through to third-country transfers and the positions of data protection authorities. It concludes with a practical recommendation for organisations that want to use AI tools in a GDPR-compliant manner.

Note: This article does not constitute legal advice. It provides guidance based on the current legal position and statements from data protection authorities. For a binding assessment, consult your Data Protection Officer or a specialist data protection lawyer.

Legal Basis: Art. 6 GDPR

On what basis may personal data be transmitted to AI tools?

Any processing of personal data requires a legal basis under Art. 6(1) GDPR. For AI tool usage, three legal bases are primarily relevant:

Consent (Art. 6(1)(a))

The data subject may consent to the processing. In practice, however, this is rarely feasible for business data: who obtains consent from every person named in a contract before it is analysed by an AI? Moreover, consent can be withdrawn at any time, which makes it an unstable basis for ongoing AI usage.

Performance of a Contract (Art. 6(1)(b))

Processing is lawful if it is necessary for the performance of a contract. This rarely applies to AI tools: their use is seldom necessary to perform the contract itself, and the persons named in the processed documents are usually not parties to that contract.

Legitimate Interest (Art. 6(1)(f))

The most commonly relied-upon legal basis in practice. The organisation has a legitimate interest in efficient document processing. However, this must be balanced against the interests of the data subjects. The more sensitive the data, the greater the weight of the data subjects' protective interests.

Practical tip: Pseudonymization significantly strengthens the balancing test. When only pseudonyms are transmitted to the AI, the risk to data subjects is considerably lower — the organisation's legitimate interest will typically prevail.
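To make this concrete, here is a minimal Python sketch of rule-based pseudonymization before text is sent to an AI tool. The function names, the placeholder format and the simplified e-mail pattern are illustrative assumptions; production tools detect names, IBANs, addresses and other identifiers with trained NER models rather than a single regular expression.

    import re

    def pseudonymize(text: str) -> tuple[str, dict[str, str]]:
        """Replace e-mail addresses with stable placeholders.

        Returns the masked text plus the mapping needed for local
        re-identification; the mapping never leaves the premises.
        """
        mapping: dict[str, str] = {}   # placeholder -> original value
        seen: dict[str, str] = {}      # original value -> placeholder

        def replace(match: re.Match) -> str:
            value = match.group(0)
            if value not in seen:
                seen[value] = f"[EMAIL_{len(seen) + 1}]"
                mapping[seen[value]] = value
            return seen[value]

        # Deliberately simplified e-mail pattern, for illustration only.
        masked = re.sub(r"[\w.+-]+@[\w.-]+\w", replace, text)
        return masked, mapping

    masked, mapping = pseudonymize(
        "Contract review for anna.mueller@example.com is due Friday."
    )
    print(masked)  # Contract review for [EMAIL_1] is due Friday.

Only the masked text goes to the AI provider; the mapping stays with the organisation, which is what tips the balancing test under Art. 6(1)(f) in its favour.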

Data Processing: Art. 28 GDPR

Is OpenAI a data processor — and what does that mean?

When an organisation transmits personal data to an external service provider that processes it on the organisation's behalf, the provider acts as a processor under Art. 28 GDPR. This requires a Data Processing Agreement (DPA).

The Key Question: Processor or Independent Controller?

The classification of AI providers such as OpenAI is legally contested:

  • In favour of processor status: The AI provider processes data on behalf of the user, pursues no purposes of its own and acts only on documented instructions.
  • Against processor status: With certain products, OpenAI uses input data for model training, an independent purpose that goes beyond processing on the customer's instructions.

DPAs with Enterprise Products

ChatGPT Enterprise, Microsoft Copilot (covered by the Microsoft 365 DPA) and Google Gemini Enterprise offer Data Processing Agreements. Typical provisions include:

  • No model training with customer data
  • Data processing only within contractually agreed purposes
  • Technical and organisational measures for data protection
  • Sub-processors and their locations

Residual Risk Remains

Even with a DPA, risks remain: data leaves the EU, is processed on US servers and is subject to US law. Pseudonymization substantially mitigates this risk: the AI processes only pseudonyms, and even a data breach on the provider's side would not expose the real data, because the re-identification mapping never leaves the organisation.
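Continuing the sketch from the legitimate-interest section (same illustrative assumptions), the locally stored mapping lets the organisation re-identify the AI's answer entirely on-premises:

    def depseudonymize(ai_output: str, mapping: dict[str, str]) -> str:
        """Restore original values in the AI's answer; this step runs
        locally, so the provider never holds the re-identification key."""
        for placeholder, original in mapping.items():
            ai_output = ai_output.replace(placeholder, original)
        return ai_output

    answer = "Send the signed copy to [EMAIL_1] by Friday."
    print(depseudonymize(answer, {"[EMAIL_1]": "anna.mueller@example.com"}))
    # Send the signed copy to anna.mueller@example.com by Friday.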

Third-Country Transfers: Art. 44-49 GDPR

The challenge of data transfers to the US after Schrems II

Most AI tools are operated by US companies. The transfer of personal data to the US is subject to strict rules:

EU-US Data Privacy Framework

Since July 2023, the EU Commission's adequacy decision enables data transfers to certified US companies. OpenAI and Microsoft are certified. However:

  • The adequacy decision could be invalidated again, as its predecessors Safe Harbor and Privacy Shield were in the Schrems I and Schrems II judgments
  • US intelligence agencies may access data under certain circumstances (FISA Section 702)
  • For particularly sensitive data (health, financial, professional secrets), additional safeguards are advisable

Positions of Data Protection Authorities

European data protection authorities have commented extensively on AI usage:

  • Italy (Garante, 2023): Temporary ChatGPT ban due to lack of legal basis, insufficient transparency and missing age verification. The ban was lifted after OpenAI made improvements.
  • EDPB ChatGPT Taskforce (2024): The European Data Protection Board's taskforce published a first report on common positions of the supervisory authorities and is working towards a harmonised application of the GDPR to generative AI services.
  • Hamburg DPA (2023): Warning against uncontrolled ChatGPT usage in companies and authorities. Recommendation: anonymize or pseudonymize data before input.
  • French CNIL (2024): Publication of practical guides for GDPR-compliant use of generative AI.

EU AI Act: The New Regulation

What the EU AI Act means for processing personal data with AI

The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024 and will become applicable in phases through to 2027. It supplements the GDPR with specific requirements for AI systems:

Risk Classification

  • Unacceptable risk: Prohibited (e.g. social scoring, real-time remote biometric identification in publicly accessible spaces)
  • High risk: Strict requirements on transparency, documentation, data quality
  • Limited risk: Transparency obligations (e.g. labelling AI-generated content)
  • Minimal risk: No special requirements

General-Purpose AI (GPAI)

The models behind ChatGPT, Copilot and Claude fall under the provisions for general-purpose AI (GPAI) models. Their providers must meet transparency requirements, provide technical documentation and, for models posing systemic risk, implement additional safety measures.

Data Protection Remains Mandatory

The AI Act does not replace the GDPR but supplements it: the GDPR continues to apply in full whenever personal data is processed. The AI Act itself points to technical protective measures such as pseudonymization and anonymization, for example as safeguards where special categories of data are processed for bias detection (Art. 10(5) AI Act).

Practical Recommendation: When Is AI Usage Permitted?

Clear rules for GDPR-compliant deployment of AI tools

AI usage is generally permissible when:

  • A DPA with the AI provider is in place (enterprise contract)
  • A legal basis is established, typically legitimate interest backed by a documented balancing test (Art. 6(1)(f))
  • Third-country transfer requirements are met
  • Data is pseudonymized or anonymized before transmission
  • A Data Protection Impact Assessment (DPIA) has been conducted
  • Employees are trained on risks and correct usage

AI usage is critical or impermissible when:

  • Special category data (Art. 9 GDPR: health, religion, sexual orientation) is processed without additional safeguards
  • Data of minors is involved
  • No DPA exists and free versions are used
  • Professional secrets (legal, tax, medical privilege) are affected
  • No DPIA has been carried out

Core recommendation: Pseudonymization is the most pragmatic solution for most use cases. It significantly reduces data protection risk, strengthens the balancing test under Art. 6(1)(f) and enables productive use of AI tools without exposing personal data. Docuflair Mask automates this process entirely on-premises.

Use AI Tools in a GDPR-Compliant Way

With Docuflair Mask, you automatically pseudonymize documents before they are sent to AI tools. On-premises, GDPR-compliant, and ready to see live in just 15 minutes.

Frequently Asked Questions

Answers to the most important questions about GDPR and AI

Is OpenAI a data processor under Art. 28 GDPR?

The classification is disputed. OpenAI processes data partly for its own purposes (e.g. model training), which argues against pure processor status. ChatGPT Enterprise includes a Data Processing Agreement, but responsibility for GDPR compliance still lies with the organisation using the tool.

Can I send personal data to US-based AI tools?

Since the EU-US Data Privacy Framework adequacy decision of July 2023, data transfers to certified US companies have been possible, and OpenAI is certified. Residual risks remain, however, as US authorities may access data under certain circumstances. Pseudonymization minimises this risk because only pseudonyms, not real personal data, are transferred.

What does the EU AI Act say about processing personal data?

The EU AI Act classifies AI systems by risk level. For high-risk AI systems, strict requirements apply regarding transparency, documentation and data quality. Processing personal data must additionally meet GDPR requirements. Pseudonymization is recognised as an effective protective measure.

When is AI usage with personal data permitted?

AI usage is permitted when a legal basis exists (e.g. consent, legitimate interest), a DPA with the provider is in place, third-country transfer requirements are met and suitable technical safeguards such as pseudonymization are implemented.
