Internal vs. commercial use of Open Source AI models under the European AI Act
Context
A company plans to use Open Source AI models, adapted to its needs, both for internal administrative tasks and for commercial activities: the preparation of deliverables and the provision of services to clients.
The question is whether the AI Act applies in this context or not.
1.- Key premises
1.1.- “Deployer” and “Provider” in the AI Act: the thin (and shifting) red line
The AI Act classifies as a Deployer any company that uses an Open Source model—whether as a component of its own product, as an internal productivity tool, or as the engine of a service directed at third parties.
Moreover, the Deployer may also end up becoming a Provider when it substantially modifies the system or places it on the market under its own name (Art. 25), as we will see.
1.2.- Scope of the “Open Source” exemption
Article 2(12) provides that the AI Act shall not apply to AI components supplied under free and Open Source licences, unless:
• they are placed on the market or put into service as high-risk AI systems (Annex III), as prohibited practices (Art. 5) or as systems subject to the transparency obligations of Art. 50;
• the Open Source component is a general-purpose AI (GPAI) model with systemic risk.
But this exemption operates only for the provider of the component, not for the Deployer that integrates that component into a system and uses it, internally or externally, as described above.
The Deployer’s obligations arise from the effective use of the system and its risk classification, regardless of the licence under which the underlying model is distributed (Open Source or commercial).
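To make these premises a little more tangible, here is a deliberately simplified triage sketch in Python. The class, its field names and the mapping to blocks of obligations are illustrative assumptions of ours, not AI Act terminology, and they are no substitute for a case-by-case legal analysis.

# Illustrative-only triage of the scenarios discussed in this article.
# The fields and the mapping to obligation blocks are simplifications made
# for readability; prohibited practices (Art. 5) are left out of the sketch.

from dataclasses import dataclass

@dataclass
class UseCase:
    annex_iii_area: bool               # e.g. recruitment, worker management (Annex III)
    interacts_with_persons: bool       # chatbot-style interaction (Art. 50(1))
    generates_synthetic_content: bool  # text/image/audio/video outputs (Arts. 50(4)-(5))
    marketed_under_own_name: bool      # can trigger provider status (Art. 25(1)(a))
    substantially_modified: bool       # can trigger provider status (Art. 25(1)(b))

def roles_and_regime(use: UseCase) -> dict:
    """Rough mapping from a use case to the roles and obligation blocks discussed here."""
    roles = {"deployer"}  # professional use of the system already makes the company a Deployer
    if use.marketed_under_own_name or use.substantially_modified:
        roles.add("provider")  # Art. 25: the Deployer may become a Provider as well
    if use.annex_iii_area:
        regime = "high-risk obligations (Arts. 26-27)"
    elif use.interacts_with_persons or use.generates_synthetic_content:
        regime = "transparency obligations (Art. 50)"
    else:
        regime = "no specific AI Act obligations (verify case by case)"
    return {"roles": roles, "regime": regime}

# Example: internal recruitment screening with an adapted Open Source model
print(roles_and_regime(UseCase(True, False, False, False, False)))

Note that the licence of the model never appears in the sketch: what drives the outcome is how the system is used and whether the company steps into the Provider's shoes.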
2.- Internal use (administrative purposes, productivity)
In this scenario, the company deploys the Open Source model exclusively to optimise internal processes, such as: generating draft documents, summaries, data analysis, back-office task automation, email classification, assistance to departments (HR, finance, legal), etc.
That is, there is no direct interaction between the system and clients or external users.
However, the AI Act does not distinguish between internal and external contexts: the obligations attached to each risk classification apply in any case.
2.1. Internal use for “High-risk” purposes
If the AI system is used in any of the areas in Annex III—for example, to internally select candidates in recruitment processes (Annex III, point 4), assess employee performance, allocate work tasks, or make decisions affecting working conditions—the Deployer is subject to the high-risk obligations regime.
Accordingly, the Deployer’s main obligations are:
• Use in accordance with the Provider’s instructions (Art. 26(1)). The Deployer must use the system in accordance with the instructions for use provided by the provider, including the prescribed human oversight. In the case of Open Source models, this implies the company must internally document which provider instructions it has applied and, if it has adapted them, justify that adaptation.
• Effective human oversight (Art. 26(2)). The Deployer must ensure that the natural persons responsible for overseeing the system have the necessary competence, training and authority. And that they actually exercise it. Not easy at all.
• Monitoring and incident reporting (Art. 26(5)). When the Deployer detects that the system may present a risk, it must inform the provider without delay and, where appropriate, the competent authority. Serious incidents must be notified.
• Fundamental rights impact assessment (Art. 27). As with DPIAs under data protection law, it is not always mandatory, but it is always advisable.
• Retention of logs or automatic records (Art. 26(6)). The Deployer must retain the records automatically generated by the system for a minimum period of six months, unless sector-specific legislation provides otherwise (a minimal sketch of one way to do this follows this list).
• Data protection impact assessment (DPIA) (Art. 26(9)). Where the high-risk system processes personal data, the Deployer must use the information provided by the provider to comply with its obligation to carry out a data protection impact assessment.
• Information to affected persons (Arts. 26(11)–(12)). The Deployer must inform the affected persons that they are subject to the use of a high-risk AI system. In this internal-use scenario, these persons will most likely be employees and, in the employment context, this obligation is reinforced: workers and their representatives must be informed in advance (Art. 26(7)).
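As announced in the logging bullet above, here is a minimal sketch of one way a Deployer could retain the records automatically generated by the system for at least six months. The file layout, the field names and the 183-day approximation of "six months" are assumptions of ours for illustration only; the Regulation does not prescribe any particular technical set-up.

# Minimal sketch: append-only log of system events plus a retention check.
# Paths, field names and the 183-day cut-off are illustrative assumptions.

import json
import time
from datetime import datetime, timedelta, timezone
from pathlib import Path

LOG_DIR = Path("ai_system_logs")
RETENTION = timedelta(days=183)  # keep records at least ~6 months (Art. 26(6))

def log_event(system_id: str, event: dict) -> None:
    """Append one automatically generated record as a JSON line, one file per day."""
    LOG_DIR.mkdir(exist_ok=True)
    day_file = LOG_DIR / f"{system_id}-{datetime.now(timezone.utc):%Y-%m-%d}.jsonl"
    record = {"ts": datetime.now(timezone.utc).isoformat(), **event}
    with day_file.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record, ensure_ascii=False) + "\n")

def purge_expired() -> None:
    """Delete log files only once they are older than the minimum retention period."""
    cutoff = time.time() - RETENTION.total_seconds()
    for f in LOG_DIR.glob("*.jsonl"):
        if f.stat().st_mtime < cutoff:
            f.unlink()

log_event("internal-hr-screening", {"user": "hr-team", "action": "cv_summary_generated"})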
2.2. Internal use for Limited-risk purposes
If the system does not fall within any of the high-risk categories in Annex III but directly interacts with persons (e.g., an internal chatbot for employees) or generates synthetic content, the Deployer’s obligations focus on transparency (Art. 50):
• Notice of interaction with AI (Art. 50(1)). If the system interacts with natural persons, the Deployer must ensure that those persons are informed that they are interacting with an AI system, unless this is evident from the context.
• Identification of synthetic content (Arts. 50(4)–(5)). In relation to synthetic text, audio, image or video outputs, the Deployer must disclose that the content has been artificially generated or manipulated. In internal contexts, this obligation could materialise, for example, by marking drafts produced by the system for internal use as "generated by AI" (see the sketch after this list).
• Emotion recognition or biometric categorisation systems (Art. 50(3)). If such systems were deployed internally, the Deployer must inform the persons exposed. However, it is worth recalling here that certain uses of emotion recognition in the workplace are directly prohibited (Art. 5(1)(f)).
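As an illustration of the synthetic-content point, here is a minimal sketch of how internally circulated drafts could be stamped with an "AI-generated" notice. The wording of the notice and the generate_draft() placeholder are assumptions of ours; the AI Act does not impose a specific format for internal documents.

# Minimal sketch: label internal drafts produced with an Open Source model.
# The notice text and the generate_draft() helper are illustrative assumptions.

AI_NOTICE = "[Internal draft generated with an AI system - review before use]"

def generate_draft(prompt: str) -> str:
    """Placeholder for whatever local Open Source model the company runs."""
    return f"Draft answering: {prompt}"

def labelled_draft(prompt: str) -> str:
    """Return the model output with a visible AI-generation notice prepended."""
    return f"{AI_NOTICE}\n\n{generate_draft(prompt)}"

print(labelled_draft("Summary of Q3 supplier invoices"))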
Some internal uses (writing assistants with no interaction with third parties, non-biometric data analytics engines) may not even fall within the limited-risk category, remaining systems with no specific obligations under the AI Act.
But it would be better to have someone check that for you.
3.- External use (deliverables to clients / services to users)
In this case, the Open Source model powers products, services or deliverables aimed at clients, consumers or external users (SaaS platforms based on Open Source LLMs, automatically generated reports for clients, public-facing customer service chatbots, content generation for third parties, etc.).
Key detail: the Deployer may simultaneously acquire the status of Provider if:
a. it places the system on the market or puts it into service under its own name or brand (Art. 25(1)(a)), or
b. it substantially modifies a system already placed on the market (Art. 25(1)(b)), or
c. it modifies the intended purpose of a system already placed on the market in such a way that it becomes high-risk (Art. 25(1)(c)).
In such cases, it would have the full provider obligations in addition to those of the Deployer.
3.1. External use for high-risk purposes
Where the service or product offered to clients falls within one of the Annex III categories, the applicable regime is the most stringent under the Regulation. In addition to the obligations already detailed in Section 2.1 above (which apply in full), the Deployer in an external context must pay particular attention to:
• Enhanced fundamental rights impact assessment (Art. 27). When the system is aimed at users or affects third parties, the FRIA becomes more practically relevant. The specific risks for the groups of persons that may be affected must be assessed, as well as the human oversight measures, action plans in case of adverse outcomes, and the governance mechanism. The results must be notified to the market surveillance authority.
• Information to persons affected by the output (Art. 26(11)). Where the system’s decisions affect external natural persons (clients, applicants, users), the Deployer must inform them in a timely, concise and intelligible manner.
• Enhanced cooperation with the model provider (Art. 26(4)). In the Open Source ecosystem, where the provider’s support is often non-existent, the Deployer must proactively assume responsibility for verifying that the system functions as intended. If the Deployer has modified the model, it may have assumed provider obligations (Art. 25).
• Obligation to comply with the provider’s quality management system (Art. 26(1)). In practice, most Open Source models lack a formalised quality management system compliant with Article 17. The Deployer will have to implement its own compensatory controls or assume the position of provider if such controls do not exist.
• Registration in the EU database (Art. 49(3)). Certain Deployers of high-risk systems are required to register the use of the system in the EU database. This applies particularly to Deployers that are public authorities or private entities providing public services.
A Deployer that also assumes the status of Provider will have to deal with the quality management system, risk management, technical documentation, conformity assessment, CE marking, and the EU declaration of conformity.
3.2. External use for limited-risk purposes
This will probably be the most frequent scenario in August 2026, when all of this takes effect: customer service chatbots, content generators, summarisation or translation tools offered as a service, virtual assistants, etc.
The Deployer’s obligations focus on transparency (Art. 50):
• Information about interaction with AI (Art. 50(1)). Users/clients must know they are interacting with an AI system. The communication must be clear, timely (before or at the start of the interaction) and accessible.
• Labelling of synthetic content generated for clients (Arts. 50(4)–(5)). Any text, image, audio or video content generated by AI and delivered to third parties must be identified as artificially generated or manipulated. This has direct implications for companies that generate reports, analyses, creative texts, images or videos for clients using Open Source models.
• Deepfakes (Art. 50(4)). If the system generates or manipulates image, audio or video content that closely resembles real persons, objects, places or other entities or real events, and may appear authentic (deepfake), the Deployer has the specific obligation to inform that the content has been artificially generated or manipulated.
• Technical marking of content (Art. 50(2), provider obligation). Although technical marking (watermarking, metadata) is primarily an obligation of the model provider, the Deployer must ensure it does not remove or disable such markers. In the Open Source ecosystem, the model is likely not to incorporate these mechanisms, which may create a practical gap that the Deployer should remedy through its own measures (see the sketch below).
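To illustrate the last two bullets, here is a minimal sketch of how a deliverable generated with an Open Source model could be handed to a client with both a human-readable disclosure and machine-readable provenance, passing through (rather than stripping) any marking received from the model. The structure and field names are assumptions of ours, not a standardised marking scheme.

# Minimal sketch: package an AI-generated deliverable with disclosure and provenance.
# The dict structure, field names and disclosure wording are illustrative assumptions;
# real deployments should follow whatever marking the model provider implements (Art. 50(2)).

from datetime import datetime, timezone

def package_deliverable(content: str, model_name: str, upstream_marking: dict | None) -> dict:
    """Wrap generated content with a client-facing notice and provenance metadata."""
    return {
        "content": content,
        "disclosure": "This content was generated or manipulated with an AI system.",
        "provenance": {
            "model": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            # Do not remove or disable marking received from the provider (Art. 50(2));
            # pass it through untouched if the model supplies any.
            "upstream_marking": upstream_marking,
        },
    }

report = package_deliverable("Quarterly market analysis ...", "open-source-llm-x", None)
print(report["disclosure"])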
We will leave general-purpose AI (GPAI) models for another day.
Jorge García Herrero
Lawyer and Data Protection Officer


