AI Office Publishes Its First Code of Practice on AI‑Generated Content Transparency
Jan 26, 2026

The EU AI Office has published its first draft Code of Practice on the transparency of AI-generated content (the “Code”), providing voluntary guidelines for marking and labelling AI outputs (audio, image, video or text) so that users know when content is AI-generated or manipulated. The guidelines apply the obligations under Article 50 of the AI Act, which become enforceable in August 2026.
The Code offers practical, technical measures. Adherence is voluntary, but it gives providers and deployers of AI systems greater legal certainty and a reduced compliance burden.
Obligations of AI system providers
Marking
Providers of AI systems must ensure that outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated. AI-generated or manipulated content must carry an imperceptible watermark.
Marks on AI-generated content must be preserved and not altered where such content is used as an input and subsequently transformed.
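The Code does not prescribe a particular marking technique. Purely by way of illustration, the sketch below embeds a short machine-readable marker into an image using least-significant-bit (LSB) watermarking with the Pillow library; the magic prefix, payload and function names are our own assumptions, not part of the Code.

```python
# Illustrative sketch only: embeds a short machine-readable marker into an
# image by overwriting the least significant bit of the red channel.
# The MAGIC prefix and function names are hypothetical; the Code does not
# mandate any specific watermarking scheme.
from PIL import Image

MAGIC = b"AIGC"  # hypothetical 4-byte prefix identifying the mark

def embed_mark(src_path, dst_path, payload: bytes = b"") -> None:
    """Write MAGIC + payload into the red-channel LSBs, one bit per pixel."""
    img = Image.open(src_path).convert("RGB")
    pixels = img.load()
    bits = "".join(f"{byte:08b}" for byte in MAGIC + payload)
    if len(bits) > img.width * img.height:
        raise ValueError("image too small for the payload")
    for i, bit in enumerate(bits):
        x, y = i % img.width, i // img.width
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)
    img.save(dst_path, "PNG")  # lossless format, so the LSBs survive saving

def detect_mark(path) -> bool:
    """Return True if the image's red-channel LSBs start with MAGIC."""
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    bits = "".join(str(pixels[i % img.width, i // img.width][0] & 1)
                   for i in range(len(MAGIC) * 8))
    recovered = bytes(int(bits[j:j + 8], 2) for j in range(0, len(bits), 8))
    return recovered == MAGIC
```

A bare LSB mark like this is deliberately simplistic: it survives only lossless processing, which is why the Code's robustness criteria (discussed below) matter in practice.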
Detection
To make outputs detectable, providers of AI systems must enable detection by users and other third parties. They must provide, free of charge, an interface or a publicly available detector to verify whether content has been generated or manipulated by their AI systems. Providers should maintain these detection mechanisms throughout the system's and model's lifecycle.
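As an illustration of what a free-of-charge, publicly available detector could look like, the sketch below exposes the hypothetical detect_mark helper above through a minimal HTTP endpoint built on Python's standard library; the route, port and JSON fields are our own assumptions.

```python
# Illustrative sketch of a public verification endpoint: accepts an image
# uploaded via POST and reports whether the hypothetical LSB mark sketched
# above is present. Endpoint shape, port and JSON fields are assumptions.
import io
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class VerifyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        try:
            # detect_mark (sketched above) hands its argument to Image.open,
            # which accepts file-like objects as well as paths.
            found = detect_mark(io.BytesIO(body))
            status, reply = 200, {"ai_generated": found}
        except Exception:
            status, reply = 400, {"error": "unreadable image"}
        payload = json.dumps(reply).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

HTTPServer(("", 8000), VerifyHandler).serve_forever()
```

A user could then check any file with, for example, curl --data-binary @image.png http://localhost:8000/.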
Providers of AI systems are also invited to implement forensic detection mechanisms that do not depend on the presence of active AI marking.
The technical solutions adopted by providers must be effective and computationally efficient, low cost (allowing real-time application without degrading the quality of the generated content), interoperable, robust (marking techniques must withstand common alterations such as cropping, compression, changes in resolution or format conversion) and reliable.
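By way of sketch, a provider could check robustness by applying the alterations the Code names and re-running detection; the transformations and file names below are illustrative and reuse the hypothetical helpers sketched above.

```python
# Illustrative robustness check: apply alterations named by the Code
# (cropping, resizing, lossy compression) and see whether the mark survives.
# Reuses the hypothetical embed_mark/detect_mark helpers sketched above.
import io
from PIL import Image

def jpeg_roundtrip(im, quality=75):
    """Simulate lossy compression: encode to JPEG in memory, then decode."""
    buf = io.BytesIO()
    im.convert("RGB").save(buf, "JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf)

checks = {
    "crop":     lambda im: im.crop((10, 10, im.width - 10, im.height - 10)),
    "resize":   lambda im: im.resize((im.width // 2, im.height // 2)),
    "compress": jpeg_roundtrip,
}

embed_mark("original.png", "marked.png")
for name, transform in checks.items():
    transform(Image.open("marked.png")).save("altered.png", "PNG")
    verdict = "survived" if detect_mark("altered.png") else "destroyed"
    print(f"{name:>8}: {verdict}")
```

A plain LSB mark fails most of these checks, which is exactly the gap the Code's robustness criterion targets; schemes designed to survive such alterations typically mark in the frequency or latent domain rather than in individual pixels.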
Obligations of AI system deployers
Deployers' obligations complement the technical solutions implemented by providers, contributing to greater transparency along the AI value chain.
Deployers must disclose that content has been generated or manipulated by AI where their AI system either (i) generates or manipulates text published with the purpose of informing the public on matters of public interest, unless a human has reviewed the text and a natural or legal person assumes editorial responsibility for its publication, or (ii) generates or manipulates audio, image or video content (deepfakes).
Deployers will have to identify AI-generated and manipulated content by placing a common icon in a visible and consistent location. This icon has not yet been developed; until it is, deployers may support consistent disclosure with an interim icon composed of a two-letter acronym referring to artificial intelligence (for example, “AI”).
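Purely as an illustration of the interim approach, the sketch below stamps a two-letter “AI” badge in a fixed corner of an image with Pillow; the placement, size and styling are our own choices, not specifications from the Code.

```python
# Illustrative sketch: stamp an interim "AI" badge in the bottom-right
# corner of an image. Placement, size and label are hypothetical choices;
# the Code's common icon has not yet been defined.
from PIL import Image, ImageDraw

def stamp_icon(src_path: str, dst_path: str, label: str = "AI") -> None:
    img = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    pad = 8
    left, top, right, bottom = draw.textbbox((0, 0), label)  # default font
    w, h = right - left, bottom - top
    x, y = img.width - w - 2 * pad, img.height - h - 2 * pad
    # A dark box behind white text keeps the badge legible on any background.
    draw.rectangle((x, y, x + w + 2 * pad, y + h + 2 * pad), fill=(0, 0, 0))
    draw.text((x + pad, y + pad), label, fill=(255, 255, 255))
    img.save(dst_path)

stamp_icon("photo.png", "photo_labelled.png")
```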
Specific measures for deepfake disclosure
For real-time deepfake video, deployers will display the icon in a non-intrusive way, consistently throughout the exposure where feasible. They will also insert a disclaimer at the beginning of the exposure explaining that the displayed content includes a deepfake.
For non-real-time deepfake video, deployers may place a disclaimer at the beginning of the exposure or place the icon consistently throughout the exposure in an appropriate fixed place.
Where deepfake content forms part of an evidently artistic, creative, satirical or fictional work or programme, deployers may display the icon for at least five seconds. For video forming part of such works or programmes, the icon must appear for a “sufficient” time, a notion the Code does not define further.
For deepfake images, signatories will place the common icon consistently in a fixed place at every exposure.
Audio deepfakes will include a short audible disclaimer, in plain and simple natural language, disclosing the artificial origin of the audio (only at the beginning of the content where it is shorter than 30 seconds, and repeated for longer formats). For deepfake content that forms part of an evidently artistic work, a non-intrusive audible disclaimer should be inserted at the latest at the time of first exposure, without the need to be repeated.
Specific measures for AI-generated or manipulated text
For AI-generated or manipulated text published with the purpose of informing the public on matters of public interest, the icon must be displayed in a fixed, clear and distinguishable position. That position could include, but is not limited to, the top of the text, beside the text, the colophon or the space after the closing sentence.
Conclusion
Although it is not mandatory, the Code contains useful but burdensome obligations for providers and deployers. It provides that the labelling process should not rely on automation alone but should also be supported by appropriate human oversight, which will add complexity for providers and deployers.
In addition, both providers and deployers must draw up, implement and keep up to date a compliance framework outlining their marking commitments, which must be documented and shared with competent market surveillance authorities upon request. They must also provide appropriate training to personnel on their obligations relating to the identification of AI content.
Where applicable, providers and deployers will ensure that the results of the detection mechanisms and, where relevant, user interfaces are accessible to persons with disabilities, in compliance with accessibility requirements under European Union law.
Our BCLP team can assist you in implementing these measures, whether you are a provider or a deployer of AI systems.
Related Capabilities
- Data Privacy & Security
- General Data Protection Regulation