A provisional deal on EU rules on Artificial Intelligence, agreed Thursday by EU co-legislators, makes it easier for providers to comply with the AI Act, and also bans ‘nudifiers’ and AI-assisted creation of child sexual abuse material.

The law postpones the application of certain parts of the AI Act to ensure that the standards and support measures needed to clarify the rules are in place.
Following the agreement, obligations on high-risk AI systems will apply:
- From 2 December 2027 for AI systems with a high-risk use case (including those involving biometrics, and those used in critical infrastructure, education, employment, law enforcement, and border management)
- From 2 August 2028 for AI systems used as safety components and covered by EU sectoral legislation on safety and market surveillance
The law also delays the application of watermarking obligations on AI-generated content until 2 December 2026 (instead of 2 February 2027 in the Commission proposal). Watermarking techniques allow for the detection and tracing of AI-generated content.
The EU Parliament and Council also agreed to ban AI systems that create child sexual abuse material, or that depict the intimate parts of an identifiable person, or depict that person engaged in sexually explicit activities, without that person’s consent.
The prohibition applies to:
- placing AI systems on the EU market with the purpose of creating such content;
- placing them on the EU market without reasonable safety measures to prevent such creation;
- deployers using these systems for the purpose of creating such content.
The content in question can be images, video or audio. Companies will have until 2 December 2026 to bring their systems into line.
The following changes to the AI Act were also agreed:
- Removing overlapping requirements on AI for machinery products by clarifying that such products need only comply with sectoral safety rules (instead of both the AI Act and sectoral rules), with safeguards that ensure an equivalent level of health and safety;
- Narrowing down what qualifies as a “safety component”, meaning that products with AI functions that only assist users or optimise performance will not automatically face high-risk obligations if their failure or malfunction does not create health or safety risks;
- Allowing personal data to be processed where strictly necessary to detect and correct biases, with proper safeguards, in both high-risk and non-high-risk AI systems;
- Extending SME exemptions from certain rules to small mid-cap enterprises (SMCs), to support their growth;
- Streamlining enforcement of rules on certain general-purpose AI systems within the EU’s AI Office.
The provisional agreement now needs to be formally adopted by both Parliament and Council before it can enter into force. The co-legislators intend to adopt it before 2 August 2026, the start date for the current rules on high-risk systems.