The Digital Omnibus on AI: recommendations from the EDPB and EDPS

1 INTRODUCTION

On 20 January 2026, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) published their Joint Opinion 1/2026 regarding the European Commission's proposal to amend the AI Act, commonly referred to as the "Digital Omnibus on AI" (the "Proposal").

The Proposal, issued in November 2025, aims to simplify the implementation of the obligations imposed by the AI Act and reduce administrative burdens for stakeholders. To this end, the Proposal introduces significant amendments relating to several topics, including facilitating compliance with data protection legislation and delaying the implementation timeline of high-risk AI rules.

While the EDPB and EDPS generally support the European Commission’s efforts to address implementation challenges of the harmonised AI rules, they warn against reducing the existing protection provided by those rules without due consideration of individuals' fundamental rights, in particular the right to protection of personal data. Therefore, the EDPB and EDPS have provided a number of recommendations to the EU legislator with regard to those amendments that are most relevant to data protection.

Outlined below are the principal issues raised by the EDPB and EDPS on the Proposal, along with their corresponding recommendations.

2 PROCESSING OF SPECIAL CATEGORIES OF PERSONAL DATA FOR BIAS DETECTION AND CORRECTION

2.1    Current rules vs. proposed changes

Processing special categories of personal data (e.g., health data, biometric data, or data revealing ethnic or religious background) is in principle prohibited under EU data protection law. By way of exception, the AI Act allows providers of high-risk AI systems to process such sensitive personal data when strictly necessary for the purpose of detecting and correcting bias. 

The Proposal would broaden this exception to all AI systems and models, and would cover not only providers but also deployers (i.e., companies using AI systems). Importantly, the Proposal retains several safeguards: apart from the conditions already provided for under EU data protection law, providers and deployers of AI systems must demonstrate that alternative data are insufficient to fulfil the same objective, implement robust security measures, restrict data access, and delete the data once the bias has been corrected.

2.2    What the authorities say

The EDPB and EDPS are, in principle, supportive of expanding the scope of this exception. They acknowledge that biased AI can adversely affect both data subjects and society as a whole, and that not all AI systems capable of causing harm are formally classified as "high-risk".

However, the EDPB and EDPS express the following concerns:

  • there is a risk of abuse when use cases are not narrowly defined;
  • the Proposal weakens the legal standard from "strictly necessary", which currently applies to high-risk AI systems, to "necessary";
  • the Proposal lacks concrete guidance for the envisaged extension of scope of the exception; and
  • the language used creates ambiguity regarding its relationship with existing rules for processing special categories of personal data.

Therefore, the EDPB and EDPS recommend:

  • clearly defining the circumstances under which the exception applies, particularly for non-high-risk AI, limiting it to scenarios where the risk of adverse effects caused by such bias is sufficiently serious;
  • (re)instating the "strictly necessary" standard for all providers and deployers of AI systems and models;
  • including clear examples indicating when non-high-risk AI systems may qualify; and
  • revising ambiguous wording to ensure legal certainty.

3 REGISTRATION AND DOCUMENTATION REQUIREMENTS

3.1    Current rules vs. proposed changes

Under the AI Act, certain AI systems are by default considered high-risk, unless the AI system in question does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. Providers who determine that their AI systems, although classified as high-risk by default, do not pose such a risk must still register these systems in the public EU database for high-risk AI systems before they are placed on the market or put into service.

The Proposal would eliminate this registration obligation, instead requiring providers only to document their reasoning internally and to provide it to authorities upon request.

Additionally, regulatory relief currently available to small and medium-sized enterprises (SMEs) would be extended to "small mid-caps" (SMCs), slightly larger entities.

3.2    What the authorities say

Although supportive of reducing administrative burdens, the EDPB and EDPS insist on retaining the registration requirement for the sake of transparency and accountability. Their reasoning includes:

  • removing registration could diminish the accountability of AI system providers and potentially encourage unjustified exemption claims;
  • public registration provides visibility to all stakeholders, including potential users, about the risk assessment of such AI systems;
  • registration enables authorities to act proactively in response to emerging risks; and
  • the administrative savings that would arise from the proposed modification are minimal, estimated at only EUR 148,500 per year for all companies in scope.

Regarding the extension of regulatory relief to larger companies, the authorities note that the possible harm posed by high-risk AI systems placed on the market does not correlate with company size or headcount, given the scalable nature of AI systems.

4 AI REGULATORY SANDBOXES AT EU LEVEL

4.1    Current rules vs. proposed changes

The AI Act requires the creation of regulatory sandboxes at national level: controlled environments for developing and testing AI systems under regulatory oversight. National competent authorities must ensure that, to the extent the innovative AI systems involve the processing of personal data, the national data protection authorities participate in the operation and supervision of the AI regulatory sandbox.

The Proposal empowers the AI Office to establish EU-wide regulatory sandboxes for specific AI systems under its supervision. These regulatory sandboxes at EU level would complement national sandboxes and give priority access to SMEs. 

4.2    What the authorities say

EU-level sandboxes are welcomed as a way to foster innovation. However, the authorities identify several shortcomings:

  • national sandboxes must involve data protection authorities, but this is not mandated for EU-level sandboxes, even though they may involve processing of personal data;
  • legal uncertainty exists regarding which data protection authority is competent for EU-level sandboxes and how this aligns with the existing cooperation mechanism provided under the GDPR.

The EDPB and EDPS recommend:

  • ensuring that the competent national data protection authorities are associated with the operation of EU-level sandboxes and involved in the supervision and enforcement of the corresponding data processing;
  • clarifying the competence of national data protection authorities and its interplay with the GDPR cooperation mechanism;
  • granting the EDPB an advisory role to ensure a consistent approach on data protection, specifically when several data protection authorities are concerned by the AI system developed in the EU-level sandbox; and
  • granting the EDPB observer status at the European AI Board to ensure continuous involvement in data protection matters.

5 SUPERVISION AND ENFORCEMENT BY THE AI OFFICE

5.1    Current rules vs. proposed changes

Currently, the AI Office is exclusively competent to monitor and supervise compliance with the AI Act of AI systems that are based on a general-purpose AI model, where the same provider develops both the model and the system.

The Proposal extends this exclusive oversight to AI systems that constitute or are integrated into very large online platforms or very large online search engines. When exercising its oversight tasks, the AI Office is required to actively cooperate with other authorities involved in the application of the AI Act.

5.2    What the authorities say

While acknowledging the benefits of centralised oversight, the authorities are concerned that the active cooperation requirement may not be sufficient, in that national authorities could be prevented from acting if the AI Office does not intervene.

According to the EDPB and EDPS:

  • the AI Office should coordinate closely with national data protection authorities where the relevant AI systems present privacy and data protection risks;
  • the AI Act should explicitly define which general-purpose AI models trigger the AI Office's exclusive oversight competence, so as to ensure effective supervision of all AI systems; and
  • the AI Act should clearly state that the AI Office will not oversee AI systems placed on the market, put into service, or used by EU institutions, which fall under the supervision of the EDPS.

6 COOPERATION BETWEEN FUNDAMENTAL RIGHTS AND MARKET SURVEILLANCE AUTHORITIES

6.1    Current rules vs. proposed changes

The AI Act is without prejudice to the competences, tasks, powers and independence of relevant national public authorities or bodies which supervise the application of Union law protecting fundamental rights, including equality bodies and data protection authorities (together, FRABs). Currently, where necessary for their mandate, those FRABs can request and access any documentation that is created or maintained for the purpose of AI Act compliance directly from AI providers and deployers.

The Proposal would channel these requests through market surveillance authorities (MSAs), establishing a centralised contact point. The proposed amendments aim to clarify cooperation between FRABs and MSAs.

6.2    What the authorities say

The authorities support efforts to streamline cooperation and welcome the creation of a central contact point. However, they caution that routing all information requests through MSAs may lead to inefficiencies. In addition, the Proposal lacks a detailed procedure for information exchange, especially in cases of cross-border cooperation.

Therefore, the EDPB and EDPS recommend:

  • specifying that MSAs act solely as administrative intermediaries, without assessing the validity of information requests; 
  • safeguarding the independence and existing powers of data protection authorities to obtain information directly for the purpose of monitoring compliance with data protection law;
  • requiring MSAs to provide the requested information without undue delay; and
  • clarifying the relationship with current cross-border cooperation frameworks for AI systems.

7 AI LITERACY OBLIGATIONS

7.1    Current rules vs. proposed changes

The AI Act requires AI providers and deployers to ensure, through appropriate measures, that their staff and other relevant persons possess an adequate understanding of AI concepts, benefits, and risks.

The Proposal would transform the current obligation for providers and deployers of AI systems to take AI literacy measures into an obligation for the European Commission and Member States to encourage providers and deployers to do so. 

7.2    What the authorities say

The authorities recommend maintaining the requirement as is. Downgrading it to a mere encouragement would undermine the provision's intended effect, which is to ensure appropriate knowledge and skills throughout the AI lifecycle in order to protect fundamental rights, including the right to data protection, and support compliance with AI rules, including the provisions on processing of personal data.

An encouragement role for the European Commission and Member States could be introduced, but it should apply only in parallel with providers' and deployers' AI literacy obligation, not replace it.

8 IMPLEMENTATION TIMELINE OF HIGH-RISK RULES

8.1    Current rules vs. proposed changes

Currently, high-risk AI obligations are set to take effect on 2 August 2026 for AI systems in areas such as biometrics, law enforcement, and employment, and on 2 August 2027 for AI systems embedded in products regulated by EU law, such as medical devices. However, high-risk AI systems already placed on the EU market before 2 August 2026 are largely excluded from the scope of the AI Act unless and until they undergo a significant change in their design.

First, the Proposal would postpone these dates until, respectively, six (6) or twelve (12) months after the European Commission has confirmed that adequate measures for compliance are in place, but no later than, respectively, 2 December 2027 or 2 August 2028. Second, the Proposal would also extend the “grandfathering” period to 2 December 2027, permitting more legacy AI systems to avoid the requirements of the AI Act unless they undergo substantial changes.

8.2    What the authorities say

Certain implementation challenges may seem to justify the proposed delay of application (e.g., delays in designating national competent authorities and lack of harmonised standards). 

However, the EDPB and EDPS advise against the proposed amendment: delaying the application of the rules could jeopardise the protection of fundamental rights, especially in a rapidly evolving AI environment, and movable deadlines may create uncertainty for businesses. Furthermore, the EDPB and EDPS recall that they had already objected to the inclusion of a "grandfathering" clause when the AI Act was first adopted.

Therefore, the authorities suggest keeping the current timeline, at least for certain high-risk AI obligations such as transparency. If the proposed delay is nevertheless adopted, all stakeholders – in particular, the European Commission – should work to minimise the delay to the extent possible.

9 KEY TAKEAWAYS

The Joint Opinion underscores that, while data protection authorities endorse efforts to make AI rules more practicable, they are unwilling to accept changes that would erode fundamental rights safeguards. Main themes emerging from their recommendations are:

  • preserving accountability by maintaining robust registration and documentation requirements to prevent risky AI systems from evading oversight;
  • sustaining the powers and independence of data protection authorities, ensuring new oversight structures do not undermine coordination on privacy and data protection matters;
  • ensuring legal certainty through clear and unambiguous rules on processing of special categories of personal data, sandbox oversight, and implementation timelines.

Organisations should closely monitor the legislative process, as the Digital Omnibus on AI may have significant implications for the practical operation of the AI Act.

Our Lydian Information & Communication Technology (ICT) and Information Governance and Data Protection (Privacy) teams are available to assist you with any questions regarding the AI Act and/or Digital Omnibus on AI. Please do not hesitate to contact us for further assistance.

Authors

  • Olivia Santantonio
    Partner
