RegRally Insights: Personal Data Protection and ICT Regulation – April 2026

Welcome to the April 2026 edition of RegRally Insights: Personal Data Protection and ICT Regulation. Our briefing summarises practical takeaways, upcoming coordinated enforcement actions, and evolving guidance to help organisations ensure compliance, mitigate risks, and maintain trust in an increasingly complex digital environment.

EDPB 2025 Report on the Right to Erasure under GDPR

The European Data Protection Board (EDPB) has published its 2025 report on the right to erasure (“right to be forgotten”) under GDPR, reviewing how the right is applied across the EU.

The report notes that many organisations have processes in place, but practices remain inconsistent. Key challenges include balancing erasure requests with legal retention duties (e.g. tax or criminal law), verifying requester identity, and deleting data from backups or third-party processors.

The EDPB encourages organisations to implement clear procedures, strengthen technical deletion capabilities, and provide regular staff training.

The report is a practical resource for DPOs, legal teams, and businesses seeking stronger GDPR compliance and more effective handling of erasure requests.

Our recommendations:

1. Implement clear and consistent erasure procedures. Ensure standardised processes for handling deletion requests across all systems and business units. Avoid fragmented or ad hoc approaches.

2. Balance erasure requests with legal obligations. Organisations must carefully assess when data can be deleted versus when retention is required (e.g., tax, legal, or regulatory obligations). Document these decisions to demonstrate compliance.

3. Strengthen identity verification processes. Establish secure and proportionate methods to verify requesters’ identities, reducing the risk of unauthorised data deletion while avoiding excessive data collection.

4. Address technical limitations in data deletion. Ensure that personal data can be effectively erased from:

  • Active systems
  • Backups (where feasible, or ensure restricted access and eventual deletion)
  • Third-party processors

5. Improve third-party and vendor coordination. Controllers should ensure that processors:

  • Can support erasure requests
  • Act promptly on deletion instructions
  • Provide confirmation of deletion

6. Maintain transparency and communication. Clearly inform individuals about:

  • When their data will be deleted
  • When deletion is not possible and why
  • Applicable retention periods

7. Invest in tools and automation. Leverage technology to track, manage, and execute erasure requests efficiently, especially in complex IT environments.

8. Train staff and raise awareness. Regular training for customer-facing teams, legal, and IT staff is essential to ensure requests are recognised and handled correctly.
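As a minimal illustration of points 2 and 7 above, an erasure workflow can check documented retention holds before deleting and log each decision to demonstrate accountability. The sketch below is illustrative only; the retention categories, dates, and names are assumptions, not a reference implementation.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative retention holds (point 2): categories that block deletion
# until a legally mandated date has passed.
RETENTION_HOLDS = {
    "invoice": date(2030, 12, 31),   # e.g. tax-law retention period
    "contract": date(2028, 6, 30),
}

@dataclass
class ErasureRequest:
    requester_id: str
    data_category: str
    received: date
    decision_log: list = field(default_factory=list)

def process(request: ErasureRequest, today: date) -> str:
    """Decide erase vs. retain, and document the decision (accountability)."""
    hold = RETENTION_HOLDS.get(request.data_category)
    if hold and hold > today:
        outcome = f"retained until {hold.isoformat()} (legal obligation)"
    else:
        outcome = "erased"
    request.decision_log.append(f"{today.isoformat()}: {outcome}")
    return outcome
```

A real deployment would also cover identity verification, backups, and processor instructions; the point here is only that the erase-or-retain decision and its justification are recorded in one place.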

Cyber Incident Affecting European Commission Web Infrastructure

The European Commission disclosed a cyberattack affecting the cloud infrastructure hosting its Europa.eu website.

The incident was quickly contained, with mitigation measures implemented to protect services and data. Website availability was not disrupted, and the Commission confirmed that its internal systems were not affected.

Preliminary findings indicate that data may have been taken from affected websites, and potentially impacted EU entities are being notified. Investigations into the full scope of the incident remain ongoing.

The Commission stated it will use the findings to further strengthen its cybersecurity resilience amid growing cyber and hybrid threats across Europe.

Our recommendations:

The recent cyber incident affecting the European Commission highlights that even highly secured public institutions remain vulnerable to sophisticated cyberattacks. While the incident was swiftly contained and did not disrupt services, initial findings indicate potential data exfiltration.

In light of this, companies are encouraged to:

  • Strengthen cloud security controls – ensure that cloud infrastructure is regularly audited, monitored, and protected against unauthorised access.
  • Implement incident response procedures – establish clear internal processes to detect, contain, and report cybersecurity incidents without delay.
  • Ensure data breach readiness – be prepared to assess whether a breach triggers notification obligations under applicable data protection laws.
  • Regularly test systems and access controls – including penetration testing and review of user permissions.
  • Monitor third-party risks – especially where external platforms or hosting services are used.

This incident also underscores the importance of aligning with EU cybersecurity requirements, particularly under the NIS2 Directive, which strengthens obligations in risk management, incident reporting, and resilience across critical sectors.

EU Single Entry Point for Security Incident Reporting

The European Parliament highlighted progress toward creating a Single Entry Point (SEP) for incident reporting under the Commission’s Digital Omnibus package.

The SEP would be a central ENISA-managed platform that allows organisations to submit notifications required under GDPR, NIS2, DORA, CRA, and other EU rules through a single interface, replacing multiple national reporting channels.

Key points:

  • Existing reporting triggers remain unchanged.
  • GDPR breach notification deadline would extend from 72 to 96 hours.
  • ENISA would forward reports to the relevant authorities.
  • If the SEP is unavailable, existing national reporting channels must still be used.

The SEP is expected to become operational within 18 months of the legislation entering into force, with possible delays due to security or technical readiness.
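For illustration, the practical effect of the proposed extension of the GDPR breach notification period from 72 to 96 hours can be computed from the moment the controller becomes aware of a breach. The sketch below is illustrative only; the dates and function name are hypothetical, not part of the proposal.

```python
from datetime import datetime, timedelta

# Current GDPR Art. 33 deadline vs. the extension proposed in the
# Digital Omnibus package (illustrative constants).
CURRENT_DEADLINE_H = 72
PROPOSED_DEADLINE_H = 96

def notification_deadline(awareness: datetime, hours: int) -> datetime:
    """The notification clock runs from when the controller becomes aware."""
    return awareness + timedelta(hours=hours)

# Hypothetical awareness time:
aware = datetime(2026, 4, 10, 9, 30)
print(notification_deadline(aware, CURRENT_DEADLINE_H))   # 2026-04-13 09:30:00
print(notification_deadline(aware, PROPOSED_DEADLINE_H))  # 2026-04-14 09:30:00
```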

Spain Fines FC Barcelona €500,000 for Deficient Biometric DPIA

Spain’s data protection authority, the AEPD, fined FC Barcelona €500,000 for failing to carry out a GDPR-compliant Data Protection Impact Assessment (DPIA) before processing biometric data.

The case involved a 2023 digital census of around 143,000 members that included identity scans, facial recognition with liveness checks, and optional voice recordings.

The AEPD found the DPIA was inadequate because it:

  • failed to clearly identify biometric data processing;
  • did not properly assess less intrusive alternatives;
  • underestimated risks linked to sensitive biometric identifiers.

The authority stressed that DPIAs must be substantive, completed before processing starts, and genuinely assess necessity, proportionality, risks, and alternatives.

The decision signals increased scrutiny of biometric technologies and confirms that formal compliance alone is insufficient without a robust risk assessment.

The recent enforcement action by the Spanish Data Protection Agency against FC Barcelona highlights critical expectations under the General Data Protection Regulation when deploying biometric technologies.

1. Treat DPIAs as a substantive exercise, not a formality

A Data Protection Impact Assessment must go beyond templates. It should clearly describe the nature of biometric data (e.g., facial recognition, voice patterns), processing purposes, and data flows.

2. Justify the necessity of biometrics

Organisations should explicitly assess whether less intrusive alternatives (e.g., MFA, document-based verification) could achieve the same objective. Regulators expect a documented comparison – not assumptions.

3. Identify and assess risks realistically

Biometric data is highly sensitive. DPIAs must include:

  • Risk of misuse or identity theft
  • False positives/negatives in authentication
  • Long-term implications of biometric data breaches

4. Ensure lawful basis and valid consent (if applicable)

If consent is used, it must be:

  • Freely given (no coercion or “mandatory” biometrics without alternatives)
  • Specific and informed
  • Revocable

5. Complete DPIA before processing begins

Regulators emphasise timing. A DPIA conducted after deployment – or one that is incomplete – does not meet compliance requirements.

6. Validate third-party vendors rigorously

Where external providers (e.g., biometric verification vendors) are involved:

  • Assess their security and compliance posture
  • Clearly define roles (controller vs processor)
  • Ensure contractual safeguards are in place

7. Maintain transparency with customers

Clearly communicate:

  • What biometric data is collected
  • Why it is necessary
  • How long it is retained
  • What alternatives exist

8. Document mitigation measures and decisions

Authorities expect to see not just risks, but concrete steps taken to reduce them – and why certain approaches were chosen.

Dutch DPA Warns Against OpenClaw AI Agents

The Dutch Data Protection Authority (AP) has warned users and organisations against using OpenClaw and similar experimental autonomous AI agents due to serious cybersecurity and privacy risks.

These tools can receive broad access to a user’s computer, email, files, and online accounts, enabling them to act independently. According to the AP, this creates significant exposure to data breaches, credential theft, malware, and account takeovers.

Reported risks include:

  • malicious third-party plug-ins stealing credentials or crypto-assets;
  • hidden commands embedded in websites, emails, or chats;
  • remote compromise of entire systems;
  • unauthorised extraction of personal or confidential data.

The AP advises against using such tools on systems containing sensitive data and reminds organisations and individuals that GDPR compliance and risk mitigation responsibilities remain with the user and deployer, even for open-source systems.

Our recommendations:

1. Avoid deploying experimental AI agents in sensitive environments

Tools like OpenClaw, which require broad system access (email, files, internal systems), should not be used on devices or networks containing confidential or personal data.

2. Apply strict access controls and the principle of least privilege

AI assistants should never be granted unrestricted access. Limit permissions to only what is strictly necessary and segregate critical systems wherever possible.

3. Treat third-party plugins as high-risk components

Open-source ecosystems may include unverified or malicious plugins. Organisations should:

  • Restrict or disable external plugins
  • Perform security vetting before use
  • Continuously monitor for suspicious behaviour

4. Mitigate risks of prompt injection and hidden commands

Autonomous agents can be manipulated via emails, websites, or messages containing hidden instructions. Implement safeguards such as:

  • Input filtering and monitoring
  • Isolation of AI environments from critical systems
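As a hedged sketch of the input-filtering idea above: inputs matching known hidden-instruction patterns can be flagged and quarantined before an agent acts on them. The patterns below are illustrative examples only; keyword matching alone is not a complete defence against prompt injection and should be combined with isolation and action allow-listing.

```python
import re

# Illustrative patterns only (assumptions, not an exhaustive ruleset).
SUSPICIOUS = [
    r"ignore (all )?previous instructions",   # classic injection phrasing
    r"(?s)<!--.*?-->",                        # hidden HTML comments in fetched pages
    r"base64,[A-Za-z0-9+/=]{40,}",            # large encoded payloads
]

def flag_suspicious(text: str) -> list[str]:
    """Return the patterns matched, so the input can be quarantined for review."""
    return [p for p in SUSPICIOUS if re.search(p, text, re.IGNORECASE)]
```

In practice a match would route the input to human review rather than to the agent.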

5. Strengthen credential and identity protection

Given the risk of account takeover:

  • Enforce multi-factor authentication (MFA)
  • Rotate passwords and access tokens regularly
  • Immediately reset credentials if exposure is suspected

6. Conduct risk assessments before deployment

Even for open-source tools, organisations remain fully responsible for compliance. A documented risk assessment (and, where applicable, a DPIA) should be completed prior to any implementation.

7. Raise internal awareness

Ensure employees understand:

  • The risks of installing AI agents locally
  • The potential for data leakage and system compromise
  • Approved vs. prohibited tools within the organisation

8. Extend controls to remote and home environments

Where employees work remotely, ensure policies cover the use of personal devices for work. Also, encourage vigilance regarding family- or shared-device usage.

Luxembourg Court Annuls €746 Million GDPR Fine Against Amazon

The High Administrative Court of Luxembourg annulled the €746 million fine imposed on Amazon by Luxembourg's data protection authority, the CNPD, while confirming that Amazon had breached several GDPR provisions in relation to targeted advertising.

The court upheld findings that Amazon lacked a valid legal basis for behavioural advertising and breached transparency and data subject rights. However, it found procedural flaws in the sanction decision.

Key reasons for annulment:

  • failure to properly assess whether Amazon acted intentionally or negligently;
  • insufficient analysis of proportionality;
  • failure to consider alternative corrective measures before imposing the fine.

The case has been referred back to the CNPD to reassess fault and proportionality before any new sanction is imposed.

The ruling reinforces that even where GDPR breaches are established, supervisory authorities must ensure sanctions are legally robust, proportionate, and procedurally sound.

Our recommendations:

  • Reassess your legal basis for targeted advertising (especially if relying on legitimate interest).
  • Ensure consent mechanisms meet GDPR standards (granular, documented, withdrawable).
  • Review and update privacy notices for clarity and completeness.
  • Strengthen processes for handling data subject rights.
  • Document internal assessments (e.g., balancing tests, risk assessments) to demonstrate accountability.