AI Landscape in the UK: Regulatory Perspective as of June 2025

The UK Information Commissioner’s Office (ICO) launched its AI and biometrics strategy on June 5, 2025, marking a significant step in shaping the regulation of artificial intelligence (AI) in the country. The strategy aims to enhance scrutiny of AI and biometric technologies in three key areas: where stakes are high, public concern exists, and regulatory clarity can have immediate impact.

The ICO’s strategy focuses on three priority issues: transparency and explainability, bias and discrimination, and rights and redress. The strategy highlights the importance of ensuring people know when and how AI affects them, addressing the risks from AI systems trained on flawed or incomplete data, and guaranteeing system accuracy, safeguarding rights, and creating ways to challenge harmful outcomes.

The ICO plans to publish a statutory code of practice for AI and automated decision-making (ADM), providing clear rules for responsible AI use under data protection law. The code is expected to provide regulatory clarity and build public trust in fast-evolving AI technologies.

The ICO intends to update and consult on its ADM and profiling guidance by Autumn 2025, reflecting recent legal reforms in the Data (Use and Access) Act 2025. The consultation on draft guidance is open until September 7, 2025.

Special attention will be given to ensuring AI model developers safeguard personal data and prevent the misuse of training datasets, particularly the inclusion of material such as child sexual abuse material. The ICO has signalled its readiness to take enforcement action against unlawful AI training practices that create risks or harms.

The ICO will scrutinize automated decision-making use in recruitment by major employers and platforms, setting clear responsible-use expectations.

As the main data protection regulator, the ICO leads AI regulation, but other UK regulators like the Financial Conduct Authority (FCA) and Bank of England (BoE) are engaging stakeholders to manage AI risks in their sectors without yet releasing comprehensive AI-specific guidance.

Besides the ADM guidance update, the ICO is consulting on draft guidance for smart product manufacturers and developers until September 7, 2025.

The ICO’s strategy includes foundation model development, face recognition systems used by the police, automated decision-making, and emerging areas like agentic AI.

The Data (Use and Access) Act 2025 has passed both Houses of Parliament and received royal assent on 19 June. The Automated Vehicles Act 2024, which establishes a new legal framework for the use of automated vehicles, is expected to be fully in force from the second half of 2027.

The transport secretary has announced plans to set up pilot services of self-driving vehicles in spring 2026. The government is seeking views on what safety standards should be sought for self-driving vehicles in the UK as required by the Act. The call for evidence on safety standards for self-driving vehicles closes on 1 September 2025.

The ICO’s AI and biometrics strategy marks a major step in shaping UK AI regulation, with a statutory code of practice expected soon and consultations underway to provide regulatory clarity and build public trust in fast-evolving AI technologies.

  1. The ICO's AI and biometrics strategy emphasizes the need for transparency and explainability in AI usage, ensuring people know when and how it impacts them.
  2. The strategy also targets addressing the risks from AI systems trained on flawed or incomplete data, with an aim to guarantee system accuracy and safeguard rights.
  3. The ICO will publish a statutory code of practice for AI and automated decision-making (ADM), providing clear rules for responsible AI use under data protection law.
  4. Special attention will be given to AI model developers, focusing on safeguarding personal data, particularly sensitive data like child sexual abuse material, to prevent misuse in training datasets.
  5. In the realm of finance and business, the Financial Conduct Authority (FCA) and Bank of England (BoE) are also engaging stakeholders to manage AI risks in their respective sectors.
  6. The emerging areas of focus in AI regulation include foundation model development, face recognition systems, automated decision-making, and agentic AI.
  7. The Data (Use and Access) Act 2025, which encompasses AI and biometrics, has passed both Houses of Parliament, marking a significant step in shaping UK AI regulation. In the transport sector, pilot services of self-driving vehicles are planned for spring 2026.
