With generative artificial intelligence (“AI”) foundation models and products becoming better understood, more widely utilised and actively implemented across sectors, and following the UK’s entry into the first binding international treaty governing the safe use of AI, we set out the events that are shaping the regulatory landscape as AI technology rapidly advances.

We look more specifically at the last 18 months of heightened activity, the post-ChatGPT era, considering the key strands that will shape regulation and determine the environment for the use of AI by purposeful businesses and charities. As the European Commission identified when first introducing the EU AI Act in April 2021, AI will pose an additional governance challenge for SMEs, and regulation must provide “a proper solution to support all organisations who use or intend to use models which fall into the high-risk category.”

2023: The UK leans “pro-innovation”, Kamala Harris appears at Bletchley Park, and questions of copyright intensify

While AI systems have been in use for some time, the widespread attention brought by ChatGPT to the powers of generative AI heightened society’s recognition of AI technology and the need for bespoke regulations to mitigate risk.

The UK’s “pro-innovation” approach

In March, the previous UK Government launched its White Paper, AI regulation: a pro-innovation approach, reflecting the then Government’s ambition to foster innovation by taking a light-touch approach to regulation based on non-binding principles (in contrast with the comprehensive regulation the EU was introducing at the time). The White Paper focussed on providing guidance and resources to existing regulators rather than imposing strict legal obligations on AI developers and providers or introducing an overarching regulatory body for AI. While the White Paper did not call for the comprehensive approach to AI regulation seen in the EU, the previous Government did stress that binding measures would be required for “highly-capable general-purpose AI” in future – a stance that we expect the current Labour Government to put into action.

As the regulator in charge of protecting personal data, the Information Commissioner’s Office (ICO) will be on the front line of AI regulation owing to the extensive use of data sets in AI systems. In the lead-up to the release of the White Paper, the ICO updated its guidance on AI and Data Protection, recognising the crucial role it has to play and stating that AI is a priority area for its work.

The quest for collaborative “Global AI Governance”

The United Nations responded to the need for globally coordinated AI governance, with the Secretary-General of the UN forming the AI Advisory Body in October 2023. The interim report of this multi-stakeholder body, titled “Governing AI for Humanity”, was released two months later in December 2023.

In November 2023, the UK hosted the AI Safety Summit at Bletchley Park, where 28 countries agreed to the Bletchley Declaration. The Bletchley Declaration recognises the need to collectively, and on a global scale, manage the potential risks posed by AI and to ensure it is developed and used in a safe and responsible way. Vice President Harris represented the United States, focussing on the need for the benefits of AI to be shared equitably and stating that “technology with global impact requires global action.”

Other players

While the EU has, so far, led the way in approaching AI regulation in a comprehensive, enforceable and cross-sectoral manner, both the United States and China have made significant moves in AI regulation.

In October 2023, President Biden issued an Executive Order on Safe, Secure and Trustworthy Artificial Intelligence. This somewhat sharpened the “soft law” approach that had been taken in the United States up to that point by requiring actions from a limited category of private companies and expanding the mandate of federal agencies in relation to AI best practice and safeguards. It followed voluntary commitments, facilitated by President Biden’s administration, made by 15 prominent US companies to comply with AI safeguards.

In May 2024, Colorado became the first US state to pass comprehensive AI legislation. As with data protection laws in the US, a patchwork of approaches to AI at state level is likely. By contrast, and by virtue of not having a federal system, the UK, should it legislate, will be able to provide a unified and clear approach for those building and utilising AI systems in the UK.

China has moved quickly on generative AI regulation, introducing a range of measures in quick succession. One such regulation, the Interim Measures for the Management of Generative Artificial Intelligence Services (the “AI Measures”), came into force in August 2023. The AI Measures are enforceable regulations, rather than voluntary codes or principles, and apply to all companies that provide generative AI services to people within China. Their content shares similarities with the principles set out in the UK White Paper and President Biden’s Executive Order, while also reflecting China’s own values through the requirement for generative AI services to “uphold China’s socialist values.”

Intellectual property rights tested on both sides of the Atlantic

Across jurisdictions, the tension between companies developing generative AI tools and content creators continues to mount. Capping off what had already been a jam-packed year of developments, on 27 December 2023 The New York Times Company launched a legal challenge against OpenAI and Microsoft regarding the use of copyrighted New York Times material in training ChatGPT. In June 2024, the Recording Industry Association of America (RIAA) filed two copyright infringement cases on behalf of Sony, Universal and Warner Records against two AI song generators. In the UK, a similar case has been brought by Getty Images alleging that Stability AI’s use of Getty Images content infringes Getty Images’ intellectual property rights. With generative AI developers reliant on access to digital databases, the courts’ responses to these claims will be instrumental in setting the parameters within which generative AI developers will need to operate and in determining the necessary direction of regulation.

Other IP-related cases have included patent cases concerning whether an AI system can be an inventor for the purposes of patent law (a critical question, as the identity of the inventor determines who owns the patent). Across multiple jurisdictions, including the UK, Australia and Germany, courts have found that an inventor must be a natural person.

In the UK, the previous Government took steps to address this tension between AI and IP with an industry-led consultation, involving both AI developers and content creators, to create an AI copyright code aimed at making licences for data mining more readily available. In February 2024, these efforts were abandoned.

These cases and the abandoned AI copyright code initiative highlight the inherent difficulty for the UK in setting regulation that protects creative industries while also encouraging investment from the burgeoning, and training-data-reliant, generative AI sector.

2024: The EU AI Act becomes law, the UK has a new Government and the environmental impacts of AI become more widely understood

The first comprehensive regulation on AI

The AI Act (Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence) (the “EU AI Act”) was passed by the European Parliament in March 2024, with its provisions becoming enforceable in stages over the following three years. In the lead-up to these legal deadlines, the European Commission is encouraging organisations to sign up to the AI Pact, a voluntary commitment to take concrete actions to understand, adapt and prepare for the future implementation of the EU AI Act.

The aim of the EU AI Act is to harmonise the laws on AI across the EU to ensure safety, transparency and respect for fundamental rights. The Act is intended to apply broadly and extra-territorially: any entity that makes an AI system available to people in the EU (whether as a provider or a deployer) will need to comply with the associated obligations. Entities that use the output of an AI system in the EU are also caught.

The EU AI Act categorises AI systems by level of risk. AI systems posing an unacceptable level of risk are prohibited outright, including systems which use “subliminal techniques beyond a person’s consciousness”. Most of the Act focuses on AI systems classed as high-risk, which include those used in critical infrastructure, education and vocational training, recruitment or selection of workers, assessment of public/private benefit eligibility and law enforcement. Developers and deployers of high-risk AI systems must meet a number of obligations, including maintaining robust risk management systems, conducting fundamental rights impact assessments and ensuring a level of human oversight.

Some commentators see the EU AI Act as overly burdensome and likely to stifle innovation, while others welcome the certainty and clarity it will provide to innovators and the prominence given to human health, safety and fundamental rights.

The UK elects a new Government

In the King’s Speech, the Government confirmed its manifesto commitment to introduce binding regulation on “the most powerful artificial intelligence models.” This focus on the “most powerful” models and “a handful of companies” suggests that a much narrower piece of regulation can be expected than the EU AI Act. In one of the new Government’s first significant actions on AI regulation since coming into power, the UK became a signatory to the first legally binding international treaty governing the safe use of AI on 5 September.

Prior to the election, an indication of the Labour Government’s direction on AI regulation was provided by Peter Kyle MP, now Secretary of State for Science, Innovation and Technology. Kyle stated that Labour would move from a voluntary code to a statutory code: AI companies would be legally required to share testing data with the Government, to inform the Government when developing an AI system over a certain level of capability, and to conduct independently verified safety tests. The UK has played an integral role in developing the means to assess AI system capabilities: earlier this year, the UK’s AI Safety Institute (the world’s first state-backed AI safety institute) released a freely and globally available software library enabling testers to assess the specific capabilities of an AI tool.
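For readers curious what such capability testing looks like in practice, the sketch below assumes the library in question is the AI Safety Institute’s open-source Inspect framework; the task content and model name are purely illustrative, and this is not a statement of how the Institute itself evaluates models.

```python
# A minimal sketch of a capability evaluation, assuming the library
# referred to above is the AI Safety Institute's open-source Inspect
# framework (pip install inspect-ai). Dataset and model are illustrative.
from inspect_ai import Task, task, eval
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import generate

@task
def basic_reasoning():
    # A toy dataset: each Sample pairs a prompt with its expected answer.
    return Task(
        dataset=[
            Sample(input="What is 7 multiplied by 8?", target="56"),
            Sample(input="What is the chemical symbol for gold?", target="Au"),
        ],
        solver=generate(),   # simply ask the model for a completion
        scorer=includes(),   # score by checking the target appears in the output
    )

if __name__ == "__main__":
    # Run the evaluation; the model string is a placeholder for whichever
    # provider/model the tester chooses.
    eval(basic_reasoning(), model="openai/gpt-4o")
```

Under the same assumption, a tester could equally run such a task from the command line via Inspect’s `inspect eval` tool against any supported model provider, which is what makes a freely available, model-agnostic library attractive for independent safety testing.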

With the new UK Government appointing the first ever Minister for Data Protection, and the Labour Party manifesto promising a Regulatory Innovation Office (whose remit would include cross-sectoral issues), it will be interesting to see whether, and for how long, a sector-specific approach to AI regulation will be maintained by this Government (i.e. resourcing existing regulators to address AI issues within their remits). Alternatively, the Labour Government could establish an overarching AI regulator to specialise in, and address, the complexity of AI technology (which is inherently not sector-specific), with a mandate extending well beyond that of the current Digital Regulation Cooperation Forum. Given that the Government’s current rhetoric is focused on balancing the books, the costly process of establishing a new AI-specific regulator, beyond the Regulatory Innovation Office, seems an unlikely step at this stage.

On 9 September, Lord Clement-Jones introduced a new Private Member’s Bill focussed on regulating the use of AI in the public sector. The Bill will put pressure on the Government and instigate discussion that is likely to influence the form of the Government’s eventual AI legislation.

But what about the environment?

In March 2024, the National Grid’s Chief Executive, John Pettigrew, delivered a speech predicting a six-fold surge in power consumption by data centres (a crucial resource for AI systems) over the next decade. The speech touched upon the need to develop sustainable, high-capacity grid infrastructure to meet these needs, and the urgency of developing sustainable AI technologies that align with global emissions targets.

The scale of this issue was exemplified by Google’s Environmental Report 2024, which highlighted a 48% increase in Google’s greenhouse gas emissions between 2019 and 2023. The report attributes this increase to the rapid developments in, and usage of, AI by Google and its customers.

The EU AI Act has been criticised for its lack of reference to the relationship between AI and environmental sustainability. Previous versions of the Act, as it progressed through the European Parliament, contained much more pointed obligations concerning the environment. This could be a missed opportunity for the EU to lead on this fundamental issue.

The UK, United States and EU unite to protect consumers

The UK Competition and Markets Authority (CMA) has been active in its engagement with the issues surrounding the rapid advancement of AI. The CMA is empowered by the new Digital Markets, Competition and Consumers Act to directly enforce consumer law, and has stated that it is “ready to use these new powers to tackle firms that do not play by the rules in AI-related markets.” Significantly, on 23 July 2024, the CMA, the European Commission, the US Department of Justice and the US Federal Trade Commission issued a joint statement on competition in generative AI foundation models and AI products. The joint statement identifies key risks to competition, including the control of essential AI inputs, the potential for market power entrenchment by large tech companies, and the influence of partnerships and financial investments on market outcomes. It also highlights the importance of fair dealing, interoperability and consumer choice in fostering innovation and competition.

This unified approach marks a pivotal moment in regulatory efforts to prevent market dominance by a few key players and demonstrates that collaborative and global AI governance is moving forward.

Looking forward

With the UK Government stating that “UK AI innovation is at the heart of the government’s plans to spark economic growth through a productivity revolution”, we await news of the next steps UK regulation will take and how closely those steps will follow the comprehensive approach of the EU. A different path that embraces the same high-level principles seems likely, and we expect the UK to continue to be a key player in supporting global AI governance. It is doubtful that the UK will lead the AI and sustainability conversation through regulation; empowering technology companies to develop AI tools to help with the energy crisis and directly combat climate change seems the more likely route. At this stage, purposeful businesses and charities will need to look beyond current UK regulation to meet the best ethical practices for developing and utilising AI, ensuring the protection of people and planet.