Bates Wells convenes a regular Impact Counsels’ Forum for senior counsel working within purpose-driven businesses. A recent meeting of the forum discussed the regulatory landscape for artificial intelligence (AI), the risks of using AI in certain contexts and what approaches could help businesses deploy AI responsibly. Inspired by the forum discussion, we share our thoughts on where we are now and what’s on the horizon for AI regulation.

Recent discourse on AI appears to have raised the general level of concern over the technology’s possible negative impacts, given its potential to drive fundamental global economic and social change. The impact of AI on jobs, its potential as a tool for misinformation, and worries over data security and discriminatory practices all contribute to this. A recent open letter signed by experts calls for a pause on developing ever more powerful AI in order to refocus on safety, transparency and accuracy in these systems.

The strength of these apprehensions puts greater focus on how the technology is being regulated and, for businesses, presents an uncertain landscape at a time when AI use is growing. Fortunately, there is some reassurance for those willing to take a purpose-driven approach. Whilst this may mean going beyond basic regulatory compliance requirements, using AI in a way that is consistent with the business’ purpose and values, and that accounts for its impact on multiple stakeholders, can help businesses manage the risk of negative impacts.

Understanding the regulatory landscape for AI

The current regulatory approach for AI varies between jurisdictions. In the UK, businesses looking to develop or deploy AI tools must consider a ‘patchwork’ of rules made up of various regulations that can apply to AI. In contrast, the EU’s proposed AI Act will create a strict regime that will extend beyond the EU’s borders.

The UK government recently published a white paper, currently subject to consultation, setting out its plans for a principles-based approach to AI regulation that is described as being pro-innovation. Not long after this publication, the Prime Minister and Technology Secretary announced £100m in funding for a Foundation Model Taskforce, which aims to establish the UK as a world leader in ‘foundation models’ (AI systems trained on massive data sets that can be used for a range of tasks across the economy) and AI safety. More recently, the Prime Minister has commented on the need for ‘guard rails’ for AI development and an international approach to regulation.

As AI develops and becomes more prevalent, we expect to see regulatory development in response to these big-picture perspectives, as the economic and social impacts of this technology become better understood. Particularly while the regulatory landscape is still nascent, businesses need to think beyond minimum compliance requirements and take a more purposive approach to risk management that accounts for the negative impacts that can potentially arise from using AI:

Discrimination: Businesses must be wary of AI tools causing unintentional harm to marginalised groups. Research shows worrying trends, such as job adverts for STEM roles being shown more often to men than to women, and facial recognition technology creating disproportionate challenges for Black people and people from other marginalised ethnic groups, with higher failure rates observed for these groups.

Transparency and Accountability: AI models make decisions about users but, in some cases, the algorithm is so complex that it becomes incomprehensible, making it harder to scrutinise decisions, explain how they were reached, or determine whether they are discriminatory. Another issue is a lack of accountability across AI value chains, which often involve many participants, from the company designing the AI tool to the one deploying it. Businesses should be aware that, depending on the circumstances, legal responsibility for the negative outcomes of an AI tool may still fall on the final company in the chain, which deploys the tool.

Data Protection and Human Rights: All individuals are afforded the right to privacy as part of human rights frameworks, so AI models should be assessed for their intrusiveness, such as where employers use AI to monitor worker activity levels. Similarly, a data privacy lens helps us consider the security and protections required in relation to individuals whose data is processed within the AI model.

What can businesses do to mitigate the risks posed by AI tools?

Businesses should consider the full ‘life cycle’ of AI tools – from the problem the tool is built to solve, to data curation and use, to model development and training, through to validation and deployment. As part of this, good governance for AI use requires a multi-stakeholder, participatory approach that draws on a breadth of expertise across different areas.

This could involve engaging those affected by the AI tool in the decision-making process for its development or proposed use – how are front-line users and wider stakeholders considered? Who makes decisions on deployment, and who defines what a successful outcome looks like? And what perspectives is the business seeking on its AI tool? This may include legal and technical assessments, but input from other disciplines may also be helpful, including external sources that can offer third-party perspectives, such as academic research, relevant government bodies, or organisations representing wider stakeholder interests.

Ongoing monitoring of AI tools is vital – machine learning models can drift, particularly as the data used by the algorithm is updated, meaning that the AI may behave differently over time. Keeping a ‘human in the loop’ is also good practice, rather than becoming entirely dependent on an AI tool. Businesses should also consider how AI oversight feeds into the business’ overarching governance structures, including consideration by the board.
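By way of illustration only, the short Python sketch below shows one simple way such monitoring might be implemented in practice: comparing the distribution of a model’s recent scores against a baseline using the population stability index. The data, threshold and set-up are assumptions chosen for illustration and would need tailoring to the particular tool; it is a sketch, not a definitive approach.

```python
# Illustrative sketch only: monitoring an AI tool for drift by comparing the
# distribution of its recent scores against a baseline using the population
# stability index (PSI). The data and the 0.2 threshold are illustrative.
import numpy as np

def population_stability_index(baseline, recent, bins=10):
    """Compare two score distributions; a higher PSI indicates more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    recent_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    # Avoid division by zero for empty bins
    base_pct = np.clip(base_pct, 1e-6, None)
    recent_pct = np.clip(recent_pct, 1e-6, None)
    return float(np.sum((recent_pct - base_pct) * np.log(recent_pct / base_pct)))

# Hypothetical example: scores recorded at deployment vs. scores from the last month
baseline_scores = np.random.default_rng(0).normal(0.50, 0.10, 5_000)
recent_scores = np.random.default_rng(1).normal(0.55, 0.12, 5_000)

psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.2:  # a commonly used, but not universal, rule of thumb
    print(f"PSI {psi:.2f}: significant drift - escalate for human review")
else:
    print(f"PSI {psi:.2f}: no significant drift detected")
```

In practice, the output of a check like this would feed into the ‘human in the loop’ and board-level oversight described above, rather than being acted on automatically.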

Organisations seeking to use AI developed by third parties should complete a thorough due diligence process before purchase. The impact of AI deployment is likely to be highly context-specific and should be assessed on that basis, but there are questions that could be asked, at a minimum, to help understand the risks of bias that might be present. For example:

  • Was bias considered in the AI tool’s development or verification, and if so, what bias metrics were used? What sensitive attributes were considered by any verification process carried out on the tool?
  • What data set was used? Does the data set cover the same population as the one with which you intend to use the tool? Was the data set balanced? (If not, the tool may only be trained to deal with a specific set of persons; a simple illustrative check of balance and bias is sketched after these questions.)
  • Is the model intrinsically interpretable? (This goes to the transparency and explainability of the tool.) Can the tool be replayed? (If the same data is input again, would you get the same answer?)
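To illustrate the kind of check referred to in the questions above, the sketch below shows how a purchaser might examine, or ask a vendor to evidence, the balance of a data set across a sensitive attribute and one basic bias metric (the gap in selection rates between groups, often called demographic parity difference). The column names and data are hypothetical.

```python
# Illustrative sketch only: simple checks on a tool's data or outcomes.
# The column names ("sex", "selected") and the example records are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "sex":      ["F", "M", "M", "F", "M", "M", "F", "M"],
    "selected": [0,    1,   1,   0,   1,   0,   1,   1],  # the tool's decision
})

# 1. Is the data set balanced across the sensitive attribute?
print("Group sizes:\n", results["sex"].value_counts(), sep="")

# 2. A basic bias metric: the gap in selection rates between groups
#    (demographic parity difference).
selection_rates = results.groupby("sex")["selected"].mean()
gap = selection_rates.max() - selection_rates.min()
print(f"Selection rates:\n{selection_rates}\nDemographic parity gap: {gap:.2f}")

# 3. 'Replay': for a deterministic model, re-running the same inputs should
#    reproduce the same outputs; comparing repeated runs is one way to test this.
```

Checks of this kind are a starting point only; which metrics and sensitive attributes are appropriate will depend on the context in which the tool is deployed.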

Future considerations

The environmental impact of using AI has often been overlooked by businesses but is gaining importance given the potentially high energy consumption of digital technology. As a business, you may begin to ask questions like “how much electricity should we use to de-bias our model?” or “how much carbon should we emit to improve the performance of the model we use?” Although energy usage in training large language models is a concern, in practice it will likely be weighed against the expected utility and efficiency of the AI tool.
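As a rough illustration of how such a question might be quantified, the sketch below uses a common back-of-the-envelope approach: estimate the energy of an additional training run from hardware power draw, duration and data-centre overhead, then multiply by the carbon intensity of the electricity grid. Every figure in it is an assumption chosen for illustration, not a benchmark.

```python
# Illustrative sketch only: a back-of-the-envelope estimate of the carbon cost
# of an additional training run (e.g. an extra de-biasing or fine-tuning run).
# All figures below are assumptions for illustration, not measured values.

gpu_power_kw = 0.3      # assumed average draw per GPU (kW)
num_gpus = 8            # assumed size of the training job
hours = 24              # assumed duration of the extra run
pue = 1.5               # assumed data-centre power usage effectiveness
grid_intensity = 0.2    # assumed grid carbon intensity (kgCO2e per kWh)

energy_kwh = gpu_power_kw * num_gpus * hours * pue
emissions_kg = energy_kwh * grid_intensity

print(f"Estimated energy use: {energy_kwh:.0f} kWh")
print(f"Estimated emissions: {emissions_kg:.0f} kgCO2e")
# A business could weigh a figure like this against the expected improvement
# in accuracy or fairness when deciding whether the extra run is justified.
```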

Although it’s not yet clear how regulation will respond to the emerging social and environmental impacts of AI, a social impact lens will help purpose-driven businesses apply such tools responsibly. Bates Wells works with businesses to help ensure that their use of AI is consistent with their purpose and values, including as a means of managing risks that may arise.

If you would like to discuss any of the themes mentioned in this article, please do get in touch.