AI will change the world.  As with previous innovations that revolutionised society, the nature of those changes is difficult to predict.  Legislators risk stifling innovation or imposing ineffective regulation if they act too hastily.  But gaps in regulation are also problematic, given the risks of discrimination, division and bias.  In the absence of clear guidelines, organisations should think carefully about the outcomes they want to achieve with AI and proceed with caution.

What is AI?

There is no consensus on the definition of AI.  Stanford University defines intelligence as “the ability to learn and perform suitable techniques to solve problems” in a way that is “appropriate to the context in an uncertain, ever-varying world.”  AI is “the science and engineering of making intelligent machines” (McCarthy, 1955).  In other words, AI describes processes by which machines learn and solve problems in ways that resemble human intelligence.

How will AI change the world?

The impact of AI may prove as spectacular as the invention of the printing press in Europe, which brought about an “intellectual revolution”:  the flourishing of literacy, culture and knowledge.  AI has also been compared to the Industrial Revolution, which transformed society: it created new political parties and enabled a more liberal culture, because social codes could not be enforced in cities the way they were in rural communities.

The legislative conundrum

Policymakers in previous eras struggled to understand how to respond to profound change.  The risk with AI is that politicians move too quickly, introducing new regulation that stymies the potential positive impacts of new technologies.  On the other hand, without appropriate regulation AI could undermine democracy through misinformation and deepfakes, and entrench discrimination and division through the widespread deployment of biased algorithms.

The EU – first mover advantage?

The EU’s GDPR was a forerunner and has become the global “gold standard” for data protection regulation.  The EU appears to be seeking the same “first mover” advantage with its draft AI Act, which takes a “risk-based approach”.  Uses of AI are categorised by risk, from prohibited (indiscriminate use of facial recognition technology in public spaces) to low-risk (use of chatbots), and regulatory requirements (such as conformity assessments, testing and human oversight) are imposed according to the perceived risk level.

Critics say the EU’s definition of AI is overly broad and would capture technologies that are not usually described as AI, and that the risk-based approach is insufficiently nuanced.  The heavy compliance costs that result could have a detrimental effect on growth and innovation.

The UK and US approach

Neither the UK nor the US intends to legislate on AI at present:  instead, non-statutory principles will guide the deployment and use of AI systems, and existing laws will protect against discrimination and infringement of fundamental rights.  But this approach risks inconsistency between different sectors of the economy and gaps in protection for citizens.

How should organisations proceed?

There are practical tools to help organisations ask the right questions about the AI systems they are considering deploying, enabling them to achieve transparent, fair and ethical outcomes.  A good example is the UK ICO’s AI and data protection risk toolkit.  The key to using AI safely will be to proceed carefully and thoughtfully.  Keeping on top of developments is vital:  change will be rapid and unpredictable.  To respond appropriately, organisations will need to be agile and adaptable.  But they will also need to be vigilant, and clear about how their use of these technologies aligns with their strategic priorities and values.

If you’d like to learn more about responsible and ethical uses of AI, you can find our insights here.