Regulating Artificial Intelligence: What the OECD’s new guidelines mean for UK businesses


This week the Organisation for Economic Co-operation and Development (OECD) formally adopted its Recommendation on Artificial Intelligence (AI) – the first intergovernmental standard in this area.

The OECD represents the major industrialised nations, including the USA and many EU states, although China is not a member. The new Recommendation is highly persuasive but stops short of advocating regulation.

Instead, the Recommendation sets out a set of broad, non-binding principles to ensure that AI technology develops in ways that benefit humanity rather than harm it.

The five principles for the responsible stewardship of trustworthy AI are: (1) inclusive growth, sustainable development and well-being; (2) human-centred values and fairness; (3) transparency and explainability; (4) robustness, security and safety; and (5) accountability.

The Recommendation also offers guidance on national policies and international co-operation on AI, with particular regard to SMEs; this guidance will be important in shaping government policy in this area.

Is AI regulation needed at all?

Whether AI should be regulated globally is a much-debated topic, and public awareness of the issue is growing. There have been well-publicised cases of AI recruitment tools that discriminate against women, and of predictive policing software that is biased against black people, to give just a couple of examples.

In the US, some robo-financial advisers have even faced regulatory action, and a recent UNESCO report highlights gender bias in AI. AI typically makes decisions using algorithms underpinned by machine learning, and these processes are not necessarily free from bias.

A lot depends on the human programmer and on how the data used to “train” an AI system is chosen and used.

Some argue that AI affects too many sectors – from autonomous cars to recruitment, and from health to criminal justice – for one-size-fits-all rules to be appropriate. Others point out that questions of accountability and liability are already addressed by existing laws, although some say these are not fit for purpose.

The OECD’s principles will influence policy-makers to ensure that, as AI develops, countries have policies – and regulation where appropriate – in place to address the ethical issues it raises.

EU and UK developments

In addition to the OECD, the EU and the UK are already active in this area. In April this year, the EU released detailed ethical guidelines for AI, and it sees building trust in AI as key.

The UK Government has also been looking at this area for several years, and this year the UK’s Centre for Data Ethics and Innovation (CDEI) announced that it would investigate algorithmic bias in decision-making. The sectors under the microscope could include financial services, local government, recruitment, and crime and justice.

These sectors are seen as particularly important to investigate, given the significant impact that decisions made in them can have on people’s lives, along with the risk of bias.

Implications for UK business

The OECD’s Recommendation does not have the force of law and won’t immediately change the current piecemeal legal regime that applies to AI in the UK.

However, it will be very influential in shaping how governments in the UK and elsewhere approach future AI regulation and policy. So, unless and until we see AI-specific laws, UK businesses using AI – or intending to do so – will need to be alert both to general laws that have an impact on AI (such as the GDPR and the Equality Act 2010) and to sector-specific regulation and guidance. They also need to be aware of the increasing use of codes of conduct in this area.

Codes of conduct have the benefit that, unlike hard law, they can be developed quickly, applied swiftly and updated flexibly in light of experience. For example, in February 2019 the UK Government published an updated Code of Conduct for data-driven healthcare, setting out ten principles.

While the code applies in a health data and MedTech context, its principles are largely sector-independent and are worth consideration by any AI-driven business. We can expect to see other sectors developing similar codes of conduct.

Ultimately, the successful use of AI requires trust – a point both the EU and the OECD highlight – and transparency and legal compliance will help build it. That means, for example: making sure any personal data used is ethically sourced and its use is GDPR-compliant; ensuring algorithms avoid unfair bias; making security integral to the design; and working with regulators (where relevant) from an early stage to ensure sector-specific issues are addressed.

We are also seeing the use of regulatory sandboxes – safe spaces in which to trial AI and other disruptive technologies.

These considerations, along with the broader current policy context around transparency and accountability, are all crucial to the successful implementation of AI in business. In this sense, the OECD’s Recommendation is a perfect place to start.


Simon Stokes

Simon Stokes is a Partner with law firm Blake Morgan. He leads the firm's technology practice in London and specialises in information technology law.

http://www.blakemorgan.co.uk
