Simon Stokes - Partner - Blake Morgan

Rogue employees and personal data breaches – when are employers liable?

It’s every employer’s nightmare. An employee with a grudge misuses personal data held by their employer – telling the world about staff salaries by publishing the data on the web, for example.

Maybe they’ve told the press too. The ICO investigates. Staff find out and sue their employer for damages in the hundreds of thousands, if not millions, of pounds in a “class action” lawsuit.

Far-fetched? Certainly not – as supermarket chain Morrisons found out recently in a court case that, unusually, went all the way to the UK Supreme Court. The Supreme Court ruled in Morrisons’ favour on 1 April, but the case had been working its way through the courts for several years, costing Morrisons (one assumes) millions in legal fees, not to mention management time and disruption.

In this case Morrisons had deep enough pockets to take the case all the way on appeal and won. In doing so, it has done other employers a favour. The Supreme Court judgment will be closely scrutinised by lawyers defending other businesses that have suffered data breaches caused by rogue staff. But it is not a get-out-of-jail-free card either.

The background is that in 2014 an employee of Morrisons, Andrew Skelton, intentionally leaked the personal data of thousands of his colleagues.  The data disclosed included employees’ names, addresses, telephone numbers and bank details.    Subsequently he sent the same information to three newspapers.

One newspaper contacted Morrisons and it took immediate action to remove the online data and to inform the police.  Skelton was imprisoned for 8 years and Morrisons spent over £2.26m dealing with the aftermath of the breach.

A number of employees brought a claim under the Data Protection Act 1998 (“DPA”) against Morrisons. Damages were claimed in respect of alleged “distress, anxiety, upset and damage” caused by the data breaches. The High Court held that Morrisons was not primarily responsible for the breaches but was nevertheless vicariously liable, on the basis that there was a sufficient connection between Skelton’s role as an employee and his conduct.

Vicarious liability is where an employer can be liable for the wrongdoing of its employee.  This can happen where there is a sufficiently close connection between the person’s employment and their wrongdoing.

Morrisons appealed on two grounds:

  • That Skelton did not act in the course of his employment when he committed the data breaches so there could be no vicarious liability – he had uploaded and shared the personal data in his own time in pursuit of a personal grudge; and
  • A more technical legal ground that the DPA excluded any scope for liability on an employer for wrongful processing of personal data by an employee and therefore it was implicit that there could not be any vicarious liability.

The Court of Appeal upheld the decision of the High Court and Morrisons appealed to the Supreme Court.

The Supreme Court unanimously held that Skelton did not act in the ordinary course of his employment and that it would be unfair and improper to hold otherwise.  The fact that his employment gave him the opportunity to commit wrongdoing was not sufficient to make Morrisons vicariously liable.  An employer would not usually be vicariously liable where the employee is pursuing a personal grudge outside their field of activities for the employer rather than pursuing their employer’s business.

Whilst this meant Morrisons won, the Court did not conclude that the DPA itself excludes vicarious liability. This is an important caveat, because it does leave the door open for such claims to be brought in the future.

Nevertheless the judgment does provide some comfort to employers: they are unlikely to be held vicariously liable for data breaches committed by rogue employees in their own time, for purely personal reasons and with malicious intent. However, a closer connection with Skelton’s work could have led to a different result. It all depends on the facts – here they were in Morrisons’ favour.

To minimise the risk of data breaches and to protect their organisation, employers need to train staff on data protection and ensure awareness of the law and their staff’s responsibilities for compliance.  This is an ongoing requirement and needs regular refreshing.

Employers also need to have clear and up-to-date internal and staff privacy policies and privacy notices that comply with the GDPR. In addition, they need to ensure personal data is secure and protected (e.g. by password-protecting and encrypting files) and accessed only on a strict need-to-know basis, with its distribution monitored where possible.
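
By way of illustration only, the sketch below shows one common way to password-protect and encrypt a file before it is stored or shared, using Python and the widely used cryptography package. The file name, passphrase and key handling are hypothetical examples rather than a prescribed approach; a real deployment would need proper key management.

```python
# Minimal sketch: encrypting a file at rest with a key derived from a password.
# Assumes the third-party "cryptography" package is installed (pip install cryptography).
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def derive_key(password: bytes, salt: bytes) -> bytes:
    """Derive a 32-byte Fernet key from a password using PBKDF2-HMAC-SHA256."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(password))


def encrypt_file(path: str, password: bytes) -> str:
    """Write an encrypted copy of the file and return its path."""
    salt = os.urandom(16)  # random salt, stored alongside the ciphertext
    with open(path, "rb") as f:
        token = Fernet(derive_key(password, salt)).encrypt(f.read())
    out_path = path + ".enc"
    with open(out_path, "wb") as f:
        f.write(salt + token)  # salt is prepended so the key can be re-derived later
    return out_path


# Hypothetical usage – "payroll.csv" is an invented file name:
# encrypt_file("payroll.csv", b"a-strong-passphrase")
```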

Whilst the Morrisons case was brought under the Data Protection Act 1998 (the law applicable at the time) the increased responsibilities and sanctions on employers under the GDPR make data protection compliance even more important for employers.

Fog and edge computing: revolutionary new technologies operating at the edge of the law

Most businesses will be familiar with cloud computing. Many companies have moved their IT operations to the cloud, or consume cloud-based software as a service (SaaS) applications and tools such as Salesforce, Dropbox or Microsoft Office 365.

But cloud computing is not without its risks. What if hackers attack your cloud provider or its service is unavailable? As a data controller, you will be primarily liable under data protection law.

Cloud providers may also operate business models based on standard non-negotiable terms which can pass the risk onto the user. Ensuring compliance with data protection law (GDPR) can be a headache when appointing a cloud provider.

Also, it’s worth considering how easy it will be to get your data back or port it elsewhere if required.

There is also the risk of losing control of your data unexpectedly; Google has just announced the transfer of UK user data to the USA, for example.

Although many cloud providers will offer to localise data for you, this may cost you more than their basic service.

Regulators too have had cloud computing in their sights.

The use of the cloud in the financial services sector in particular has been the subject of regulatory scrutiny.

Financial services companies can’t outsource their regulatory responsibilities here – they remain firmly on the hook.

Currently, the legal risks of cloud computing are generally well-understood, even if legal advice is required to navigate those risks. However, the future of the cloud, while exciting, presents new threats to businesses.

These risks are due to the growth of the Internet of Things (IoT), which means connected devices at home, in the street or at work.

Smart devices – from personal and home gadgets to autonomous vehicles and drones, as well as industrial applications such as factory robots – are all data intensive. They collect massive amounts of data, which needs processing exceptionally quickly in the case of a robot, a drone or an autonomous vehicle, where an individual’s life or property might be at stake.

Data-intensive artificial intelligence (AI) is also increasingly used for image recognition as well as the operation of the devices themselves.

All this data requires processing at the edge of a network close to the devices generating the data. Here the classic cloud computing model breaks down.

It’s too slow to pass all this data to a centralised cloud server at a large data centre for processing. The device needs the ability to process the data very quickly and minimise “latency” – the time taken for data to travel over a network.

This can be achieved by using edge computing – putting processing power at the edge of a network, close to where sensors collect the data. Advances in computing make this possible. Already “mini data centres” are springing up putting computing power close to where it’s needed.

Edge computers, when connected over a network, form what is called a “fog” – a network of distributed computing resources – which process data very quickly as needed and also connect with the cloud for overall communication and control.
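
To make the latency point concrete, here is a minimal, hypothetical Python sketch of an edge node that acts on sensor readings locally and forwards only a summary to the cloud. The threshold, readings and send_to_cloud stub are invented for illustration and are not drawn from any particular product.

```python
# Minimal sketch of edge processing: react locally, send only summaries upstream.
from statistics import mean

TEMP_LIMIT = 90.0  # invented threshold; a real system would configure this per sensor


def send_to_cloud(payload: dict) -> None:
    # Stub standing in for a slower, higher-latency call to a central cloud service.
    print("uploading summary:", payload)


def run_edge_node(readings: list) -> None:
    alerts = 0
    for value in readings:
        if value > TEMP_LIMIT:  # time-critical decision taken locally, with minimal latency
            alerts += 1         # e.g. trigger a local shutdown rather than wait for the cloud
    # Only an aggregate leaves the edge, so far less data crosses the network.
    send_to_cloud({"count": len(readings), "mean": mean(readings), "alerts": alerts})


run_edge_node([71.2, 88.9, 93.4, 70.0, 95.1])
```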

The growth of 5G networks will only encourage this as they allow fast high-capacity local data flows but also require local data processing resource too – so more mini data centres.

This interconnected environment is set to expand rapidly, but it raises a host of new legal issues. Who owns all the data the sensors collect and process?

If the data is personal data, how is compliance with data protection law ensured? How do users exercise the rights data protection law gives them?

Also, what about the risk of security breaches? The general view is that a distributed network with lots of remote devices connected to it is likely to be inherently less secure than a large data centre or cloud server ring-fenced with security, which are easier to monitor for breaches.

There are ways around this – for example encrypting data both when in transit and “at rest” (when stored on a device) but encryption is power intensive and can also slow things down.

A problem with regulation is that it can quickly become out of date as technology advances. The GDPR was an attempt to reboot data protection law for the age of Facebook and Google. But when it comes to the IoT and AI, the GDPR already risks being left behind.

Complying with the GDPR in this new world requires several steps including:

  • a data mapping exercise to examine the personal data flows involved and whether the data is lawfully collected and processed (a sketch of what one record in such a map might capture follows this list)
  • considering how the computing resources/AI involved make decisions
  • identifying who are the data processors and data controllers
  • looking at further compliance steps – data processing/data sharing and transfer agreements, updated transparency notices and security due diligence, for example.
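
As a purely hypothetical illustration of the data mapping exercise mentioned in the first bullet above, one entry in a personal-data flow register might capture something like the following; the field names and values are invented and are not a prescribed GDPR format.

```python
# Hypothetical entry in a personal-data flow register for an IoT deployment.
data_flow = {
    "system": "smart-doorbell fleet",        # invented example system
    "data_categories": ["video frames", "device location", "account email"],
    "data_subjects": ["householders", "visitors"],
    "lawful_basis": "legitimate interests",  # to be confirmed case by case
    "controller": "device vendor",
    "processors": ["edge analytics provider", "cloud hosting provider"],
    "transfers_outside_uk_eea": True,
    "safeguard": "standard contractual clauses",
    "retention": "30 days",
    "security": ["encryption in transit and at rest", "need-to-know access"],
}
```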

These steps may become unworkable as 5G and AI take off, and the IoT expands to every facet of our lives at work, when we travel and at home.

We are, quite literally, at the edge of an IoT, AI and 5G revolution – how the law responds to this challenge, protecting our privacy while facilitating innovation, will be an increasingly pressing topic as the new decade advances.

Why cyber risk is the number one business risk in 2020

In January the Information Commissioner’s Office (ICO) fined DSG Retail Limited (DSG) £500,000 after a ‘point of sale’ computer system was compromised as a result of a cyber-attack, affecting at least 14 million people.

An ICO investigation found that an attacker installed malware on 5,390 tills at DSG’s Currys PC World and Dixons Travel stores between July 2017 and April 2018, collecting personal data during the nine-month period before the attack was detected.

The company’s failure to secure the system allowed unauthorised access to 5.6 million payment card details used in transactions and the personal information of approximately 14 million people, including full names, postcodes, email addresses and failed credit checks from internal servers.

Because the data breach occurred before the General Data Protection Regulation (GDPR) came into effect, DSG were found to have breached the earlier Data Protection Act 1998.

The ICO cited poor security arrangements and a failure to take adequate steps to protect personal data. This included vulnerabilities such as inadequate software patching, absence of a local firewall, and lack of network segregation and routine security testing.

The ICO said that the contraventions in this case were so serious that they imposed the maximum penalty under the previous law, but the fine would inevitably have been much higher under the GDPR.

The ICO considered that the personal data involved would significantly affect individuals’ privacy, leaving affected customers vulnerable to financial theft and identity fraud. The ICO received 158 complaints between June 2018 and November 2018 from DSG’s customers. As of March 2019, the company reported that nearly 3,300 customers had contacted them directly in relation to this data breach.

The ICO stressed that while cyber-attacks are becoming more frequent, organisations still have responsibilities under the law to take serious security steps to protect systems, and most importantly, people’s personal data.

This incident will have cost DSG a great deal, both in the direct costs of dealing with the breach and in terms of its reputation. DSG may also face claims from its customers – especially given the ICO’s findings of poor security.

Given such incidents, it’s unsurprising that the threat of cyber attacks is keeping many business leaders up at night – and sadly, if business leaders aren’t worried, they aren’t paying attention. The latest Allianz Risk Barometer 2020 from insurer Allianz – which identifies the top corporate risks for the year – highlights cyber risk as the number one business risk. Seven years ago cyber risk was ranked just 15th.

A top priority for all businesses in 2020 must be to take all reasonable and practicable steps to make themselves as resilient to cyber risk as possible. There’s plenty of guidance and support available – the National Cyber Security Centre (NCSC) promotes the Cyber Essentials scheme, which should be a first port of call for any SME (https://www.cyberessentials.ncsc.gov.uk/about).

Businesses should also consider taking out cyber insurance. It should not be assumed that cyber risks are covered by existing insurance policies.

A number of cyber policies are now available, and a specialist insurance broker should be able to explain what’s available and what is and is not covered. Such policies can help protect against financial losses (including business interruption, privacy breach costs, cyber extortion, hacker damage and media liability), and many also offer assistance at the time of an incident, e.g. by providing cyber forensic support.

Such policies do pay out – last year the Association of British Insurers revealed that 99% of claims made (207) on ABI-member cyber insurance policies in 2018 were paid – this is one of the highest claims acceptance rates across all insurance products.

As the NCSC advise:

“Organisations that are considering cyber insurance should understand that it will not protect you from an attack, but it may provide you with additional resources during and after an incident. So cyber insurance can be considered as an additional risk management tool, but do take time to:

  • understand the scope and scale of the cover provided
  • ensure that you are able to meet any operational requirements placed on you by the insurer”

As always when buying insurance, you need to read the fine print of the cover. Crucially, you must also ensure you meet any security or other IT requirements placed on you by the insurer. If you have pre-existing IT issues that you knew or ought to have known about, and these lead to a breach of security, you are unlikely to be covered.

Insurance is not a panacea, of course. You need to implement appropriate technical and organisational measures to ensure a level of security appropriate to the risks your organisation faces.  This is required by the General Data Protection Regulation (GDPR) in any event where you process personal data.

Ensuring your business is protected against cyber security risks should be a recurring New Year’s resolution, no matter what type of business you run.

Who owns what Artificial Intelligence creates?

As Artificial Intelligence (AI) systems become more advanced, they are being used to create and invent an ever-increasing range of new things.

In October last year, for example, AI-generated art hit the headlines when auction house Christie’s New York sold an AI-created artwork for $432,000.

AI is also being used in music production, with a new industry being built around the use of AI in music. The musician Taryn Southern has used an artificial intelligence platform called Amper to create an entire album, called I AM AI.  The album was the first LP to be entirely composed and produced using AI.

Then there’s AI as inventor.  A patented AI system called “DABUS”, created by Dr Stephen Thaler, can devise and develop new ideas.  This patented creativity machine is already coming up with inventions such as the “fractal container” – a food container that uses fractal designs to create pits and bulges in its sides. This design can help the containers fit more tightly, making them safer during transportation. Another benefit is that robotic arms should find it easier to pick them up and grip them.

It has also invented a lamp called a “neural flame”, which is designed to flicker in a rhythm mimicking the patterns of neural activity that accompany the formation of ideas. This design makes it difficult to ignore and so attracts enhanced attention.

In a test case the human creator of DABUS has worked with patent attorneys to file patents for these two inventions with DABUS named as the inventor.  This is causing quite a stir in patent law circles as it’s generally been assumed only a human being can be an inventor for patent law purposes.  But as AI advances, why shouldn’t the machine be an inventor for patent law purposes?  At the very least a debate needs to be had around this question.

Some believe AI-generated inventions shouldn’t be eligible for patent protection at all, and that at most they should get a lesser form of protection.  Others argue they should be granted patent protection.  But if they were granted patents who would be named as the inventor and “own” them?

When it comes to copyright protected works such as art and music, many countries require a human creator.  Copyright law internationally is based on the idea of a human “author”.  If there’s no such human author, copyright protection doesn’t exist.  But some countries including the UK have worked out a way around this problem.

In the UK a copyright work (e.g. a book, software, art or music) created by AI can be eligible for copyright protection.  This is because UK law “deems” a human author in such cases – this is the person who undertakes the arrangements necessary to create the work.  So, the person who wrote the algorithm(s) and neural networks used, set up and trained them would be the “author”.

Clarifying who owns AI-generated works and inventions and how they are protected (through patents, copyright or some other more limited form of IP protection) will be the next big debate in global intellectual property (IP) law.

But the wheels of global IP law move slowly and international treaties to deal with such things can take years to come to fruition.  In the meantime, we can expect individual countries to make their own laws where possible.  This will be easier for copyright than for patents.

So it may be a while before we have clarity on how IP law protects AI-generated works.  Until then businesses using AI will need to take a number of practical and contractual steps to ensure what is generated is owned by them and protected by IP law.

At the moment AI will often just be a tool used in the creative process. If a business doesn’t own the AI in question then care will need to be taken in agreeing to any licence terms for its use, and what they say about who owns what the tool is used to create.

AI typically needs to be “trained” by feeding it large amounts of data – this is how machine learning works. Who owns this data, and any terms governing its use, will also need to be analysed to ensure there is no leakage of IP to others.

Keeping algorithms confidential where possible is also prudent, as is documenting who programmed and/or created the AI and any training data used. Everyone involved in creating the IP ought to assign their rights to you as business owner and agree to respect confidentiality where relevant.

Can you own bitcoin and do smart contracts work?

Cryptocurrency crime is on the rise.  Earlier this year hackers “stole” $40m in bitcoin from one of the largest cryptocurrency exchanges in the world, Binance.

Cybersecurity firm CipherTrace reported that in 2018 cryptocurrency crime and investor scams rose by more than 400 per cent, with about $1.7 billion in losses.

But you can only “steal” bitcoin and other cryptoassets if they are defined as property.  If they are merely data stored in a distributed ledger, as some have argued, then they cannot be property.

So if the courts were to decide bitcoin and other cryptoassets were not property this would have huge ramifications.  They couldn’t be owned either – so what would happen on the death (or bankruptcy) of the “owner”?  Nor could they be used as collateral or any form of security or held on trust.

A related area is “smart contracts” – contracts automatically performed by a computer and written in computer code.  Such contracts often work alongside cryptoassets and are implemented using similar technology.  Smart contracts can be used to transfer money, shares, and property, for example.
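
To make the idea concrete, the sketch below is a deliberately simplified, hypothetical “escrow” written in ordinary Python rather than in any real smart contract language or on any blockchain platform. It shows only the essence of code-as-contract – funds are released or refunded automatically once a condition is met.

```python
# Toy illustration of a self-executing agreement; not a real blockchain contract.
from dataclasses import dataclass


@dataclass
class Escrow:
    buyer: str
    seller: str
    amount: float
    delivered: bool = False
    settled: bool = False

    def confirm_delivery(self) -> None:
        self.delivered = True

    def settle(self) -> str:
        # The "terms" are enforced by code: payment flows automatically on delivery,
        # otherwise the buyer is refunded, with no human intermediary involved.
        if self.settled:
            return "already settled"
        self.settled = True
        if self.delivered:
            return f"pay {self.amount} to {self.seller}"
        return f"refund {self.amount} to {self.buyer}"


deal = Escrow(buyer="Alice", seller="Bob", amount=100.0)
deal.confirm_delivery()
print(deal.settle())  # -> pay 100.0 to Bob
```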

If smart contracts couldn’t be enforced in the courts that would also be catastrophic – the growth of online commerce would be impeded and trust in this burgeoning area would be fatally damaged.  Some have argued such contracts are just computer programs and not contracts, for example.

These two questions – can cryptoassets be defined as property and are smart contracts true contracts enforceable by the courts – are questions the English courts haven’t yet definitively answered.  And they may not for some time, not until cases work through the system that address these two fundamental questions in detail.

Some countries have enacted laws in these areas to clarify the position and give legal certainty.  But nothing is currently proposed in the UK. Are we in danger of being left behind?  And should businesses and individuals using cryptoassets and smart contracts be worried?

The answer from the distinguished lawyers who have recently been looking into this area – including a leading judge, the Chancellor of the High Court, Sir Geoffrey Vos – is a resounding no.

On 18 November 2019 the UK Jurisdiction Taskforce, chaired by Sir Geoffrey Vos, announced to a crowded hall of lawyers, computer scientists and business people in the City of London that cryptoassets should be treated in principle by the courts as property, and that in principle a smart contract can be identified, interpreted and enforced by the courts using ordinary and well-established legal principles.

The 38-page report (56 pages including notes and appendices) presented at the meeting is unusually short and succinct for a detailed piece of legal analysis.  It is intended to inform and give comfort to businesspeople and policy makers and not just be read by lawyers.

It also provides encouragement to businesses and individuals using cryptoassets and smart contracts to use the law of England and Wales, and the English and Welsh courts.  As the report notes, English law is well able to deal with technological developments and has an impressive track record of doing so.

Having said that, whether English law applies to the matter at hand may not always be an easy question to answer. Those involved may choose for it to apply (so they have legal certainty) or the courts may hold it does because there is a close connection to England & Wales.

The Legal Statement inevitably leaves many questions unanswered which were outside its remit – this it leaves to other organisations including the Financial Conduct Authority (FCA), the Information Commissioner’s Office (ICO), and HM Revenue and Customs (HMRC) for example.

Such questions include:

  • Are cryptoassets “money” and how should they be regulated?
  • What is the tax treatment of cryptoassets?
  • If a smart contract processes personal data who is liable and can data protection law be complied with if the data is held in a ledger that cannot be altered/erased?

We can expect continuing guidance from the FCA, ICO and HMRC on these areas which businesses operating in this space will need to be familiar with.  It is also possible the Legal Statement may encourage new legislation, but this remains unclear.

But there is no doubt the Legal Statement is to be welcomed.  It ought to encourage business and investor confidence in this fast-moving and fast-growing field.  And it dispels some of the myths that have sprung up that the law has no application or any place here; that the code is the law.  That simply isn’t true.

English law based on the common law made by judges (rather than just legislation) has adapted to electronic signatures and ecommerce transactions in the last thirty years and there is no reason to suppose its flexibility won’t continue to apply in the world of cryptoassets and smart contracts.

Such ground-breaking and innovative technologies and products don’t operate in a legal vacuum.  Those involved in transactions involving cryptoassets and smart contracts still need to be mindful of the law.  But the Legal Statement also means they can take comfort that English law (at least) is unlikely to leave them high and dry if things go wrong – although as lawyers always say (as indeed the Legal Statement also states) it does all depend on the facts.

Automated Facial Recognition Technology (AFR): Big Brother in action?

Artificial intelligence continues to be in the news. One area that has hit the headlines recently is the use of AFR – automated facial recognition technology – which is now well established.

In simple terms AFR is able to assess whether two facial images depict the same person.

Live CCTV footage can be used to capture digital images of the faces of members of the public. Software then extracts unique biometric information from each face – measurements of facial features, for example – which can be compared in real time with biometric data derived from images on a watchlist or database.

The comparison is scored by the software, with a higher number indicating a greater likelihood of the CCTV facial image being a match with one on the database.  Such technology clearly lends itself to use in public places and for public events and is in fact already in use.
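
In outline, the matching step works by comparing numeric “feature vectors” extracted from two face images and scoring their similarity. The Python sketch below illustrates the scoring idea only; the vectors, watchlist names and threshold are invented, and real AFR systems use far higher-dimensional embeddings produced by trained neural networks.

```python
# Minimal sketch of the comparison/scoring step in facial recognition.
import math


def cosine_similarity(a, b):
    """Score how alike two biometric feature vectors are (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))


MATCH_THRESHOLD = 0.9  # invented value; setting it trades false matches against misses

# Toy "biometric templates": real systems use vectors with hundreds of dimensions.
probe = [0.12, 0.80, 0.35, 0.41]  # face captured from live CCTV
watchlist = {
    "subject-001": [0.10, 0.82, 0.33, 0.40],
    "subject-002": [0.90, 0.05, 0.60, 0.11],
}

for name, template in watchlist.items():
    score = cosine_similarity(probe, template)
    print(name, round(score, 3), "MATCH" if score >= MATCH_THRESHOLD else "no match")
```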

For example, between 2017 and 2019 South Wales Police (SWP) used the technology at a number of events, including the 2017 UEFA Champions League Final, rugby matches and an Elvis Presley festival. Such technology scans many faces – 21,500 at one rugby international, for example. A person’s image is personal data, so capturing images in this way involves processing personal data, as does any comparison against a watchlist of crime suspects or other persons searched against. Clearly such use could be seen as Big Brother in action. Equally, it can be justified as a way of preventing crime, detecting criminals and helping to ensure public safety.

The use of AFR by South Wales Police was recently challenged in the courts, but the outcome was in favour of the police force. The SWP had taken care in deploying AFR and had considered the privacy implications at the outset – it was only used for specific and limited purposes. The CCTV information was deleted unless there was a match, which a human being (not a machine) then assessed. The court found that the force had taken care to comply with the Data Protection Act 2018.

But the use of AFR continues to generate controversy. Its deployment by a property developer in the busy King’s Cross area of London drew particular criticism: it was unclear why the developer had been using AFR and what its legal basis for processing the information was. The Information Commissioner’s Office (ICO) decided to investigate and commented: “Scanning people’s faces as they lawfully go about their daily lives, in order to identify them, is a potential threat to privacy that should concern us all. That is especially the case if it is done without people’s knowledge or understanding.”

Facial recognition technology is a priority area for the ICO, which has stated that, when necessary, it will not hesitate to use its investigative and enforcement powers to protect people’s legal rights.

In its investigation of the use of AFR in King’s Cross, the ICO will require detailed information from the relevant organisations about how the technology is used. It will also inspect the system and its operation on-site to assess whether or not it complies with data protection law.

As the ICO has highlighted, any organisation wishing to use facial recognition technology must comply with the law – and must do so in a fair, transparent and accountable way. It must document how and why it believes its use of the technology is legal, proportionate and justified.

While the ICO supports keeping people safe, it stresses that new technologies and new uses of sensitive personal data must always be balanced against people’s legal rights.

In the USA such technology has also been used in job interviews to assess how candidates respond to questions. There is evidence that such techniques can be biased – for example, discriminating against female applicants because previous hires at the company in question were mainly men.

Regardless of this we can expect to see AFR being used more often.  It is not currently illegal to use it in the UK – but its use must be proportionate, transparent and in line with appropriate regulatory codes as well as the GDPR and Data Protection Act 2018.

This means those using it must give its use careful thought, carry out a privacy impact assessment, avoid the risk of bias and have appropriate policy documents in place to justify its use.

Consulting with the ICO first is also a good idea. Given that the ICO is actively prioritising this area, businesses can expect further guidance shortly. But even if lawful, do businesses want the adverse publicity of being perceived as using “Big Brother” tactics?

Brexit, data and the EU – the forgotten border question?

Much discussion of Brexit centres on the trade in goods. If we leave without a deal, will there need to be a hard border in Ireland? Will Kent become a lorry park? What will we be paying in tariffs?

However, a hard Brexit isn’t just about tangible border issues. Britain is primarily a service economy, and this sector of the economy is particularly reliant on the free flow of intangible items between the UK and Europe (or more precisely the EEA – the EU plus Iceland, Norway and Liechtenstein).

Chief amongst these is personal data.

Many businesses use data centres outside the UK, outsource data processing offshore, or share personal data with affiliated offshore organisations.

Many may also have online businesses where they receive personal data from offshore, e.g. they have customers in the EU. The rules applying to such cross-border flow of personal data between the UK and the EEA are currently enshrined in the General Data Protection Regulation (GDPR).

Before Brexit, the UK is part of an EEA-wide free data flow area underpinned by the GDPR. After Brexit, if we leave the EU without a deal addressing such issues, we will be left with a personal data border between the UK and the EU.

After we leave the EU, a UK-specific version of the GDPR (UK GDPR) will apply. Outside the UK, the GDPR as it currently exists will continue to apply in the remaining EU states (EU GDPR).

If we leave without a deal, some surprising consequences will ensue. The immediate result is that the UK will become a “third country” in the eyes of the EU.

This means our data protection law (the UK GDPR) will no longer be considered “adequate” by the EU – at least in the short term. If EU-based organisations want to transfer personal data to the UK (or the UK wishes to import such data), they are likely to need to rely on another legal mechanism to enable such flows to continue.

This would typically take the form of an EU-approved transborder data agreement – so-called “standard contractual clauses” (or SCCs).

Exporting personal data from the UK to the EU will still be permitted, as the UK GDPR allows this. However, if you’ve exported data for processing in an EU state, there may be issues in having it sent back to you once the UK becomes a “third country”.

There are other potential implications too. For example, if you sell goods and services online to individuals in EU states, you are likely to need to comply with the EU GDPR as well as the UK GDPR.

Likewise, if you have an operation (a branch office, for example) in an EU state.

If, however, you only operate in the UK and are not concerned with transborder data flows, then you can relax.

However, how many businesses will be in this category?

Advising businesses what to do about Brexit without being alarmist is a difficult task. Moreover, the GDPR is a particularly complex area.

Most businesses have already spent much time and effort to comply with the GDPR. Now, there’s the significant risk of additional complexity if free movement of data is to be maintained.

The UK’s Information Commissioner (ICO) is well aware of the issues here and has been providing more and more resources online to help assist businesses.

Their latest guidance is that if the UK leaves the EU without a deal, most of the data protection rules affecting small to medium-sized businesses and organisations will stay the same. They also note that:

  • If you are a UK business or organisation that already complies with the GDPR with no contacts or customers in the EEA, you do not need to do much more to prepare for data protection compliance after Brexit.
  • If you are a UK business or organisation that receives personal data from contacts in the EEA, you need to take extra steps to ensure that the data can continue to flow after Brexit. As noted above, these additional steps may include cross-border data transfer agreements (SCCs) in the approved form.
  • If you are a UK business or organisation with an office, branch or other established presence in the EEA, or if you have customers in the EEA, you will need to comply with both UK and EU data protection regulations after Brexit. You may need to designate a representative in the EEA.

Given the current political uncertainty, it is prudent for businesses to plan for a no-deal scenario.

The best place to start is by mapping the data flows between your business and the EU. In light of this, the impact of a no-deal Brexit then needs to be examined.

The ICO has an interactive tool to help with this.

While there may be some work required to enable your business to continue to receive data from the EU, these steps are relatively modest compared to those needed to help ensure free trade in goods after a no-deal Brexit.

Betting on a bundle of rights – how the law protects data

Data is the oil of the digital economy – whether it’s personal data or other sorts of data.

Following the advent of the General Data Protection Regulation in May last year, people are far more aware of their data protection and privacy rights.

As a result, businesses are rightfully taking data protection far more seriously.

The growth of the Internet of Things – connected cars, home devices and sensors, for example – will only drive up data generation and new 5G mobile networks will help facilitate this.

The protection of personal data is increasingly well understood and applied. However, much of the vast amount of data now being generated isn’t personal. One of the things lawyers are debating more frequently is whether data (whether personal or non-personal) can be “owned” – is it a form of property?

This question has enormous economic importance and is a separate issue from data protection law, which is rooted in protecting certain fundamental human rights concerning data rather than property rights.

All businesses will be generating and processing data.  This data will have considerable value – it may give your business a competitive edge or be a vital business asset. So how do you protect it?

In English law, the legal protection of data is fragmentary and complex. First, it’s generally understood there is no such thing as “property” in information. So, for example, an electronic database of information cannot be said to be property in the way that the IT server on which it sits is property.

The law says that you need to distinguish between the information/data itself (in which there is no right of property), the physical medium where it is recorded (which is tangible property – it can be exclusively “owned” and “possessed”) and the intangible intellectual property rights (copyright, database right and rights in confidential information) that may nevertheless still protect the information.

The EU has been thinking about a new legal right that would protect data – a “data producer’s right” – but such a right is a long way off. Because of the uncertainty surrounding how the law protects data, contracts are often used to deal with the matter even though the legal basis for the protection of the data may be unclear.

Every so often, the courts will explore how data is protected. One recent case decided in May this year concerned rights to horse racing data. Here the data owners tried to rely on a bundle of legal rights they claimed in their data, including:

  • Breach of copyright (here the data would need to be an “original” copyright work – a literary work in this case)
  • Breach of database right (where there was an investment in creating and maintaining a protected database)
  • Breach of confidence (the law of confidence protected the data; the argument was that the data was commercially valuable, and the racecourse owners imposed restrictions on its use, including on those attending the races concerned. It was argued it should be treated as confidential by those who had access to it)

At trial, the claimants or data “owners” failed in all their claims apart from in relation to breach of confidence. Here the court held the pre-race data was commercially valuable, and as the claimants had sought to prevent its distribution off-course they were entitled to protect it – even though the information was potentially publicly available.

What is also interesting is that the court rejected the argument that the relevant data was protected by copyright, finding it was not “original” enough because an algorithm had generated it by “pure routine work”.

They found that compiling the data didn’t involve sufficient skill, labour and judgment to merit copyright protection. As for database right, the use made of the data did not amount to database right infringement.

The case illustrates the challenges that can arise in protecting data. The data “owners” here had to make several arguments, only one of which – breach of confidence – succeeded at trial.

However, the case isn’t the last word on the subject – as always cases are fact-dependent, and businesses need to have policies and procedures in place to protect their data.

Obvious areas to focus on are:

  • Using the law of confidence where possible by ensuring you have a trade secret policy identifying valuable data you wish to protect and putting steps in place to document it, protect it and keep it confidential. In practice, you can achieve this by limiting access only to those bound by confidentiality restrictions, whether in employment or consultancy contracts or NDAs. Using robust physical and electronic security to keep it confidential is also crucial.
  • Ensuring where possible database right and copyright are available through keeping records of how your data and databases are created and maintained. Also, consider reviewing your contracts with creators and database developers to ensure you own the IP rights.
  • Where you provide access to your data to others, use appropriately protective contracts to do so and think through whether you can claim rights to what is done with your data.

The Information Commissioner intends to fine British Airways and Marriott International over £283m in total for breach of the GDPR – should businesses be worried?

Earlier this month the UK’s data protection regulator – the Information Commissioner’s Office (ICO) – hit the headlines by announcing its intention to impose £283m in total in fines in quick succession.

First, British Airways (£183.39m) then Marriott International (£99.2m) – both due to cyber/IT security incidents where customer personal data was compromised.

Since 25 May 2018, when the General Data Protection Regulation (GDPR) came into effect, data protection experts have been waiting anxiously to see what fines the ICO would levy under it. The ICO now has the power to levy fines of up to the greater of €20m or 4% of group worldwide annual turnover – far above the previous cap of £500,000. Now we have two whopping intended fines. Yet a sense of perspective is needed.
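
As a quick worked illustration of how that cap operates – using an invented turnover figure, not the figures for BA or Marriott – the maximum fine for the most serious infringements is the higher of €20m and 4% of worldwide annual group turnover:

```python
# Worked example of the GDPR maximum-fine formula, with an invented turnover figure.
def max_gdpr_fine(group_turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * group_turnover_eur)


print(max_gdpr_fine(13_000_000_000))  # €13bn turnover -> 520000000.0, i.e. a €520m cap
```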

Firstly, such fines are only “intended” fines at this stage – the ICO may reduce them after hearing representations from the companies concerned.

Secondly, whilst we don’t yet have the full rationale for the fines, it seems reasonable to assume that they are higher than the ICO would have imposed for the UK alone. This is because in these two cases the ICO is acting as the “lead supervisory authority” under the GDPR and so is representing the interests of other EU/EEA data protection authorities as well.

Thirdly, these appear to be very serious incidents at large corporates involving significant numbers of customers and taking place over an extended period of time with the risk of serious prejudice to those affected – so the fines were always going to be significant.

In Marriott International’s case the problem arose due to IT systems that were originally part of the Starwood hotels group acquired by Marriott in 2016.  It took Marriott until 2018 to discover the incident (which had its origins in a 2014 compromise of Starwood’s systems) and the ICO found that Marriott failed to undertake sufficient due diligence when it bought Starwood and should also have done more to secure its systems.

In BA’s case the cyber incident was notified to the ICO by BA in September 2018. Personal data of approximately 500,000 customers was compromised in this incident, which is believed to have begun in June 2018. The ICO’s investigation found that a variety of information was compromised by poor security arrangements at the company, including log-in, payment card and travel booking details, as well as name and address information.

Nevertheless, the days of a £500,000 cap on data protection fines are now well and truly over. Also, it’s not just fines that should concentrate the mind – there’s the reputational damage, the legal and administrative costs of dealing with the matter and, perhaps most ominously, the threat of class-action data breach lawsuits on behalf of affected data subjects. If significant numbers of data subjects are affected, the claims here can easily outstrip the level of any fines.

Implications for business

As we await to see how these two cases proceed some initial conclusions can be drawn.

Don’t assume you can pass the blame onto others: the fact that you’ve suffered a cyber/IT security incident caused by the criminal behaviour of others (as it appears Marriott and BA did) doesn’t necessarily get you off the hook. Did you put in place appropriate procedures to help prevent, detect and then swiftly respond to and contain such an attack? If you failed in your duty of care, you will have to face the consequences. Businesses need to take IT security very seriously and embed it into how employees behave – frequently human error, or worse, will be responsible, not just a technical failure.

Respond immediately: If you are affected by a cyber incident or other “personal data breach”, contact the ICO immediately where the law requires this (any breach of substance inevitably will). Promptly assess the risk to the individuals affected and notify them where the law requires it or where it is sensible to do so (e.g. to mitigate the damage to those involved), and cooperate fully with the ICO throughout. Take immediate steps to contain and then stop the incident. This will also help in mitigating any fine.

Buyer beware: If you acquire another business you need to carry out robust GDPR and IT security due diligence to ensure you do not inherit a problem.

Don’t neglect compliance: take GDPR compliance seriously, be prepared for the worst and ensure you have appropriate technical and organisational security measures in place to ensure a level of security appropriate to the risk, and regularly test the measures in place.

Review or take out appropriate insurance cover: this is not a panacea but there are an increasing number of products available.

Learn from your mistakes: it is likely most businesses will suffer some sort of personal data breach or cyber/IT security incident at some point – not necessarily a major one. It is imperative to learn from the experience and prevent a repeat.

Regulating technology – move fast and stop breakages?

The British tech sector is growing significantly faster than the rest of the economy and contributes more than a hundred billion pounds to it each year.

London in particular has established itself as one of the world’s leading FinTech hubs – figures released in June by TechNation and Dealroom indicate that one unicorn (a start-up valued at over $1 billion) has been created every month for the past year in London. London has 45 unicorn firms out of the 72 based in Britain, making the UK the third most successful country in the world for unicorns, behind the USA and China. Eighteen of these are FinTech companies. Perhaps surprisingly, London has more unicorns than San Francisco.

The Government wants to ensure the UK remains a very attractive place to build a global tech business. In one of the last acts of her premiership, in June Theresa May highlighted that the UK is one of the best places in the world to start and grow a tech business. She also announced a number of initiatives as well as new investments worth more than £1.2 billion.

Sustaining growth will be a challenge, as will ensuring the benefits of tech are spread across the UK and not just confined to London and the South East. The UK Government’s support for the sector is arguably modest compared with that of a number of other leading industrialised nations.

While the UK is excellent at science and technology research and early-stage innovation, often what we invent is exploited commercially elsewhere in the world. Post-Brexit we need an environment which continues to attract the best global talent, and we need to ensure we train our own technologists of the future. The 2,500 places on AI and data conversion courses announced by the Government in June are to be welcomed.

There’s an ongoing debate among lawyers about whether the legal framework underpinning two technologies with great potential – blockchain and smart contracts – is clear and fit for purpose. A number of countries have decided to take the lead in legislating in these areas, and there is a fear in some quarters that the UK may get left behind.

In some ways this mirrors the legal debate of the late 1990s and early 2000s, when there was a concern that electronic signatures and contracts would be unenforceable, stifling e-commerce as a result.  Those fears proved groundless to a large extent; the law of England and Wales is remarkably flexible and in this area largely judge made. However laws were still introduced to clarify certain areas, largely as a result of EU legal developments.

Some very senior lawyers and judges have raised concerns that the lack of a clear legal framework around blockchain and smart contracts will dent investor confidence.  As a result there was a recent Government-supported consultation by the UK Jurisdiction Taskforce (UKJT) about ensuring the law in England and Wales provides the legal certainty needed to support cutting-edge technologies such as blockchain and smart contracts, where the code IS the contract.

The consultation closed on 21 June; the UKJT will now consider whether the law needs revision or whether it is already fit for purpose, and will report in late summer of this year.

Basic questions are being asked about what we mean by “property” when we talk about blockchain and cryptoassets, and about whether smart contracts can be legally binding and enforceable. It is important to note that the legal review here is at a very basic and fundamental level – it will not lead to laws regulating cryptocurrencies which use blockchain, or to regulation of initial coin offerings (ICOs), for example.  But the outcome of the review will inform future regulation of this area.

One challenge will be for the lawyers involved to understand the technology, so that any analysis is both legally correct and a faithful reflection of the underlying technologies. This is a fast-moving area, so there is a risk that a snapshot of the law in August 2019 could quickly become out of date.

One factor behind tech’s rise and the growth of unicorns is that many successful tech innovators have, to paraphrase Mark Zuckerberg of Facebook, “moved fast and broken things.” The law in a range of areas – including data privacy, consumer law and competition law – has been struggling to catch up.  But there is a growing realisation amongst tech companies that regulation is required and in the EU at least, lawmakers seem keen to oblige.

The hope is that the current legal review of smart contracts and blockchain will assist, not hinder, the already rapid growth and deployment of such technology in the UK. The USA has traditionally taken a light-touch approach to tech regulation, and its companies have become dominant players globally – some see these two factors as intertwined. How to legislate and regulate without curtailing innovation is quite a challenge. Watch this space.

Regulating Artificial Intelligence: What the OECD’s new guidelines mean for UK businesses
https://bmmagazine.co.uk/opinion/regulating-artificial-intelligence-what-the-oecds-new-guidelines-mean-for-uk-businesses/
Fri, 24 May 2019

This week the Organisation for Economic Co-operation and Development (OECD) formally adopted its Recommendation on Artificial Intelligence (AI) – the first intergovernmental standard in this area.

The OECD represents all the major industrialised nations, including the USA and EU states, although China is not a member. The new Recommendation is highly persuasive but falls short of advocating regulation in this area.

Instead, it outlines a set of broad, non-binding principles to ensure that, as AI technology develops, it benefits humanity rather than harming it.

The five principles for the responsible stewardship of trustworthy AI are: inclusive growth, sustainable development and well-being; human-centred values and fairness; transparency and explainability; robustness and safety; and accountability.

The Recommendation also sets out guidance for national policies and international co-operation on AI, with particular regard to SMEs – this will be very important in shaping government policy in this area.

Is AI regulation needed at all?

Whether AI should be regulated globally is a much-debated topic, and public awareness of the issue is growing. There have been well-publicised cases of AI recruitment tools that discriminate against women and of predictive policing software that is biased against black people, to give just a couple of examples.

In the US some robo-financial advisers have even faced regulatory action, and a recent report by UNESCO highlights gender bias in AI. AI typically uses algorithms to make decisions underpinned by machine learning, and these processes are not necessarily free from bias.

A lot depends on the human programmer and how the data used to “train” an AI system is itself chosen and used.
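
A deliberately simplified sketch shows how this happens in practice: a naive model that simply learns from skewed historical hiring decisions will reproduce the skew. All of the data, names and thresholds below are invented for illustration; the “four-fifths” comparison at the end is a rough rule of thumb used in some fairness audits, not a legal test.

```python
# Illustrative only: invented historical hiring records, skewed against one group.
history = [
    {"group": "A", "score": s, "hired": s > 60} for s in range(50, 100, 5)
] + [
    {"group": "B", "score": s, "hired": s > 80} for s in range(50, 100, 5)
]


def learned_hire_rate(records, group):
    """A naive 'model' that simply learns each group's past hiring rate
    will reproduce the historical skew rather than correct it."""
    subset = [r for r in records if r["group"] == group]
    return sum(r["hired"] for r in subset) / len(subset)


rate_a = learned_hire_rate(history, "A")
rate_b = learned_hire_rate(history, "B")
print(f"Group A selection rate: {rate_a:.0%}")   # 70%
print(f"Group B selection rate: {rate_b:.0%}")   # 30%

# The "four-fifths" rule of thumb flags potential disparate impact when one
# group's selection rate falls below 80% of another's.
print("Potential disparate impact:", rate_b / rate_a < 0.8)   # True
```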

Some argue AI affects too many sectors – from autonomous cars to recruitment, and from health to criminal justice – for one-size-fits-all rules to be appropriate. Others point out that questions of accountability and liability are already addressed by existing laws, although some say these are not fit for purpose.

The OECD’s principles will influence policy-makers to ensure that as AI develops, countries have policies – and regulation where appropriate – in place to address the ethical issues surrounding AI.

EU and UK Developments

In addition to the OECD, the EU and the UK are already active in this area.  In April this year, the EU released detailed ethical guidelines for AI, and views building trust in AI as key.

The UK Government has also been looking at this area for several years, and this year, the UK’s Centre for Data Ethics and Innovation (CDEI) announced it would investigate algorithmic bias in decision-making. The sectors under the microscope could include financial services, local government, recruitment and crime and justice.

These sectors are seen as particularly important to investigate, given the significant impact decisions in these sectors can have on people’s lives, along with the risk of bias.

Implications for UK business

The OECD’s Recommendation does not have the force of law and won’t immediately change the current piecemeal legal regime that applies to AI in the UK.

However, it will be very influential in shaping how governments in the UK and elsewhere approach future AI regulation and policy. So, unless and until we see AI-specific laws, UK businesses using AI, or intending to do so, will need to be alert both to general laws which have an impact on AI (such as the GDPR and the Equality Act 2010) and to sector-specific regulation and guidance. They also need to be aware of the increasing use of codes of conduct in this area.

Codes of conduct have the advantage that, unlike hard law, they can be developed quickly, applied swiftly and updated flexibly in the light of experience. For example, in February 2019 the UK Government published an updated Code of Conduct for data-driven healthcare, setting out ten principles.

While the code is written for a health data and MedTech context, its principles are largely sector-independent and are worth consideration by any AI-driven business. We can expect to see other sectors developing similar codes of conduct.

Ultimately the successful use of AI requires trust, a point both the EU and the OECD highlight. Transparency and legal compliance will help build that trust: for example, making sure any personal data used is ethically sourced and that its use is GDPR-compliant; ensuring algorithms avoid unfair bias; making security integral to the design; and working with regulators (where relevant) from an early stage so that sector-specific issues are addressed.
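
As a purely illustrative sketch of how such checks might be operationalised, the Python below encodes a simple pre-deployment gate. Every field name and threshold is invented for the example – none of it is drawn from the GDPR, the OECD Recommendation or any code of conduct.

```python
def ready_to_deploy(system: dict) -> list:
    """Return a list of unresolved trust/compliance issues for an AI system.
    Field names and thresholds are hypothetical, for illustration only."""
    issues = []
    if not system.get("lawful_basis_documented"):
        issues.append("No documented lawful basis for processing personal data.")
    if not system.get("data_ethically_sourced"):
        issues.append("Training data provenance not confirmed as ethically sourced.")
    if system.get("selection_rate_ratio", 0) < 0.8:
        issues.append("Bias audit flagged: selection-rate ratio below 0.8 across groups.")
    if not system.get("security_review_passed"):
        issues.append("Security-by-design review not completed.")
    if system.get("sector_regulated") and not system.get("regulator_engaged"):
        issues.append("Sector regulator not yet engaged.")
    return issues


candidate = {
    "lawful_basis_documented": True,
    "data_ethically_sourced": True,
    "selection_rate_ratio": 0.72,
    "security_review_passed": True,
    "sector_regulated": True,
    "regulator_engaged": False,
}
for issue in ready_to_deploy(candidate):
    print("BLOCKED:", issue)
```

In practice such checks would sit alongside, not replace, legal advice and any sector-specific guidance.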

We are seeing the use of regulatory sandboxes – safe spaces to try out AI and other disruptive technologies.

These considerations, along with the broader current policy context around transparency and accountability, are all crucial to the successful implementation of AI in business. In this sense, the OECD’s Recommendation is a perfect place to start.
