
AI Regulation in the UK

CG Professional

4 Minute Read

14 Apr 2023

The UK Department for Science, Innovation and Technology has published its “pro-innovation” White Paper on AI regulation. The White Paper examines the approach of the UK government to promoting the use of artificial intelligence (“AI”) in the UK while ensuring that it is used ethically and safely.



To test just how far AI has come since the days of “Deep Blue”, and to explore some of the challenges of AI, we asked OpenAI’s ChatGPT to summarise the key themes of the White Paper. Here are the key messages it identified…


  • “The government's support for innovation and AI development in the UK:

o AI can bring significant economic and social benefits to the UK, such as improving healthcare, enhancing environmental sustainability, and driving productivity and growth.


o The government aims to create an environment that supports innovation and investment in AI by providing funding for research and development, promoting collaboration between businesses, academia, and government, and creating regulatory frameworks that enable responsible innovation.


  • The need for ethical and responsible use of AI to ensure public trust:

o Use of AI can raise ethical and societal concerns, such as bias, discrimination, and loss of privacy.


o To address these concerns, the government aims to ensure that AI is developed and used in a way that is consistent with ethical and human rights principles. This includes promoting responsible innovation, ensuring transparency and accountability in the development and deployment of AI systems, and upholding data protection and privacy rights.


  • The importance of building public trust and transparency through better regulation and oversight of AI applications:

o Responsible development and use of AI requires effective regulation and oversight.


o The government plans to establish a regulatory framework that provides clarity and certainty for businesses, promotes transparency and accountability, and ensures that the benefits of AI are realized while minimising its risks.


  • The need for a risk-based approach to AI regulation, based on the potential harm that could be caused by the technology:

o Different AI applications carry different levels of risk, depending on their potential harm to individuals or society.


o The government aims to adopt a risk-based approach to AI regulation that focuses on managing the risks associated with specific AI applications while fostering innovation and economic growth.


  • The importance of international cooperation and collaboration to address the challenges posed by AI:

o The challenges posed by AI are global in nature and require international cooperation and collaboration.


o The government aims to work with international partners to develop common standards and best practices for the development and use of AI, promote responsible innovation, and ensure that the benefits of AI are shared by all.


Overall, the policy paper advocates for a pro-innovation approach to AI regulation that seeks to foster growth in the sector while ensuring that the technology is developed and deployed in a way that is safe, ethical, and respects fundamental human rights.”


What is clear from the above is that, whilst AI has the potential to transform any number of tasks and give businesses that utilise it a real competitive edge, it also creates significant risks, issues and challenges. These must be addressed both at a macro level, by governments and regulators, and at a micro level, by organisations looking to deploy and commercialise AI-generated content.

One of the most immediate questions is who owns the intellectual property rights in the output produced by ChatGPT.


According to OpenAI’s terms of use:


  • Input provided to ChatGPT (i.e. the command / request made by the user) is owned by the user providing such input (assuming that intellectual property rights arise in relation to the same); and

  • OpenAI assigns to its user “all its right, title and interest” in the output, meaning the rights which arise in the text generated on the basis of the user command will also be owned by the user, again to the extent that any rights are actually created.

This reflects the common understanding that, usually, software cannot be recognised as an ‘author’ for copyright purposes. On this basis, the suggestion is that output from ChatGPT can be readily commercialised and monetised.


This is not, however, as straightforward as it seems. In particular, we would flag that:

  • OpenAI is clear in its terms of use that output may not be unique and may be repeated. For example, if two users input the exact same request, the responses may be identical.

  • There are also questions as to how reliant ChatGPT is on the sources of information it accesses. There is, therefore, scope for:

o it reproducing materials which are themselves copyrighted, creating a risk that the output infringes the original materials on which it is based; and / or


o the material produced lacking the originality required for copyright protection.

  • Uploading personal data or confidential information to ChatGPT creates additional exposure in these areas which must be assessed for risk on a case-by-case basis. As an aside, there are currently significant privacy concerns over the use of OpenAI, with ChatGPT being recently banned in Italy and other countries potentially following suit.

In each case, the terms of use are clear that this is the user’s responsibility.


AI regulation is likely to be an area which needs to move as quickly as development in AI itself and will require regular attention.


Please get in touch with our commercial team if you’d like to discuss.
