Generative AI in the Context of UK/EU Regulation

The UK Government released its AI White Paper on March 29, 2023, setting out its plans for governing the use of artificial intelligence (AI) in the United Kingdom. The White Paper follows the AI Regulation Policy Paper, which set out the Government's vision for a future UK AI regulatory regime that is pro-innovation and context-specific.

The White Paper presents an alternative approach to regulating AI from that of the EU's AI Act. Rather than enacting comprehensive AI legislation, the UK Government is prioritising guidance for the development and use of AI. It also aims to strengthen the role of existing regulators, such as the Information Commissioner's Office (ICO), the Financial Conduct Authority (FCA), and the Competition and Markets Authority (CMA), in issuing guidance and overseeing the use of AI within their respective domains.


What is the key information from the UK White Paper and EU Legal Framework for Artificial Intelligence?

In contrast to the proposed EU AI Act, the AI White Paper does not set out a comprehensive definition of "AI" or "AI system". Instead, it characterises AI by two key attributes, adaptivity and autonomy, so that the proposed regulatory framework remains relevant and effective as the technology evolves. Although the absence of a clear-cut definition may create legal uncertainty, it will fall to the individual regulators to issue guidance to firms setting out their requirements for the use of AI within their remit.

The regulatory framework outlined in the AI White Paper applies across the entire United Kingdom. The White Paper does not propose altering the territorial scope of current UK legislation relevant to AI. In practice, this means that where existing laws governing the use of AI already have extraterritorial reach (such as the UK General Data Protection Regulation), guidance and enforcement by the existing regulators may also apply outside the United Kingdom. For a comparison of the UK and EU approaches to AI regulation, see Table 1 in the Appendix.


What will be the impact of the EU AI Act on the UK?

Once the EU AI Act takes effect, it will apply to UK firms that use AI systems within the EU, make them available on the EU market, or engage in any other activity regulated by the Act. These UK organisations must ensure that their AI systems comply, or risk financial penalties and damage to their brand.

Nevertheless, the AI Act may have broader ramifications, with ripple effects even for UK firms operating only in the UK market. The Act is expected to set a global benchmark in this domain, much as the General Data Protection Regulation (GDPR) has done for data protection. There are two likely implications: first, UK companies that actively adopt and adhere to the AI Act can distinguish themselves in the UK market, attracting customers who value ethical and responsible AI solutions; second, as the AI Act becomes a benchmark, the UK's domestic regulations may gradually align with it in the interests of consistency.

Moreover, the EU AI Act is a crucial legislative measure that promotes voluntary adherence, even for companies that may not initially be subject to its provisions (as emphasised in Article 69, which pertains to Codes of Conduct). Consequently, the Act is expected to have an effect on UK companies, especially those that provide AI services in the EU and utilise AI technologies to deliver their services within the region. It is essential to remember that numerous UK enterprises have a market presence that extends well beyond the borders of the UK, therefore making the EU AI Act very pertinent to them.

How does the United Kingdom's approach compare to those of other countries?

The UK Government is charting its own course on AI, with the objective of establishing regulations that promote innovation while safeguarding the rights and interests of individuals. The AI White Paper incorporates several ideas that align with the European Union's position on artificial intelligence. For example, the Government plans to establish a new regulatory framework for AI systems that pose substantial risks, and intends to require enterprises to perform risk evaluations before deploying AI tools. This requirement makes sense, particularly where an AI tool processes personal data, since data protection by design and by default are core principles of the UK GDPR. Nevertheless, the White Paper specifies that these principles will not be implemented through legislation, at least not initially. The level of uptake, and the effect of their voluntary nature on adoption by organisations across the UK, therefore remain uncertain.

The UK Government has expressed its ambition to become a leading force in the field of AI, taking the lead in establishing global regulations and standards for the safe deployment of AI technology. As part of this endeavour, the UK hosted the AI Safety Summit in the autumn of 2023. A global agreement on AI regulation would help mitigate the harms arising from emerging technologies.

Nevertheless, the international community's record on coordinated regulation does not inspire confidence. Early social media legislation, shaped in part by certain technology companies, granted platforms legal immunity for user-generated content, which made it harder to regulate online harms later. The same mistake could be repeated with AI. Although both the Prime Minister and the President of the European Commission have recently called for a body equivalent to the Intergovernmental Panel on Climate Change, reaching a unified response to climate change has itself proved difficult because of conflicting national interests, and similar tensions exist around artificial intelligence.

The UK's present strategy for regulating AI differs from the EU's proposed approach in the EU AI Act. The EU's proposal imposes strict controls and transparency obligations on AI systems deemed "high risk", with less stringent requirements for AI systems considered "limited risk". Most general-purpose AI systems are treated as high risk, which means developers of foundation models must follow specific rules and provide detailed documentation explaining how the models are trained.

Additionally, the United States and the European Union are collaborating on a set of non-binding rules for companies, known as the "AI Code of Conduct", in line with their shared plan for trustworthy, secure AI and risk mitigation. The code of conduct will be taken forward via the G7's Hiroshima Process to foster global agreement on AI governance. If this endeavour succeeds, the UK's influence over the formulation of international AI rules could diminish. However, the publication of the Blueprint for an AI Bill of Rights in the US in October 2022 may lead to a more principles-oriented approach that aligns with the United Kingdom's.

Despite these risks, the UK is establishing itself as a place where companies can build cutting-edge AI technology, and perhaps as a global leader in the field. This could prove beneficial, provided a suitable balance is struck between innovation and the safe development of systems.

What will be the effect of the EU AI Act on UK companies utilising Generative AI?

Due to the increasing popularity and widespread influence of Generative AI and Large Language Models (LLMs) in 2023, the EU AI Act underwent significant modifications in June 2023, specifically addressing the utilisation of Generative AI.

Foundation models are a category of large machine learning models that form the fundamental framework for building a diverse array of artificial intelligence applications. These models are pre-trained on extensive datasets, which allows them to learn the intricate patterns, relationships, and structures present in the data. By fine-tuning foundation models for specific applications or domains, developers can achieve impressive capabilities in natural language processing, computer vision, and decision-making. Examples of foundation models include OpenAI's GPT models (which power ChatGPT), Google's BERT, and PaLM 2. Owing to their versatility and adaptability, foundation models have been essential to the advancement of sophisticated AI applications across diverse industries.

Companies now developing applications using Generative AI Large Language Models (LLMs) and comparable AI technologies, such as ChatGPT, Google Bard, Anthropic's Claude, and Microsoft's Bing Chat or 'Bing AI', must carefully consider the consequences of the EU AI Act. These companies should be cognisant of the Act's potential ramifications for their operations and proactively take measures to ensure compliance, irrespective of whether they are specifically targeted by the legislation. By doing so, they can stay ahead and sustain a robust presence in the ever-changing AI landscape.

Companies utilising these AI tools and foundation models to provide their services must carefully assess and manage risks in accordance with Article 28b, and adhere to the transparency requirements outlined in Article 52(1).

The primary objective of the EU AI Act is to establish a benchmark for AI safety, ethics, and responsible use, while enforcing transparency and accountability requirements. Article 52(3) of the EU AI Act, as revised in June 2023, imposes specific requirements on the use of Generative AI.
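To make the transparency idea concrete, the sketch below shows one way a chatbot application might surface the "you are dealing with a machine" disclosure that the Act's transparency provisions envisage. It is purely illustrative: the class, message text, and stub model are invented for this example, and the sketch says nothing about what legal compliance actually requires.

```python
# Hypothetical sketch of an AI-interaction disclosure, in the spirit of the
# EU AI Act's transparency provisions. Illustrative only, not legal guidance.

AI_DISCLOSURE = "Notice: you are interacting with an AI system, not a human."

class TransparentChatSession:
    """Wraps a text-generation callable so the first reply in a session
    carries a machine-interaction disclosure."""

    def __init__(self, generate):
        self.generate = generate      # any callable: prompt -> reply text
        self._disclosed = False       # has the user been informed yet?

    def ask(self, prompt: str) -> str:
        reply = self.generate(prompt)
        if not self._disclosed:
            self._disclosed = True
            return AI_DISCLOSURE + "\n\n" + reply
        return reply

# Usage with a stub model standing in for a real LLM:
session = TransparentChatSession(lambda p: "[AI answer to: " + p + "]")
first = session.ask("What is the EU AI Act?")
second = session.ask("And the UK White Paper?")
```

The disclosure is attached once per session rather than to every message, one of several plausible design choices; a production system would also need to label AI-generated content itself, which this sketch does not attempt.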

In conclusion

Regulating AI in all its forms is a daunting and pressing task, but an essential one. Amid the widespread and rapidly growing adoption of AI, regulations must guarantee the reliability of AI systems, minimise AI-related risks, and establish mechanisms to hold accountable those involved in the development, deployment, and use of these technologies in cases of failure and malpractice.

The UK's involvement in this challenge is appreciated, as is its commitment to advancing the goal of AI governance on the global stage. The UK has the chance to establish itself as a thought leader in global AI governance by introducing a context-based, institutionally focused framework for regulating AI. This approach might potentially be adopted by other global jurisdictions as a standard. The emergence and rapid advancement of Generative AI places heightened responsibility on the UK to assume this thought leadership role.


Table 1: Comparison between UK and EU: AI White Paper vs Legal Framework for Artificial Intelligence
UK approach (AI Regulation Policy Paper / AI White Paper):

1. Ensure the safe use of AI: Safety is expected to be a fundamental concern in specific sectors, such as healthcare or critical infrastructure. Nevertheless, the Policy Paper recommends that regulators take a context-dependent approach when assessing the likelihood of AI endangering safety, and a proportionate approach to mitigating that risk.
2. Ensure the technical security and proper functioning of AI: AI systems must have robust technical security measures and operate according to their intended design and functionality. The Policy Paper proposes that AI systems be tested for functionality, resilience, and security, taking context and proportionality into account. Regulators are expected to set the regulatory requirements for AI systems in their respective sectors or domains.
3. Ensure that AI is adequately transparent and explainable: The Policy Paper recognises that AI systems may not always be easily explicable and that, in most cases, this is unlikely to present significant risks. In specific high-risk circumstances, however, decisions that cannot be adequately explained may be disallowed by the relevant regulator; for example, a tribunal decision where the absence of a clear explanation would prevent an individual from exercising their right to contest the ruling.
4. Embed fairness into AI: The Policy Paper suggests that regulators define "fairness" within their specific sector or domain and specify the circumstances in which fairness must be considered (such as in job applications).
5. Ensure accountability: The Policy Paper asserts that legal persons must bear responsibility for AI governance, being held accountable and legally liable for the outcomes produced by AI systems. This responsibility attaches to an identified or identifiable legal entity.
6. Clarify routes to redress or contesting decisions: As stated in the Policy Paper, the use of AI should not remove the ability of individuals and groups to contest a decision where they would have that right outside the AI context. The UK Government will therefore require regulators to ensure that outcomes produced by AI systems can be challenged in "relevant regulated situations".

EU approach (EU AI Act):

1. The European Parliament adopted its negotiating position on the EU AI Act on June 14, 2023.
2. The European institutions will now negotiate the final text. Even if the Act is adopted promptly, the earliest it could take effect is 2025.
3. Jurisdictional scope: once in force, the EU AI Act will impose obligations on both providers and deployers of in-scope AI systems that are used within, or have an impact on, the EU, regardless of where those parties are based.
4. The ban on specific uses of AI systems is broadened to cover remote biometric identification in publicly accessible spaces, as well as emotion recognition and predictive policing technologies.
5. The scope of high-risk AI systems is extended to include systems used to influence voters or used in the recommender systems of very large online platforms (VLOPs).
6. Rules are established for providers of foundation models: AI systems trained on extensive data, designed to produce general outputs, and adaptable to a wide range of specific purposes, including those underpinning generative AI systems.
7. Unacceptable risks, such as social scoring or systems that exploit the vulnerabilities of specific groups of individuals, are prohibited.
8. High-risk activities may be permitted provided they strictly comply with conformity, documentation, data governance, design, and incident-reporting obligations. These include systems used in civil aviation security, medical devices, and the management and operation of critical infrastructure.
9. Systems that interact directly with humans, such as chatbots, are permitted subject to transparency requirements, including informing end users that they are dealing with a machine, and provided the risk remains limited.

Providers of foundation models must, among other things:

10. Demonstrate through appropriate design, testing, and analysis that reasonably foreseeable risks have been identified and mitigated;
11. Use only datasets subject to appropriate data governance measures, ensuring that data sources are suitable and possible biases are taken into account;
12. Design and develop the model to achieve appropriate levels of performance, predictability, interpretability, corrigibility, safety, and cybersecurity;
13. Produce extensive technical documentation and intelligible instructions for use that enable downstream providers to fulfil their own obligations;
14. Implement a quality management system to ensure and document adherence to these obligations;
15. Register the foundation model in an EU database maintained by the Commission.

In addition, providers of foundation models used in generative AI systems would be required to disclose that content was generated by AI, ensure the system includes safeguards against generating content that breaches EU law, and publish a summary of the use of training data protected by copyright law.

Regulators:

UK: The Policy Paper designated the Information Commissioner's Office (ICO), the Competition and Markets Authority (CMA), Ofcom, the Medicines and Healthcare products Regulatory Agency (MHRA), and the Equality and Human Rights Commission (EHRC) as the principal regulators under the new framework. Note: although several UK regulators and government agencies have taken steps to promote the responsible use of AI, the Policy Paper highlights the hurdles businesses currently face, including a lack of transparency, duplication, and inconsistency across regulators.

EU: National competent authorities will supervise application and implementation, with a European Artificial Intelligence Board providing coordination and advice.

About the Author

Dr. Hao Zhang is a Research Associate at the Financial Regulation Innovation Lab (FRIL), University of Strathclyde. He holds a PhD in Finance from the University of Glasgow, Adam Smith Business School. Previously, Hao was a Senior Project Manager at the Information Center of the Ministry of Industry and Information Technology (MIIT) of the People's Republic of China. His recent research focuses on asset pricing, risk management, financial derivatives, and the intersection of technology and data science.

