How transparency, explainability and fairness are being connected under UK and EU approaches to AI regulation

Article written by Kushagra Jain, research associate for the Financial Regulation Innovation Lab and scholar at the Michael Smurfit Graduate Business School, University College Dublin, Dublin, Ireland.


Introduction and global perspective

Rapid and continuing advances in artificial intelligence (AI) have had profound implications, and they will continue to reshape our world. Regulators have responded responsibly and proactively to these paradigm shifts, and have begun to put in place regimes to govern AI use.

Global collaboration is taking place in developing these frameworks and policies. For instance, an AI Safety Summit was held in the UK in November 2023, with participants including 28 nations spanning the EU, the US, Asia, Africa, and the Middle East. Its aim was to mitigate the “frontier” risks of AI development through internationally coordinated action. The summit identified the necessity of collaboratively testing next-generation AI models against critical national security, safety, and societal concerns, and acknowledged the need for a report to build international consensus on both risks and capabilities. Two further summits are planned in the next 6 and 12 months respectively, and subsequent summits are expected to continue these topical and crucial global dialogues, perhaps building on the first summit's key insights and realisations.[1]

The UK's pro-innovation regulation policy paper similarly emphasises continued work with international partners to deliver interoperability, and it hopes to incentivise the responsible design, application, and development of AI. The paper aims for the UK to be seen as the most attractive place in the world for AI innovation. To achieve this aim, it seeks to ensure international compatibility between approaches, which would in turn attract international investment and encourage exports (Secretary of State for Science, 2023).[2] Notably, however, different regions have taken distinct approaches to regulation within their jurisdictions.

 

Distinctions between the EU and UK approaches

Broadly, the draft EU Artificial Intelligence Act seeks to codify a risk-based approach within its legislative framework. The framework categorises AI systems according to the level of risk they pose to safety and fundamental rights: unacceptable, high, and low. It also institutes a new AI regulator (Yaros et al., 2023, Yaros et al., 2021). In contrast, the UK's approach is iterative, agile, and context-dependent, and is designed to make responsible innovation easier. Existing regulators are responsible for its implementation. All of this is outlined in the UK's AI Regulation Policy Paper and AI White Paper (Secretary of State for Science, 2023, Prinsley et al., 2023, Yaros et al., 2022).

Another key distinction demarcates the two: no all-encompassing definition of what “AI” or an “AI system” constitutes exists in the UK's case. AI is instead framed in the context of autonomy and adaptivity, the objective being to ensure the proposed legislation remains relevant as new technologies emerge. Legal ambiguity is inherent in such an approach; however, individual regulator guidance is expected to resolve this within each regulator's remit (Prinsley et al., 2023, Yaros et al., 2022).

The EU legislation would apply to all providers of AI systems in the EU, and also to users and providers of AI systems whose output is utilised in the EU, regardless of where they are domiciled. It is envisioned as a civil liability regime that redresses AI-related problems and risks without unduly constraining or hindering technological development. Maintaining excellence and trust in AI technology at the same time are its dual targets (Yaros et al., 2023, Yaros et al., 2021).

Conversely, the UK regulation applies to the whole of the UK, though its enforcement and guidance carry territorial relevance beyond it. Initially, it rests on a non-statutory footing; the rationale is that imposing a statutory duty straightaway could create obstacles for businesses and innovation, and could impede rapid, commensurate responses. During this transitory period, implementation relies on existing regulators' domain expertise. The eventual intention is to assess whether a statutory duty needs to be imposed, to further strengthen regulator mandates for implementation, and to allow regulators flexibility to exercise judgment in applying the principles. Over and above these aims, coordination through central support functions for regulators is envisaged, with innovation-friendly yet effective and proportionate risk responses as the desired outcome. These functions would sit within government while leveraging expertise and activities more broadly across the economy, and they would be complemented and aligned through voluntary guidance and technical standards. Assurance techniques would similarly be deployed, alongside trustworthy AI tools, whose use would be encouraged (Secretary of State for Science, 2023, Prinsley et al., 2023).

 

Shared focus on fairness, transparency and explainability

In spite of their varied approaches, the EU and the UK share an emphasis on aspects such as fairness, transparency, and explainability. These are of particular interest owing to their human, consumer, and fundamental rights implications. For the UK, this emphasis is apparent from two of the white paper's five broad cross-sectoral principles (Secretary of State for Science, 2023, Prinsley et al., 2023, Yaros et al., 2022):

  • Appropriate transparency and explainability: AI systems should exhibit these traits, and their decision-making processes should be accessible to relevant parties, to ensure heightened public trust, which non-trivially drives AI adoption. The white paper acknowledges that it remains to be discovered how relevant parties may be encouraged to implement appropriate transparency measures.
  • Fairness: broadly, AI systems should avoid unfair discrimination, unfair outcomes, and the undermining of individual and organisational rights. It is understood that regulators may need to develop and publish appropriate definitions and illustrations of fairness for AI systems within their domains; a minimal sketch of one candidate fairness metric follows this list.
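
To make the fairness principle concrete, the short Python sketch below computes one candidate metric, the demographic parity gap, over a set of hypothetical binary decisions. Neither the metric choice nor the sample data are drawn from the white paper; a regulator's domain-specific definition of fairness could look quite different.

```python
# Illustrative only: one candidate fairness metric over hypothetical decisions.
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Largest difference in favourable-outcome rates across groups.

    `decisions` is an iterable of (group, approved) pairs. A gap near 0
    indicates similar outcome rates across groups; what gap is tolerable
    would be for a regulator to define within its domain.
    """
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan decisions: (applicant group, approved?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
gap, rates = demographic_parity_gap(sample)
print({g: round(r, 2) for g, r in rates.items()})  # {'A': 0.67, 'B': 0.33}
print(f"demographic parity gap: {gap:.2f}")        # 0.33 in this sample
```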

This was also encapsulated in the UK's earlier AI Regulation Policy Paper as follows (Yaros et al., 2022):

  • Appropriately transparent and explainable AI. AI systems may not always be meaningfully explainable. While such unexplainable decisions are largely unlikely to pose substantial risk, in specific high-risk cases they may be prohibited by relevant regulators (e.g., where a lack of explainability would deprive an individual of the right to challenge a tribunal's decision). A sketch of what a meaningful explanation might look like for a simple model follows this list.
  • Fairness considerations embedded into AI. Regulators should define “fairness” within their domain or sector, and ought to outline the relevance of fairness considerations (e.g., to job applications).
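
As a hypothetical illustration of the distinction drawn above, the sketch below shows what a meaningfully explainable decision can look like for a simple linear scorer: each feature's contribution to the outcome is reported alongside the decision, giving an individual something concrete to challenge. The feature names, weights, and threshold are invented for illustration; many complex models admit no such clean decomposition.

```python
# Illustrative only: a decision accompanied by a per-feature explanation.
WEIGHTS = {"income": 0.5, "existing_debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0  # assumed approval cut-off

def decide_with_explanation(applicant):
    """Return (approved?, score, per-feature contributions to the score)."""
    contributions = {f: w * applicant[f] for f, w in WEIGHTS.items()}
    score = sum(contributions.values())
    return score >= THRESHOLD, score, contributions

approved, score, why = decide_with_explanation(
    {"income": 3.0, "existing_debt": 1.5, "years_employed": 2.0}
)
print(f"approved: {approved}, score: {score:.2f}")
# List the drivers of the decision, largest in magnitude first.
for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.2f}")
```

Here the applicant is declined (score 0.90 against a cut-off of 1.00), and the explanation makes clear that existing debt was the decisive negative factor, which is precisely the kind of account an individual would need in order to challenge the decision.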

In contrast, for the EU, this takes the following shape as encoded in the legislation (Yaros et al., 2023, Yaros et al., 2021):

  • Systems with a direct human interface (such as chatbots) are of limited risk and acceptable if in compliance with certain transparency obligations; put differently, end users must be made aware that they are interacting with a machine. For foundation models[3], the preparation of intelligible instructions and extensive technical documentation may fall into the explainability and transparency bucket, enabling downstream providers to comply with their respective obligations.
  • Practices such as social scoring, or systems exploiting the vulnerabilities of specific groups of persons, are prohibited. This is termed an unacceptable risk and can be considered linked to fairness. For foundation models, this may be framed as incorporating only datasets subject to appropriate data governance measures, such as examining data suitability and potential biases (an illustrative sketch of one such check follows this list). Fairness may also take the form of context-specific fundamental rights impact assessments, which bear in mind the context of use before high-risk AI systems are deployed. More dystopian possibilities that could irreparably harm fairness are avoided through outright bans on certain systems, including those involving indiscriminate scraping of databases, biometric categorisation by sensitive characteristics, real-time biometric identification, emotion recognition, face recognition, and predictive policing.
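
As one hypothetical illustration of the “appropriate data governance” idea above, the sketch below flags groups that are under-represented in a training dataset before a high-risk system is built. The sensitive attribute, the records, and the 10% representation floor are all assumptions made for illustration, not thresholds drawn from the Act.

```python
# Illustrative only: a pre-training representation check on a dataset.
from collections import Counter

def representation_report(records, attribute, floor=0.10):
    """Share of each group for a sensitive attribute, flagging groups
    whose share falls below an assumed representation floor (10% here)."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    return {group: (n / total, n / total < floor) for group, n in counts.items()}

# Hypothetical training records with a sensitive attribute.
training = ([{"age_band": "18-30"}] * 70
            + [{"age_band": "31-60"}] * 25
            + [{"age_band": "60+"}] * 5)
for group, (share, flagged) in representation_report(training, "age_band").items():
    print(f"{group}: {share:.0%}" + ("  <- under-represented" if flagged else ""))
```

A real data governance process would of course go well beyond raw representation, examining label quality, proxies for sensitive characteristics, and fitness for the intended context.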

 

Conclusions and future topics

In conclusion, both the EU's and the UK's paths to regulating AI innovation carry merits and demerits. The EU's approach may be perceived as more bureaucratic: owing to its stricter compliance requirements, anyone to whom it applies must expend significantly more time, cost, and effort to ensure they do not fall foul of regulatory guidelines.

That being said, its stronger ethical grounding safeguards the best interests of relevant stakeholders. In a similar vein to the GDPR, it may serve as a blueprint for future AI regulations adopted by other countries around the world. Coupled with the EU's new rules on machinery products, which ensure that new generations of machinery guarantee user and consumer safety, it forms a very comprehensive legal framework (Yaros et al., 2023, Yaros et al., 2021).

On the other hand, the UK's approach has received acclaim from industry for its pragmatism and measured pace. The UK Science and Technology Framework singles out AI as one of five critical technologies in the government's strategic vision, and the need to establish such regulation was highlighted by Sir Patrick Vallance in his Regulation for Innovation review. The AI Regulation Policy Paper and White Paper were written in response to these factors. The regulation's ability to learn from experience while flexibly and continuously adopting best practices will catalyse industry innovation (Secretary of State for Science, 2023, Intellectual Property Office, 2023).

Nonetheless, a dark side of innovation may also manifest as a consequence. If not handled rigorously, bad actors proliferating and exploiting the lack of statutory oversight may cause reputational damage to the UK in so far as AI is concerned. This is especially pertinent in insidious cases, such as those illustrated earlier by the AI systems banned under EU law.

Despite significant differences between the EU's and UK's approaches, commonalities exist in pivotal regulatory priorities such as transparency, explainability and fairness. Blended pro-innovation and risk-based regulatory approaches might achieve the best results for these priorities. Such a blend can be ascertained from how efficacious each approach proves in achieving its goals over time, and from the context of its application.

Given the systemic importance of the US in shaping the global economic landscape, it may be interesting to explore its approach to AI regulation in a future blog. In particular, investigating how transparency, explainability and fairness are dealt with there, in contrast with the EU and juxtaposed against the UK, might shed new light on how AI regulation should evolve (Prinsley et al., 2023, Yaros et al., 2022, Yaros et al., 2021) at the dawn of what may one day be called the AI age in human history.

References

Intellectual Property Office (2023, 06 29). Guidance: The government’s code of practice on copyright and AI. Retrieved from: https://www.gov.uk/guidance/the-governments-code-of-practice-on-copyright-and-ai

Prinsley, Mark A. and Yaros, Oliver and Randall, Reece and Hadja, Ondrej and Hepworth, Ellen (2023, 07 07). Mayer Brown: UK's Approach to Regulating the Use of Artificial Intelligence. Retrieved from: https://www.mayerbrown.com/en/perspectives-events/publications/2023/07/uks-approach-to-regulating-the-use-of-artificial-intelligence

Secretary of State for Science, Innovation & Technology (2023, 08 03). Policy paper: A pro-innovation approach to AI regulation. Retrieved from: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper

Yaros, Oliver and Bruder, Ana Hadnes and Leipzig, Dominique Shelton and Wolf, Livia Crepaldi and Hadja, Ondrej and Peters, Salome (2023, 06 16). Mayer Brown: European Parliament Reaches Agreement on its Version of the Proposed EU Artificial Intelligence Act. Retrieved from Mayer Brown: https://www.mayerbrown.com/en/perspectives-events/publications/2023/06/european-parliament-reaches-agreement-on-its-version-of-the-proposed--eu-artificial-intelligence-act

Yaros, Oliver and Bruder, Ana Hadnes and Hadja, Ondrej (2021, 05 05). Mayer Brown: The European Union Proposes New Legal Framework for Artificial Intelligence. Retrieved from Mayer Brown: https://www.mayerbrown.com/en/perspectives-events/publications/2021/05/the-european-union-proposes-new-legal-framework-for-artificial-intelligence

Yaros, Oliver and Hadja, Ondrej and Prinsley, Mark A. and Randall, Reece and Hepworth, Ellen (2022, 08 17). Mayer Brown: UK Government proposes a new approach to regulating artificial intelligence (AI). Retrieved from Mayer Brown: https://www.mayerbrown.com/en/perspectives-events/publications/2022/08/uk-government-proposes-a-new-approach-to-regulating-artificial-intelligence-ai

 

About the author

Kushagra Jain is a Research Associate at the Financial Regulation Innovation Lab (FRIL), University of Strathclyde. His research interests include artificial intelligence, machine learning, financial/regulatory technology, textual analysis, international finance, and risk management, among others. He was awarded doctoral scholarships from the Financial Mathematics and Computation Cluster (FMCC), Science Foundation Ireland (SFI), Higher Education Authority (HEA) and Michael Smurfit Graduate Business School, University College Dublin (UCD). Previously, he worked within wealth management and as a statutory auditor. He completed his doctoral studies in Finance from UCD in 2023, and obtained his MSc in Finance from UCD, his Accounting Technician accreditation from the Institute of Chartered Accountants of India and his undergraduate degree from Bangalore University. He was formerly FMCC Database Management Group Data Manager, Research Assistant, PhD Representative and Teaching Assistant for undergraduate, graduate and MBA programmes.

[1] These details, and further information, can be found here, here, and here.

[2] This information and further context can be found here.

[3] AI systems adaptable to a wide range of distinctive tasks, designed for output generality, and trained on broad data at scale.


Photo by Tara Winstead: https://www.pexels.com/photo/robot-pointing-on-a-wall-8386440/