Transparency, explainability and fairness in approaches to AI regulation: Takeaways from the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
a Financial Regulation Innovation Lab, Strathclyde Business School, University of Strathclyde, Glasgow, Scotland
b Michael Smurfit Graduate Business School, University College Dublin, Dublin, Ireland
Introduction and Purpose
AI holds enormous potential for both harm and good. Used responsibly, it can help redress urgent societal concerns; used carelessly, it may worsen societal harms such as fraud, discrimination, bias, and disinformation. Deploying AI for good and realising its many benefits necessitates mitigating its considerable risks, an effort demanding contributions from government, the private sector, academia, and civil society (Biden Jr., 2023).
Accordingly, on 30 October 2023, an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI) was issued from the White House’s Briefing Room under the authority of President Biden (Biden Jr., 2023). The order places the utmost priority on governing AI development and use through a coordinated, Federal Government-wide approach, an action compelled by the pace of advances in AI capabilities (Biden Jr., 2023).
The order carries the force of law, and federal executive departments and agencies[1] were made accountable for several duties within it, with the aim of achieving a more innovative, secure, productive, and prosperous future through equitable AI governance (Biden Jr., 2023). These agencies have consequently undertaken initiatives to help shape AI policy and to advance the safe and responsible development and use of AI.[2]
The US’s systemic importance in shaping the global economic landscape makes its approach to AI regulation worth exploring (Jain, 2024). Accordingly, this piece outlines the aspects of the Executive Order centred on transparency, fairness and explainability. Particular emphasis is placed on Section 7 (Advancing Equity and Civil Rights) and Section 8 (Protecting Consumers, Patients, Passengers, and Students), given the relevance of their content to explainability, transparency, and fairness in the context of this article. Finally, a juxtaposition against the EU and UK regulatory approaches draws out similarities and differences.
Executive Order Structure
The executive order is structured into the following sections:
- Purpose.
- Policy and Principles.
- Definitions.
- Ensuring the Safety and Security of AI Technology.
- Promoting Innovation and Competition.
- Supporting Workers.
- Advancing Equity and Civil Rights.
- Protecting Consumers, Patients, Passengers, and Students.
- Protecting Privacy.
- Advancing Federal Government Use of AI.
- Strengthening American Leadership Abroad.
- Implementation.
- General Provisions.
Policy and principles
The order sets out eight guiding priorities and principles with which agencies are to comply, as appropriate and consistent with applicable law, while, where feasible, considering the views of other agencies, industry, academia, civil society, labor unions, international allies and partners, and other relevant organizations (Biden Jr., 2023). In synopsis, they are:[3]
(a) Safe and secure AI, requiring robust, reliable, repeatable, and standardized AI system evaluations, as well as policies, institutions, and other mechanisms to test, understand, and mitigate risks before use. This includes addressing the most pressing security risks of AI systems, while navigating AI’s opacity and complexity (Biden Jr., 2023).
(b) Promote responsible innovation, competition, and collaboration for AI leadership, and unlock the technology’s potential to address society’s most difficult challenges, through investments in related education, training, development, research, and capacity. Concurrently, tackle novel intellectual property (IP) questions and other problems to protect inventors and creators (Biden Jr., 2023).
(c) Responsible AI development and use requires a commitment to supporting workers. As new jobs and industries are created, workers need a seat at the table, including through collective bargaining, so that they benefit from these opportunities. Job training and education should be adapted to serve a diverse workforce and provide access to the opportunities AI creates (Biden Jr., 2023).
(d) AI policies must be consistent with the Administration’s dedication to advancing equity and civil rights. The use of AI to disadvantage those already too often denied equal opportunity and justice should not be tolerated. From hiring to housing to healthcare, AI use can deepen discrimination and bias rather than improve quality of life (Biden Jr., 2023).
(e) Protect the interests of those who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives. The use of new technology does not excuse organizations from their legal obligations, and hard-won consumer protections are more important than ever in moments of technological change (Biden Jr., 2023).
(f) Protect privacy and civil liberties as AI continues advancing. AI makes it easier to extract, re-identify, link, infer, and act on sensitive information about people’s identities, locations, habits, and desires. AI’s capabilities in these areas can increase the risk that personal data is exploited and exposed (Biden Jr., 2023).
(g) Manage the risks from the Federal Government’s own use of AI and increase its internal capacity to regulate, govern, and support responsible AI use for better results. Steps are to be taken to attract, retain, and develop public service-oriented AI professionals across disciplines, including from underserved communities, and to ease AI professionals’ path into the Federal Government to help harness and govern AI (Biden Jr., 2023).
(h) Lead the way to global societal, economic, and technological progress, as in previous eras of disruptive innovation and change. Such leadership is not measured solely by the technological advancements the country makes; effective leadership also means pioneering the systems and safeguards needed to deploy technology responsibly, and building and promoting those safeguards with the rest of the world (Biden Jr., 2023).
Definitions
“Artificial intelligence” or “AI” is defined in the order as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action (Biden Jr., 2023).
Further, “AI model” in the order means a component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs (Biden Jr., 2023).
Finally, the order’s “AI system” definition is any data system, software, hardware, application, tool, or utility that operates in whole or in part using AI (Biden Jr., 2023).
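Although the order prescribes no technical schema, the nesting of these three definitions (an AI system operates using AI; an AI model is a component mapping inputs to outputs) can be made concrete in a brief, purely illustrative Python sketch. Every name below is our own assumption for exposition, not anything the order mandates.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class AIModel:
    """Per the order: a component that produces outputs from a given set
    of inputs using computational, statistical, or ML techniques."""
    name: str
    produce_output: Callable[[list], list]  # inputs -> outputs

@dataclass
class AISystem:
    """Per the order: any data system, software, hardware, application,
    tool, or utility operating in whole or in part using AI."""
    description: str
    models: List[AIModel] = field(default_factory=list)

# A trivial "model" that recommends the most frequent item it has seen.
freq_model = AIModel(
    name="majority-recommender",
    produce_output=lambda xs: [max(set(xs), key=xs.count)],
)
tool = AISystem(description="demo recommendation utility", models=[freq_model])
print(tool.models[0].produce_output(["a", "b", "a"]))  # -> ['a']
```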
Transparency, explainability and fairness
While notable elements of transparency, explainability and fairness are present, directly or indirectly, in other sections of the order, Section 7 and Section 8 delve into these areas of particular interest in the greatest detail, over and above the guiding principles and policies discussed earlier, given their pronounced pertinence to human, consumer, and fundamental rights (Jain, 2024).
Section 7 (Advancing Equity and Civil Rights) provides guidance predominantly on bias and discrimination from an AI perspective. This is in the context of varied rights, including those related to the dispensation of criminal justice and to government benefits and programs, and in the context of the broader economy: specifically, where AI-driven decision making concerns disabilities, hiring, housing, consumer financial markets, or tenant screening, among others (Biden Jr., 2023).[4]
Section 8 (Protecting Consumers, Patients, Passengers, and Students) sets out, through the lens of AI, direction and principles relating to healthcare, public health, and human services, and clarifies facets of bias and discrimination in such contexts. It also details guidance on transportation, education, and communication insofar as AI is concerned (Biden Jr., 2023).[5]
Disparities and parities vis-à-vis the UK and EU
Unlike the UK, and like the EU, the order maps out explicit definitions for AI, as highlighted earlier (Jain, 2024). The order is phrased largely in the context of the US and its applicability is for the most part confined to the US, but, as with both the UK and EU, instances exist where international applicability comes into play (Jain, 2024). Notably, however, the onus for implementing the order is largely laid upon existing regulatory bodies, as in the UK, albeit with the distinction that some existing US bodies (for example, TechCongress) have AI mostly, if not entirely, within their remits. In the latter respect, the US approach is more similar to that of the EU, and is perhaps most accurately described as a combination of the two (Jain, 2024).
Insofar as fairness, explainability and transparency are concerned, US lawmakers place a holistic emphasis on them across several distinct considerations; in this, the approach is more akin to that of the EU. As for caveats and advantages, the comparison between the US and the UK broadly parallels the contrast between the EU and the UK. Specifically, the US approach, being stricter and more bureaucratic, will necessitate expending significantly more compliance time, cost, and effort. However, such regulatory guidelines have a stronger ethical grounding, potentially safeguarding the best interests of relevant stakeholders and guarding against dark innovation, bad actors, reputational damage, and insidious misuse (Jain, 2024). Lastly, as seen for the EU and UK (Jain, 2024), fairness, explainability, and transparency once again come to the fore as key considerations in regulating AI within the order. They are ubiquitous in the US approach, as evidenced above, underlining their importance and salience in lawmakers’ minds.
Future topics
Expounding upon and assessing the evolution of this regulatory space may be compelling subjects for future articles, as they could hold manifold implications for explainability, transparency and fairness. Further iterations or final versions of specific draft guidance (referenced in footnotes earlier in this piece) created in response to this order could be analysed in further detail (for instance, see here), and comparisons with other similar frameworks (for instance, see here) may be of interest.
References
Biden Jr., J. R. (2023, October 30). Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Retrieved from The White House’s Official Website – Briefing Room – Presidential Actions: https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
Jain, K. (2024, April 03). How transparency, explainability and fairness are being connected under UK and EU approaches to AI regulation. Retrieved from FinTech Scotland: https://www.fintechscotland.com/how-transparency-explainability-and-fairness-are-being-connected-under-uk-and-eu-approaches-to-ai-regulation/
Image created by OpenAI’s DALL·E, based on an article summary provided by ChatGPT.
How transparency, explainability and fairness are being connected under UK and EU approaches to AI regulation
Article written by Kushagra Jain, research associate for the Financial Regulation Innovation Lab and scholar at the Michael Smurfit Graduate Business School, University College Dublin, Dublin, Ireland.
Introduction and global perspective
Rapid and continuing advances in artificial intelligence (AI) have had profound implications, and these have reshaped, and will continue to reshape, our world. Regulators have responded responsibly and proactively to these paradigm shifts, and have begun to put in place regimes to govern the use of AI.
Global collaboration is taking place in developing these frameworks and policies. For instance, an AI Safety Summit was held in the UK in November 2023, with participants from 28 nations representing the EU, US, Asia, Africa, and the Middle East. Its aim was to mitigate the “frontier” risks of AI development through internationally coordinated action. At the summit, the necessity of collaboratively testing next-generation AI models against critical national security, safety and societal concerns was identified, alongside the need to develop a report building international consensus on both risks and capabilities. Two further summits, planned in the next 6 and 12 months respectively, are expected to continue these topical and crucial global dialogues, perhaps building on the first summit’s key insights and realisations.[1]
The UK’s pro-innovation regulation policy paper similarly emphasises continued work with international partners to deliver interoperability, and hopes to incentivise the responsible design, development, and application of AI. The paper aims for the UK’s AI innovation environment to be seen as the most attractive in the world, and to achieve this it seeks to ensure international compatibility between approaches, which would in turn attract international investment and encourage exports (Secretary of State for Science, 2023).[2] Notably, however, different regions have taken distinct approaches to the regulation applicable within their jurisdictions.
Distinctions between the EU and UK approaches
Broadly, the draft EU Artificial Intelligence Act seeks to codify a risk-based approach within its legislative framework. The framework categorises unacceptable, high, and low risks that threaten user safety, human safety, and fundamental rights, and it institutes a new AI regulator (Yaros et al., 2023, Yaros et al., 2021). In contrast, the UK’s approach, outlined in its AI Regulatory Policy Paper and AI White Paper, generally espouses being iterative, agile and context dependent; it is designed to make responsible innovation easier, with existing regulators responsible for its implementation (Secretary of State for Science, 2023, Prinsley et al., 2023, Yaros et al., 2022).
Another key distinction demarcates the two. In the UK’s case, no all-encompassing definition of what constitutes “AI” or an “AI system” exists; AI is instead framed in terms of autonomy and adaptivity, the objective being to ensure the framework’s continued relevance as new technologies emerge. Legal ambiguity is inherent in such an approach; however, individual regulator guidance is expected to resolve this within each regulator’s remit (Prinsley et al., 2023, Yaros et al., 2022).
The EU legislation would apply to all providers of AI systems in the EU, and also to users and providers of AI systems whose output is utilised in the EU, regardless of where they are domiciled. It is envisioned as a civil liability regime that redresses AI-related problems and risks without unduly constraining or hindering technological development; maintaining both excellence and trust in AI technology are its dual targets (Yaros et al., 2023, Yaros et al., 2021).
Conversely, the UK regulation applies to the whole of the UK, though it is also territorially relevant beyond the UK in terms of enforcement and the applicability of guidance. Initially, it is on a non-statutory footing, the rationale being that imposing a statutory duty straightaway could create obstacles for innovativeness and businesses, and could impede rapid and commensurate responses. During this transitory period, existing regulators’ domain expertise is relied upon for implementation. The eventual intention is to assess whether a statutory duty needs to be imposed, to further strengthen regulator mandates for implementation, and to allow regulators the flexibility to exercise judgment in applying the principles. Over and above these, coordination through central support functions for regulators is envisaged, with innovation-friendly yet effective and proportionate risk responses as their desired outcome. These functions would sit within government while leveraging expertise and activities more broadly across the economy, and they would be complemented and aligned through voluntary guidance and technical standards. Assurance techniques would similarly be deployed, alongside trustworthy AI tools, whose use would be encouraged (Secretary of State for Science, 2023, Prinsley et al., 2023).
Shared focus on fairness, transparency and explainability
In spite of their varied approaches, the EU and UK share an emphasis on aspects such as fairness, transparency, and explainability, which are of particular interest owing to their human, consumer, and fundamental rights implications. For the UK, this emphasis is apparent from two of the white paper’s five broad cross-sectoral principles (Secretary of State for Science, 2023, Prinsley et al., 2023, Yaros et al., 2022):
- Appropriate transparency and explainability: AI systems should exhibit these traits, with their decision-making processes accessible to relevant parties, to ensure heightened public trust, which non-trivially drives AI adoption. The white paper acknowledges that it remains to be discovered how relevant parties may be encouraged to implement appropriate transparency measures.
- Fairness: Overall, this involves AI systems avoiding unfair discrimination, unfair outcomes, and the undermining of individual and organisational rights. It is understood that developing and publishing appropriate definitions and illustrations of fairness for AI systems may become a necessity for regulators within their domains.
This was also encapsulated in the UK’s earlier AI Regulation Policy Paper as follows (Yaros et al., 2022):
- Appropriately transparent and explainable AI. AI systems may not always be meaningfully explainable. While unexplainable decisions are largely unlikely to pose substantial risk, in specific high-risk cases they may be prohibited by relevant regulators (e.g., a tribunal may rule against the use of an AI system where a lack of explainability would deprive an individual of the right to challenge the resulting decision).
- Fairness considerations embedded into AI. Regulators should define “fairness” within their domain or sector, and ought to outline the relevance of fairness considerations (e.g., for job applications); a minimal sketch of one possible fairness definition follows this list.
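Picking up the fairness point above, the sketch below illustrates one definition a regulator might publish for its domain: demographic parity of positive outcomes across groups, within a tolerance. The metric choice, threshold, and toy data are our own assumptions for illustration, not drawn from the UK papers.

```python
def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates between any two groups.

    decisions: list of 0/1 outcomes (e.g., 1 = job application shortlisted)
    groups:    list of group labels, aligned with decisions
    """
    totals = {}
    for d, g in zip(decisions, groups):
        pos, n = totals.get(g, (0, 0))
        totals[g] = (pos + d, n + 1)
    rates = [pos / n for pos, n in totals.values()]
    return max(rates) - min(rates)

# Toy data: group A is shortlisted at a rate of 0.75, group B at 0.25.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)
print(f"parity gap: {gap:.2f}")  # 0.50; a regulator might cap this at, say, 0.10
```

Demographic parity is only one of several competing fairness definitions (equalised odds and calibration are others), which is precisely why the white paper leaves the choice to individual regulators within their domains.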
In contrast, for the EU, this takes the following shape as encoded in the legislation (Yaros et al., 2023, Yaros et al., 2021):
- Systems that interface directly with humans (such as chatbots) are of limited risk and acceptable if they comply with certain transparency obligations; put differently, end users must be made aware that they are interacting with a machine. For foundation models,[3] the preparation of intelligible instructions and extensive technical documentation may fall into the explainability and transparency bucket, enabling downstream providers to comply with their respective obligations.
- Prohibition of practices such as social scoring or systems exploiting the vulnerabilities of specific groups of persons. These are termed unacceptable risks and can be considered linked to fairness. For foundation models, this may be framed as incorporating only datasets subject to appropriate data governance measures, such as measures addressing data suitability and potential biases. Fairness may also take the form of context-specific fundamental rights impact assessments, which would bear the context of use in mind before high-risk AI systems are deployed. More dystopian possibilities exist that could irreparably harm fairness; such scenarios are avoided through outright bans on certain systems, including those performing indiscriminate scraping of databases, biometric categorisation by sensitive characteristics, real-time biometric identification, emotion recognition, face recognition, and predictive policing. A minimal sketch of how this risk tiering might be encoded follows.
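To make the tiering concrete, the sketch below encodes the risk categories described above as a simple lookup. The tier names, example practices, and obligations are simplified assumptions based on this summary; the Act’s actual annexes and definitions are far more detailed.

```python
# Illustrative risk tiers loosely following the draft EU AI Act's
# categorisation; the practice labels are our own simplified examples.
UNACCEPTABLE = {"social_scoring", "exploiting_group_vulnerabilities",
                "indiscriminate_database_scraping", "predictive_policing"}
HIGH = {"hiring_screening", "credit_scoring", "tenant_screening"}
LIMITED = {"chatbot"}  # permitted, subject to transparency obligations

def risk_tier(practice: str) -> str:
    """Map an AI practice to its (simplified) regulatory consequence."""
    if practice in UNACCEPTABLE:
        return "unacceptable: prohibited outright"
    if practice in HIGH:
        return "high: assessments and data governance before deployment"
    if practice in LIMITED:
        return "limited: disclose machine interaction to end users"
    return "minimal: no additional obligations assumed here"

for p in ("social_scoring", "chatbot", "hiring_screening", "spam_filter"):
    print(f"{p} -> {risk_tier(p)}")
```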
Conclusions and future topics
In conclusion, both the EU’s and the UK’s paths to regulating AI innovation carry merits and demerits. The EU’s approach may be perceived as more bureaucratic: owing to its stricter compliance requirements, anyone to whom it applies must expend significantly more time, cost, and effort to ensure they do not fall foul of its regulatory guidelines.
That being said, its stronger ethical grounding safeguards the best interests of relevant stakeholders. In a similar vein to the GDPR, it may serve as a blueprint for future AI regulations adopted by other countries around the world. Coupled with the EU’s new rules on machinery products, which ensure that new generations of machinery guarantee user and consumer safety, it forms a very comprehensive legal framework (Yaros et al., 2023, Yaros et al., 2021).
On the other hand, the UK’s approach has received acclaim from industry for its pragmatism and measured stance. The UK Science and Technology Framework singles out AI as one of five critical technologies in the government’s strategic vision, and the need to establish such regulation was highlighted by Sir Patrick Vallance in his Regulation for Innovation review. The AI Regulation Policy Paper and White Paper were penned in response to these factors. The regulation’s ability to learn from experience while flexibly and continuously adopting best practices should catalyse industry innovation (Secretary of State for Science, 2023, Intellectual Property Office, 2023).
Nonetheless, a dark side of innovation may also manifest as a consequence. If not handled rigorously, bad actors proliferating and exploiting the lack of statutory regulatory oversight may cause reputational damage to the UK insofar as AI is concerned. This is especially pertinent in insidious cases, such as those illustrated earlier by the AI systems banned under EU law.
Despite significant differences between the EU’s and UK’s approaches, commonalities exist in pivotal regulatory priorities such as transparency, explainability and fairness. Blended pro-innovation and risk-based regulatory approaches might achieve the best results for these priorities; the right blend can be ascertained from how efficacious each approach proves in achieving its goals over time, given the context of its application.
Given the systemic importance of the US in shaping the global economic landscape, it may be interesting to explore its approach to AI regulation in a future blog. In particular, investigating how transparency, explainability and fairness are dealt with in contrast with the EU, and juxtaposed against the UK, might shed new light on how AI regulation should evolve (Prinsley et al., 2023, Yaros et al., 2022, Yaros et al., 2021) at the dawn of what may one day be called the AI age in human history.
References
Intellectual Property Office (2023, 06 29). Guidance: The government’s code of practice on copyright and AI. Retrieved from: https://www.gov.uk/guidance/the-governments-code-of-practice-on-copyright-and-ai
Prinsley, Mark A. and Yaros, Oliver and Randall, Reece and Hadja, Ondrej and Hepworth, Ellen (2023, 07 07). Mayer Brown: UK’s Approach to Regulating the Use of Artificial Intelligence. Retrieved from: https://www.mayerbrown.com/en/perspectives-events/publications/2023/07/uks-approach-to-regulating-the-use-of-artificial-intelligence
Secretary of State for Science, Innovation & Technology (2023, 08 03). Policy paper: A pro-innovation approach to AI regulation. Retrieved from: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper
Yaros, Oliver and Bruder, Ana Hadnes and Leipzig, Dominique Shelton and Wolf, Livia Crepaldi and Hadja, Ondrej and Peters Salome (2023, 06 16). Mayer Brown: European Parliament Reaches Agreement on its Version of the Proposed EU Artificial Intelligence Act. Retrieved from Mayer Brown: https://www.mayerbrown.com/en/perspectives-events/publications/2023/06/european-parliament-reaches-agreement-on-its-version-of-the-proposed–eu-artificial-intelligence-act
Yaros, Oliver and Bruder, Ana Hadnes and Hadja, Ondrej (2021, 05 05). Mayer Brown: The European Union Proposes New Legal Framework for Artificial Intelligence. Retrieved from Mayer Brown: https://www.mayerbrown.com/en/perspectives-events/publications/2021/05/the-european-union-proposes-new-legal-framework-for-artificial-intelligence
Yaros, Oliver and Hadja, Ondrej and Prinsley, Mark A. and Randall, Reece and Hepworth, Ellen (2022, 08 17). Mayer Brown: UK Government proposes a new approach to regulating artificial intelligence (AI). Retrieved from Mayer Brown: https://www.mayerbrown.com/en/perspectives-events/publications/2022/08/uk-government-proposes-a-new-approach-to-regulating-artificial-intelligence-ai
About the author
Kushagra Jain is a Research Associate at the Financial Regulation Innovation Lab (FRIL), University of Strathclyde. His research interests include artificial intelligence, machine learning, financial/regulatory technology, textual analysis, international finance, and risk management, among others. He was awarded doctoral scholarships from the Financial Mathematics and Computation Cluster (FMCC), Science Foundation Ireland (SFI), Higher Education Authority (HEA) and Michael Smurfit Graduate Business School, University College Dublin (UCD). Previously, he worked within wealth management and as a statutory auditor. He completed his doctoral studies in Finance from UCD in 2023, and obtained his MSc in Finance from UCD, his Accounting Technician accreditation from the Institute of Chartered Accountants of India and his undergraduate degree from Bangalore University. He was formerly FMCC Database Management Group Data Manager, Research Assistant, PhD Representative and Teaching Assistant for undergraduate, graduate and MBA programmes.
[1] These details, and further information can be found here, here, and here.
[2] This information and further context can be found here.
[3] AI systems adaptable to a wide range of distinctive tasks, designed for output generality, and trained on broad data at scale.
Photo by Tara Winstead: https://www.pexels.com/photo/robot-pointing-on-a-wall-8386440/