Critique of the UK’s pro-innovation approach to AI regulation and implications for financial regulation innovation

Article written by Daniel Dao – Research Associate at the Financial Regulation Innovation Lab (FRIL), University of Strathclyde.


Artificial intelligence (AI) is now widely recognised as a pivotal technological advancement with the capacity to profoundly reshape societal dynamics. It is celebrated for its potential to enhance public services, create high-quality employment opportunities, and power the future economy. However, there remains notable opacity regarding the threats it may pose to life, security, and related domains, which calls for a proactive approach to regulation. To address this gap, the UK Government has released an AI white paper outlining its pro-innovation approach to regulating AI. While the white paper represents a genuine effort to provide innovative and dynamic solutions to the significant challenges posed by AI, it retains certain limitations that subsequent iterations may refine.

The UK Government's regulatory framework for AI is underpinned by five principles intended to guide and inform the responsible development and use of AI across all sectors of the economy: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. The pro-innovation approach outlined in the white paper proposes a nuanced framework that reconciles the trade-off between risk and technological adoption. While the framework endeavours to identify and mitigate potential risks associated with AI, it also acknowledges that stringent regulation could slow the pace of AI adoption. Instead of prescribing regulations tailored to specific technologies, the document advocates a context-based, proportionate approach: a balancing act in which genuine risks are weighed against the opportunities and benefits that AI stands to offer. Moreover, the white paper advocates an agile, iterative regulatory methodology, whereby insights drawn from experience and from the evolving technological landscape inform the ongoing development of a responsive regulatory framework. Overall, the white paper presents an initial, standardised approach that holds promise for managing AI risks effectively while promoting collaborative engagement among governmental bodies, regulatory authorities, industry stakeholders, and civil society.

However, notwithstanding these advantages and potential contributions, certain limitations are to be expected in an inaugural document addressing a phenomenon as complex as AI. Firstly, while the white paper offers extensive commentary on AI risks, its overarching orientation centres on promoting AI through "soft law" and deregulation. The white paper appears to support AI development with various flexibilities rather than prescribing stringent policies to mitigate AI risks, which raises questions about whether the intended balance has genuinely been struck. The soft-law mechanism hinges primarily on voluntary compliance and commitment: without legal force, there is a risk that firms may not fully adhere to their commitments or may implement them only partially.

Ambiguity is another critical issue with the soft-law mechanism. The framework proposed in the white paper lacks detailed regulatory provisions. While the document espouses an "innovative approach" with promising prospects, its open-ended nature leaves industries and individuals to speculate about the actions required of them, raising the potential for inconsistent implementation and adoption in practice. Firms lack a systematic, step-by-step process and precise mechanisms to navigate the various developmental stages. Crafting stringent guidelines for AI poses considerable challenges, yet such guidelines must be implemented with clarity and rigour if they are to complement the innovative approach effectively.

A further concern is that the iterative, proportionate approach advocated may inadvertently lead to "regulation lag", whereby regulatory responses are triggered only in the wake of significant AI-related losses or harms rather than being proactive. This underscores the need for a clear distinction between leading and lagging regulatory regimes, with leading regulations anticipating potential AI risks and establishing regulatory guidelines proactively.

Acknowledging both the notable potential and the constraints of the AI white paper, we have identified several implications for innovation in financial regulation. The deployment of AI holds promise for revolutionising various facets of financial regulation, including bolstering risk management and ensuring regulatory compliance. The pro-innovation approach could offer firms certain advantages, such as flexibility and closer cooperation and collaboration among stakeholders when addressing complicated cases.

As discussed above, to make financial regulation effective, government authorities may consider revising and developing several key points. Given the opaque nature of AI-generated outcomes, it is imperative to apply and develop advanced techniques, such as Explainable AI (XAI), to support decision-making processes and mitigate latent risks. Additionally, while regulators may opt for an iterative approach to rule-setting that accommodates contextual nuances, it is imperative to establish robust and transparent ethical guidelines to govern AI adoption responsibly. Such guidelines, categorised as "leading" regulations, should be developed in detail and collaboratively, engaging industry stakeholders, academic experts, and civil society, to ensure alignment with societal values and mitigate potential adverse impacts. Furthermore, it is essential to establish unequivocal "hard laws" for firms and to set out legal consequences for non-compliance. Such legal instruments are a valuable supplement to innovative "soft law" and contribute to maintaining equilibrium within the market.
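To make the XAI point concrete, the short sketch below illustrates one simple, model-agnostic explainability technique (permutation feature importance) applied to a hypothetical credit-scoring model. The synthetic dataset, feature names, and model choice are illustrative assumptions only, not part of the white paper or of any specific regulatory guidance.

# Illustrative sketch only: permutation feature importance, a simple model-agnostic
# explainability (XAI) technique. The synthetic credit-scoring data, feature names,
# and model choice are hypothetical assumptions for demonstration purposes.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for loan-application data (e.g. income, debt ratio).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "credit_history_length", "prior_defaults"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance measures how much predictive performance drops when each
# feature is shuffled, giving a per-feature view of what drives the model's decisions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")

Techniques of this kind give firms and supervisors an audit trail for individual models, which is one practical way the transparency and explainability principle could be operationalised in financial services.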


About the author

Daniel Dao is a Research Associate at the Financial Regulation Innovation Lab (FRIL), University of Strathclyde. He is also a Doctoral Researcher in Fintech at the Centre for Financial and Corporate Integrity, Coventry University, where his research focuses on fintech (crowdfunding), sustainable finance, and entrepreneurial finance. In addition, he works as an Economic Consultant at the World Bank Group headquarters in Washington, DC, where he has contributed to various policy publications and reports, including the World Development Report 2024, Country Economic Memoranda for Latin American and Caribbean countries, and policy working papers on labour, growth, and policy reform. He is a CFA Charterholder and an active member of CFA UK. He earned his MBA (2017) in Finance from Bangor University, UK, and his MSc (2022) in Financial Engineering from WorldQuant University, US. He has a strong commitment to international development and high-impact policy research, and his proficiency extends to data science techniques and advanced analytics, with a particular focus on artificial intelligence, machine learning, and natural language processing (NLP).


Photo by Markus Winkler: https://www.pexels.com/photo/a-typewriter-with-the-word-ethics-on-it-18510427/