Critique of the UK’s pro-innovation approach to AI regulation and implications for financial regulation innovation
Article written by Daniel Dao – Research Associate at the Financial Regulation Innovation Lab (FRIL), University of Strathclyde.
Artificial intelligence (AI) is now widely recognised as a pivotal technological advancement with the capacity to profoundly reshape societal dynamics. It is celebrated for its potential to enhance public services, create high-quality employment opportunities, and power the future. However, there remains notable opacity regarding the threats it poses to life, security, and related domains, which calls for a proactive approach to regulation. To address this gap, the UK Government has released an AI white paper outlining its pro-innovation approach to regulating AI. While the white paper represents a genuine effort to provide innovative and dynamic solutions to the significant challenge posed by AI, it has certain limitations that subsequent iterations may refine.
The UK Government's overall framework for AI regulation is underpinned by five principles to guide and inform the responsible development and use of AI across all sectors of the economy: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. The pro-innovation approach outlined in the UK Government's AI white paper proposes a nuanced framework reconciling the trade-off between risk and technological adoption. While the regulatory framework endeavours to identify and mitigate potential risks associated with AI, it also acknowledges that stringent regulations could impede the pace of AI adoption. Instead of prescribing regulations tailored to specific technologies, the document advocates a context-based, proportionate approach. This entails a delicate balancing act, wherein genuine risks are weighed against the opportunities and benefits that AI stands to offer. Moreover, the white paper advocates an agile and iterative regulatory methodology, whereby insights from experience with an evolving technological landscape inform the ongoing development of a responsive regulatory framework. Overall, the white paper presents an initial standardised approach that holds promise for effectively managing AI risks while promoting collaborative engagement among governmental bodies, regulatory authorities, industry stakeholders, and civil society.
However, notwithstanding its numerous advantages and potential contributions, certain limitations are common to inaugural documents addressing phenomena as complex as AI. First, while the white paper offers extensive commentary on AI risks, its overarching thematic orientation predominantly centres on promoting AI through "soft law" and deregulation. The white paper appears to support AI development with various flexibilities rather than prescribing stringent policies to mitigate AI risks, raising questions about where the balance lies. The soft-law mechanism hinges primarily on voluntary compliance and commitment: without legal force, firms may not fully adhere to their commitments, or may implement them only partially.
Ambiguity is another critical issue with the soft-law mechanism. The framework proposed in the white paper lacks detailed regulatory provisions. While the document espouses an "innovative approach" with promising prospects, its open-ended nature leaves industries and individuals to speculate about the actions required, raising the potential for inconsistency in practical implementation and adoption. Firms lack a systematic, step-by-step process and precise mechanisms to navigate the various developmental stages. Crafting stringent guidelines for AI poses considerable challenges, yet such guidelines must be implemented with clarity and rigour to complement existing innovative approaches effectively.
A further concern is that the iterative and proportionate approach advocated may inadvertently lead to "regulation lag," whereby regulatory responses are triggered only in the wake of significant AI-related losses or harms, rather than proactively. This underscores the necessity for a clear distinction between leading and lagging regulatory regimes, with leading regulations anticipating potential AI risks to establish regulatory guidelines proactively.
Acknowledging the notable potential and inherent constraints outlined in the AI white paper, we have identified several implications for innovation in financial regulation. The deployment of AI holds promise in revolutionising various facets of financial regulation, including bolstering risk management and ensuring regulatory compliance. The innovative approach could offer certain advantages to firms such as flexibility, cooperation, and collaboration among stakeholders to address complicated cases.
As discussed above, to make financial regulation effective, government authorities may consider revising and developing several key points. Given the opaque nature of AI-generated outcomes, it is imperative to apply and develop advanced techniques, such as Explainable AI (XAI), to support decision-making processes and mitigate latent risks. Additionally, while regulators may opt for an iterative approach to rule-setting to accommodate contextual nuances, robust and transparent ethical guidelines are needed to govern AI adoption responsibly. Such guidelines, categorised as "leading" regulations, should be developed in detail and collaboratively, engaging industry stakeholders, academic experts, and civil society, to ensure alignment with societal values and mitigate potential adverse impacts. Furthermore, it is essential to establish unequivocal "hard laws" for firms, with legal consequences for non-compliance. These legal instruments are valuable supplements to innovative soft law and help maintain equilibrium within the market.
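To make the XAI suggestion concrete, the sketch below probes an opaque credit-decision model with a model-agnostic explainability technique, permutation importance, standing in for richer XAI tooling such as SHAP or LIME. It is a minimal illustration assuming Python with scikit-learn; the features and data are hypothetical and are not drawn from any regulatory guidance.

```python
# Minimal sketch: probing an opaque credit-decision model with a
# model-agnostic explainability technique (permutation importance).
# Features and data are hypothetical placeholders for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "months_on_book", "missed_payments"]
X = rng.normal(size=(500, len(features)))
# Synthetic ground truth: default risk driven by debt ratio and arrears.
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Shuffling one feature at a time and measuring the accuracy drop reveals
# which inputs the model actually relies on, giving reviewers evidence to
# weigh when a credit decision is contested.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```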
About the author
Daniel Dao is a Research Associate at the Financial Regulation Innovation Lab (FRIL), University of Strathclyde. He is also a Doctoral Researcher in Fintech at the Centre for Financial and Corporate Integrity, Coventry University, where his research focuses on fintech (crowdfunding), sustainable finance and entrepreneurial finance. In addition, he works as an Economic Consultant at the World Bank Group's Washington DC headquarters, where he has contributed to various policy publications and reports, including the World Development Report 2024, Country Economic Memorandums for Latin American and Caribbean countries, and policy working papers on labour, growth, and policy reforms. Regarding professional qualifications and networks, he is a CFA Charterholder and an active member of CFA UK. He earned his MBA (2017) in Finance from Bangor University, UK, and his MSc (2022) in Financial Engineering from WorldQuant University, US. He has shown a strong commitment and passion for international development and high-impact policy research. His proficiency extends to data science techniques and advanced analytics, with a specific focus on artificial intelligence, machine learning, and natural language processing (NLP).
Photo by Markus Winkler: https://www.pexels.com/photo/a-typewriter-with-the-word-ethics-on-it-18510427/
Simplifying Compliance with AI
Season 4, episode 2
Listen to the full episode here.
In this episode, we explore the role of Artificial Intelligence (AI) in streamlining compliance within the financial sector, showcasing the Financial Regulation Innovation Lab.
We discuss the future of financial compliance, enriched by AI’s capability to automate and innovate. This episode is for anyone interested in the intersection of technology, finance, and regulation, offering insights into the collaborative efforts shaping a more compliant and efficient financial landscape.
Guests:
- Antony Brookes – Head of UK Investment Compliance, abrdn
- Mark Cummins – Professor of Financial Technology at University of Strathclyde
- Joanne Seagrave – Head of Regulatory Affairs at Tesco Bank
Generative AI in the Context of UK/EU Regulation
The UK Government released its AI White Paper on March 29, 2023, outlining its plans for overseeing the implementation of artificial intelligence (AI) in the United Kingdom. The White Paper is a follow-up to the AI Regulation Policy Paper, which outlined the UK Government’s vision for a future AI regulatory system in the United Kingdom that is supportive of innovation and tailored to specific contexts.
The White Paper presents an alternative methodology for regulating AI in contrast to the EU’s AI Act. Rather than enacting comprehensive legislation to govern AI in the United Kingdom, the UK Government is prioritising the establishment of guidelines for the development and utilisation of AI. Additionally, it aims to enhance the authority of existing regulatory bodies such as the Information Commissioner’s Office (ICO), the Financial Conduct Authority (FCA), and the Competition and Markets Authority (CMA) to provide guidance and oversee the use of AI within their respective domains.
What is the key information from the UK White Paper and EU Legal Framework for Artificial Intelligence?
In contrast to the proposed EU AI Act, the AI White Paper does not put forward a comprehensive definition of the terms "AI" or "AI system" as intended by the UK Government. The White Paper instead defines AI by two key attributes – adaptivity and autonomy – in order to ensure that the proposed regulatory framework remains relevant and effective in the face of emerging technology. Although the absence of a clear-cut definition of AI may cause legal ambiguity, it will be the responsibility of the various regulators to provide instructions to firms, outlining their requirements for the use of AI within their jurisdiction.
The regulatory framework outlined in the AI White Paper, put forth by the UK Government, encompasses the entirety of the United Kingdom. The White Paper does not suggest altering the territorial scope of current UK legislation pertaining to AI. Essentially, this implies that if the current laws regarding the use of AI have jurisdiction outside national borders (like the UK General Data Protection Regulation), the instructions and enforcement by existing regulatory bodies may also apply outside of the United Kingdom. For a comparison of the UK and EU approaches to AI regulation, see Table 1 in the Appendix.
What will be the impact of the EU AI Act on the UK?
Once the EU AI Act is implemented, it will apply to UK firms that utilise AI systems within the EU, make them available on the EU market, or participate in any other activity regulated by the Act. These UK organisations must ensure that their AI systems are compliant, or else they may face financial penalties and damage to their brand.
Nevertheless, the AI Act may have broader ramifications, perhaps causing a ripple effect for UK firms only operating within the UK market. The AI Act is expected to establish a global benchmark in this domain, much like the General Data Protection Regulation (GDPR) has done for data protection. There are two possible implications: firstly, UK companies that actively adopt and adhere to the AI Act can distinguish themselves in the UK market, attracting customers who value ethical and responsible AI solutions; secondly, as the AI Act becomes a benchmark, we may witness the UK’s domestic regulations aligning with the AI Act in order to achieve consistency.
Moreover, the EU AI Act is a crucial legislative measure that promotes voluntary adherence, even for companies that may not initially be subject to its provisions (as emphasised in Article 69, which pertains to Codes of Conduct). Consequently, the Act is expected to have an effect on UK companies, especially those that provide AI services in the EU and utilise AI technologies to deliver their services within the region. It is essential to remember that numerous UK enterprises have a market presence that extends well beyond the borders of the UK, therefore making the EU AI Act very pertinent to them.
How does the United Kingdom’s approach compare to those of other countries?
The UK Government is charting its own course on AI implementation, with the objective of establishing regulations for AI that promote innovation while safeguarding the rights and interests of individuals. The AI White Paper incorporates several ideas that align with the European Union's position on artificial intelligence. As an illustration, the Government plans to establish a novel regulatory framework for AI systems that pose substantial risks. Additionally, it intends to mandate that enterprises perform risk evaluations before utilising AI tools. This requirement is logical, particularly when the AI tool handles personal data, as data protection by design and by default are important tenets of the UK GDPR. Nevertheless, the AI White Paper specifies that these ideas will not be implemented by legislation, at least not at first. Thus, the level of acceptance, and the impact of the voluntary nature of these principles on their adoption by organisations throughout the UK, remain uncertain.
The UK Government has expressed its ambition to become a dominant force in the field of AI, taking the lead in establishing global regulations and standards to ensure the safe deployment of AI technology. As part of this endeavour, the UK hosted the AI Safety Summit in the autumn of 2023. Establishing a global agreement on AI regulation would help mitigate the negative consequences arising from emerging technology.
Nevertheless, the international community's record of coordinating regulation does not inspire confidence. Early social media legislation, influenced by certain technology companies, granted platforms legal immunity for hosting user-generated content, which created challenges in regulating online harms at a later stage. That error could be repeated with AI. Although both the Prime Minister and the EU Commission President have recently called for a counterpart to the Intergovernmental Panel on Climate Change, reaching a unified agreement on the climate change response has itself proven challenging due to conflicting national interests, similar to those observed in the context of artificial intelligence.
The UK’s present strategy for regulating AI differs from the EU’s proposed method outlined in the EU AI Act. The EU’s proposal involves implementing strict controls and transparency obligations for AI systems deemed “high risk,” while imposing less stringent standards for AI systems considered “limited risk.” The majority of general-purpose AI systems are considered to have a high level of risk. This means that there are specific rules that developers of foundational models must follow, and they are also required to provide detailed reports explaining how the models are trained.
Additionally, there exists a collaborative effort between the United States and the European Union to create a collection of non-binding rules for companies, known as the "AI Code of Conduct," in accordance with their shared plan for ensuring reliable and secure AI and mitigating its risks. The code of conduct will be taken forward via the Hiroshima Process at the G7 to foster global agreement on AI governance. If this endeavour succeeds, the UK's influence on the formulation of international AI regulations could be diminished. However, the publication of the AI Bill of Rights in the US in October 2022 may lead to a more principles-oriented approach in line with the United Kingdom's.
Despite these potential dangers, the UK is establishing itself as a nation where companies can create cutting-edge AI technology and perhaps become a global leader in this field. This could be beneficial provided that a suitable equilibrium can be achieved between innovation and the secure advancement of systems.
What will be the effect of the EU AI Act on UK companies utilising Generative AI?
Due to the increasing popularity and widespread influence of Generative AI and Large Language Models (LLMs) in 2023, the EU AI Act underwent significant modifications in June 2023, specifically addressing the utilisation of Generative AI.
Foundation models are a category of expansive machine learning models that form the fundamental framework for constructing a diverse array of artificial intelligence applications. These models have undergone pre-training using extensive datasets, which allows them to acquire knowledge and comprehension of intricate patterns, relationships, and structures present in the data. Developers can achieve impressive skills in natural language processing, computer vision, and decision-making by refining foundation models for specific applications or domains. Some examples of foundation models include OpenAI’s ChatGPT, Google’s BERT, and PaLM-2. Foundation models have been essential in the advancement of sophisticated AI applications in diverse industries, owing to their versatility and adaptability.
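As a minimal illustration of this "pre-train once, adapt many times" pattern, the snippet below loads a small pre-trained model through the Hugging Face transformers library; the library and its default model are illustrative choices of the author of this sketch, not something prescribed by the text above.

```python
# Minimal sketch of reusing a pre-trained foundation model for a
# specific downstream task via the Hugging Face transformers library.
from transformers import pipeline

# The pipeline downloads a small pre-trained model and reuses its
# general language knowledge for sentiment classification.
classifier = pipeline("sentiment-analysis")
print(classifier("The new compliance workflow is remarkably smooth."))

# Fine-tuning the same base model on domain data would specialise it
# further, e.g. for triaging regulatory documents.
```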
Companies now engaged in developing applications that utilise Generative AI Large Language Models (LLMs) and comparable AI technologies, such as ChatGPT, Google Bard, Anthropic's Claude, and Microsoft's Bing Chat or 'Bing AI', must carefully consider the consequences of the EU AI Act. These companies should be cognisant of the potential ramifications of the Act on their operations and proactively take measures to ensure adherence, irrespective of whether they are specifically targeted by the legislation. By doing so, they can remain at the forefront and sustain a robust presence in the ever-changing AI landscape.
Companies utilising these AI tools and ‘foundation models’ to provide their services must carefully assess and handle risks in accordance with Article 28b, and adhere to the transparency requirements outlined in Article 52 (1).
The primary objective of the EU AI Act is to establish a benchmark for ensuring AI safety, ethics, and responsible utilisation, while also enforcing requirements for openness and responsibility. Article 52 (3) of the EU AI Act, as revised in June 2023, establishes certain requirements on the utilisation of Generative AI.
In conclusion
Regulating AI in all its forms is a daunting and pressing task, but an essential one. Amidst the prevalent and rapidly increasing acceptance of AI, regulations must guarantee the reliability of AI systems, minimise AI-related risks, and establish mechanisms to hold accountable the individuals involved in the development, deployment, and utilisation of these technologies in case of failures and malpractice.
The UK’s involvement in this challenge is appreciated, as is its commitment to advancing the goal of AI governance on the global stage. The UK has the chance to establish itself as a thought leader in global AI governance by introducing a context-based, institutionally focused framework for regulating AI. This approach might potentially be adopted by other global jurisdictions as a standard. The emergence and rapid advancement of Generative AI places heightened responsibility on the UK to assume this thought leadership role.
APPENDIX
Table 1: Comparison between UK and EU: AI White Paper vs Legal Framework for Artificial Intelligence

Approach

UK:
1. Ensure the safe utilisation of AI: Safety is expected to be a fundamental concern in specific industries, such as healthcare or vital infrastructure. Nevertheless, the Policy Paper recommends that regulators adopt a context-dependent approach in assessing the probability of AI endangering safety, and a proportional strategy in mitigating this risk.
2. Ensure the technical security and proper functioning of AI: AI systems must possess robust technical security measures and operate according to their intended design and functionality. The Policy Paper proposes that AI systems undergo testing to assess their functionality, resilience, and security, taking into account context and proportionality considerations. Additionally, regulators are expected to establish the regulatory requirements for AI systems in their respective sectors or domains.
3. Ensure that AI is adequately transparent and explainable: The Policy Paper recognises that AI systems may not always be easily explicable and that, in most cases, this is unlikely to present significant risks. Nevertheless, it proposes that in specific high-risk circumstances, decisions that cannot be adequately justified may be disallowed by the appropriate regulatory body, for example where the absence of a clear explanation would prevent an individual from exercising their right to contest a tribunal's ruling.
4. Integrate fairness into AI: The Policy Paper suggests that regulators provide a clear definition of "fairness" within their specific sector or area and specify the circumstances in which fairness should be taken into account (such as in the context of job applications).
5. Establish accountability and governance: The Policy Paper asserts that legal persons must bear responsibility for AI governance, ensuring that they are held accountable for the results generated by AI systems and assume legal obligations. This responsibility applies to an identified or identifiable legal entity.
6. Elucidate pathways for seeking redress or challenging decisions: As stated in the Policy Paper, the use of AI should not eliminate the opportunity for individuals and groups to contest a decision if they have the right to do so outside the realm of AI. Hence, the UK Government will require regulators to guarantee that the results produced by AI systems can be challenged in "pertinent regulated circumstances".

EU:
1. The European Parliament ratified the EU AI Act on June 14, 2023.
2. European institutions will now commence negotiations to achieve consensus on the final document. Consequently, the earliest possible implementation of the EU AI Act would be in 2025, even if it is adopted promptly.
3. Jurisdictional scope: if implemented, the EU AI Act will enforce a series of responsibilities on both providers and deployers of AI systems that fall within its scope and are used within, or have an impact on, the EU, regardless of where they are based.
4. Broadening the ban on specific applications of AI systems to encompass remote biometric identification in publicly accessible areas, as well as emotion recognition and predictive policing technologies.
5. Extending the scope of high-risk AI systems to encompass systems employed for voter manipulation or utilised in recommender systems of very large online platforms (VLOPs).
6. Establishing regulations for providers of foundation models, which are AI systems trained on extensive data, designed to produce general outputs, and customisable for various specific purposes, including those that drive generative AI systems.
7. Prohibited risks, such as social scoring or systems that exploit vulnerabilities of specific groups of individuals, are considered unacceptable.
8. High-risk activities may be allowed, provided that they adhere strictly to requirements for conformity, documentation, data governance, design, and incident reporting obligations. These encompass systems utilised in civil aviation security, medical devices, and the administration and functioning of vital infrastructure.
9. Systems that directly engage with humans, such as chatbots, are allowed as long as they meet specific transparency requirements, including informing end-users that they are dealing with a machine and ensuring that the risk is limited.

Providers of foundation models would, among other obligations, be required to:

10. Provide evidence through suitable design, testing, and analysis that reasonably foreseeable risks have been correctly identified and minimised;
11. Utilise only datasets that adhere to proper data governance protocols for foundation models, ensuring that data sources are suitable and potential biases are taken into account;
12. Create and construct a model that attains appropriate levels of performance, predictability, interpretability, corrigibility, safety, and cybersecurity;
13. Generate comprehensive technical documentation and clear instructions for use that enable downstream providers to fulfil their obligations effectively;
14. Implement a quality management system to guarantee and record adherence to the aforementioned obligations;
15. Enrol the foundation model in a European Union database that will be maintained by the Commission.

In addition, the creators of foundation models utilised in generative AI systems would be required to openly acknowledge that content was generated by AI, guarantee that the system includes protective measures against generating content that violates European Union (EU) regulations, and provide a summary of the training data used that is protected by copyright law.

Regulators

UK: The Policy Paper designated the Information Commissioner's Office (ICO), the Competition and Markets Authority (CMA), Ofcom, the Medicines and Healthcare products Regulatory Agency (MHRA), and the Equality and Human Rights Commission (EHRC) as the principal regulators in its new system.

Note: Although several UK regulators and government agencies have initiated measures to promote the appropriate use of AI, the Policy Paper underscores the existing hurdles encountered by businesses, such as a dearth of transparency, redundancies, and incongruity among several regulatory bodies.

EU:
1. National competent authorities for supervising application and implementation.
2. A European Artificial Intelligence Board for coordination and advice.
About the Author
Dr. Hao Zhang is a Research Associate at the Financial Regulation Innovation Lab (FRIL), University of Strathclyde. He holds a PhD in Finance from the University of Glasgow, Adam Smith Business School. Hao held the position of Senior Project Manager at the Information Center of the Ministry of Industry and Information Technology (MIIT) of the People’s Republic of China. His recent research has focused on asset pricing, risk management, financial derivatives, intersection of technology and data science.
Photo by Kelly : https://www.pexels.com/photo/road-sign-with-information-inscription-placed-near-street-on-sunny-day-3861780/
Navigating the Tides of Regulatory Risk: Insights from Pinsent Masons’ April 2024 Edition
The April 2024 Edition of the Pinsent Masons’ Regulatory Risk Trends offers a deep dive into the current and emerging issues that are shaping the world of finance, legal compliance, and corporate governance. This comprehensive document, authored by leading experts in the field, serves as a very useful source of information for businesses, financial institutions, and legal professionals navigating the complex regulatory environment.
The report opens with thoughts from Jonathan Cavill, Partner at Pinsent Masons, who specialises in contentious regulatory and financial services disputes. His expertise sets the stage for an in-depth exploration of the regulatory challenges and opportunities that lie ahead.
Key takeaways
- Consumer Protection: The document highlights the Financial Conduct Authority’s (FCA) intensified focus on the fair treatment of customers, especially the vulnerable ones. With references to recent reviews and consultations, it stresses the importance of businesses aligning their practices with these standards.
- Fair Value and Insurance Sector Scrutiny: The FCA’s call for insurers to act upon the publication of the latest fair value data underscores a shift towards greater transparency and fairness in insurance pricing. The report examines the implications of these demands and offers strategies for compliance.
- Market Operations and Monetary Policy: Insights from Colin Read explore the Bank of England’s Sterling Monetary Framework (SMF) and its implications for market stability and liquidity. This section is crucial for understanding central bank reserves and the broader economic landscape.
- Advancements in Consumer Investments: Elizabeth Budd delves into the FCA’s strategy for consumer investments, emphasising the new Consumer Duty and its impact on financial advisers and investment firms. This represents a significant shift towards ensuring that consumer interests are at the heart of financial services.
- Innovation in Payment Systems: Andrew Barber’s commentary on the latest policy statements from the Bank of England provides a glimpse into how regulatory bodies are supporting payments innovation, particularly through the Real-Time Gross Settlement (RTGS) system. This is vital for fintech companies and traditional financial institutions alike.
- Fighting Financial Scams: The document doesn’t shy away from the darker side of finance, addressing the ongoing battle against scams. It presents a detailed analysis of recent cases and regulatory responses, offering valuable lessons and preventive strategies.
- Gender Equality: The Financial Services Compensation Scheme’s (FSCS) efforts in promoting gender equality within the financial sector are also covered. This initiative reflects a broader movement towards diversity and inclusion in finance, highlighting the societal values shaping regulatory agendas.
The Pinsent Masons’ April 2024 Edition of Regulatory Risk Trends is a roadmap for navigating the regulatory environment with confidence and foresight, giving you access to:
- Detailed analyses of regulatory developments and their implications for various sectors.
- Expert commentary from leading figures in law and finance.
- Strategic recommendations for staying ahead in a regulatory landscape marked by rapid change and increased scrutiny.
FCA Consumer Duty and Financial Inclusion: Does Artificial Intelligence Matter?
The Consumer Duty: What does it entail?
The Financial Conduct Authority (FCA) has recently issued the Consumer Duty Principle to guide financial services firms’ conduct in delivering good outcomes to their retail customers. The Consumer Duty is consumer-centric and outcome-oriented with the potential to bring about major transformation in the financial services industry.
The Consumer Duty is supported by three cross-cutting rules that require firms to:
- Act in good faith towards retail customers.
- Avoid causing foreseeable harm to retail customers.
- Enable and support retail customers to pursue their financial objectives.
The Consumer Duty is expected to help firms achieve the following outcomes:
- The first outcome relates to products and services, where products and services are designed to meet the needs of consumers.
- The second outcome relates to price and value, which inter alia focuses on ensuring that consumers receive fair value for goods and services.
- The third outcome seeks to promote consumer understanding through effective communication and information sharing. This is to ensure that consumers understand the nature and characteristics of products and services including potential risks.
- The fourth outcome relates to consumer support, where consumers are supported to derive maximum benefits from financial products and services.
What are the implications for financial inclusion?
The Consumer Duty has significant implications for financial inclusion. Financial inclusion refers to access to and usage of financial services. While access is the primary objective of financial inclusion, it does not always translate into usage due to several inhibiting factors, such as price, transaction costs, and service quality. Removing the bottlenecks that limit the usage of financial services is therefore indispensable in unlocking the full benefits of financial inclusion.
The Consumer Duty is expected to trigger behavioural changes among financial institutions leading to significant effects on financial inclusion. Financial institutions are compelled to comply with the Consumer Duty and the cross-cutting rules to avoid regulatory risks that may take the form of sanctions. This implies that consumers will now have access to products and services that are fit for purpose, receive fair value for goods and services purchased, have a better understanding of products and services, and receive the support needed to derive maximum benefits from financial services. In this scenario, financial wellbeing will improve leading to a reduction in poverty and income inequality.
In contrast, however, the Consumer Duty can serve as a disincentive to innovate, especially when the costs of compliance far outweigh the benefits, and this has significant implications for financial inclusion. Compliance costs may come in various forms, including recruitment or training of staff and updating existing software and systems or purchasing new ones. To reduce the risks of non-compliance, financial institutions may become reluctant to innovate, thereby limiting consumer choice. Firms can equally avoid providing services in areas, and to segments of the population, where the risk of non-compliance is high. In this case, vulnerable groups and areas are likely to be excluded from the provision of financial services (financial exclusion). These aspects of firms' behaviour are likely to be unobserved and subtle, making them difficult to detect.
Does Artificial Intelligence matter?
Financial institutions are likely to adopt regulatory technologies and Artificial Intelligence (AI) solutions to comply with the Consumer Duty. This is particularly true given that financial firms are in constant search of automation and AI solutions to drive down the costs of regulatory compliance. The deployment of Machine Learning (ML) and AI in Anti-Money Laundering (AML) systems is taking centre stage in the financial services industry. AI-powered AML systems hold great promise for helping financial services firms detect, in real time, suspicious activities that are likely to cause significant harm to consumers.
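As an illustrative sketch of this idea, the snippet below trains an unsupervised anomaly detector on synthetic transaction data and flags an unusual transaction for human review. The fields, values, and contamination rate are hypothetical; production AML systems layer many more signals and controls on top.

```python
# Illustrative sketch of ML-assisted AML monitoring: an unsupervised
# anomaly detector flags unusual transactions for analyst review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Columns: amount (GBP), hour of day, transactions in the past 24h.
normal = np.column_stack([
    rng.lognormal(3.5, 0.6, 1000),   # typical retail amounts
    rng.integers(8, 22, 1000),       # daytime activity
    rng.poisson(3, 1000),            # modest daily volume
])
suspicious = np.array([[9500.0, 3, 40]])  # large, 3am, rapid-fire

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
# predict returns -1 for outliers, which an analyst would investigate.
print(detector.predict(suspicious))  # [-1] -> flag for review
```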
AI can help financial firms deliver good outcomes to consumers at low cost, especially to those at risk of financial exclusion. AI and ML algorithms can equip financial firms with the capability to remotely onboard customers and conduct remote identification checks, thereby reducing costs. AI-powered solutions available to financial institutions during customer onboarding include, but are not limited to, real-time data access using open Application Programming Interfaces (APIs), image forensics, digital signature and verification, facial recognition, and video-based KYC (Know Your Customer). Remote customer onboarding simplifies the account opening process and reduces the costs and inconveniences associated with physical travel to bank branches, which can discourage financially excluded consumers from accessing financial services.
AI and Natural Language Processing (NLP) play significant roles in customer-facing functions. The use of chatbots has the prospect of enhancing customer experiences through rapid resolution of queries. Banks, for example, are moving from simple chatbot technologies to more advanced technologies, including Large Language Models and Generative AI, to enhance customer service. These advanced technologies facilitate communication between financial institutions and their customers.
AI and ML technologies also support automatic investment or financial advisory services. Robo-advisors use ML algorithms to automatically offer targeted investment or financial advice that is mostly done by human financial advisors. These technologies expand the provision of advisory services to a wide range of consumers including low-income consumers in a cost-effective manner.
AI and ML technologies offer financial institutions the potential to explore alternative sources of risk scoring using both structured and unstructured consumer data to predict their creditworthiness. The use of alternative sources of risk scoring has the potential to facilitate the provision of credit to consumers with limited credit history and low income.
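As a hedged sketch of the alternative-data idea, the snippet below fits a simple logistic model that scores a thin-file applicant from invented non-bureau signals. It illustrates the mechanics only; real scoring models face the data-quality, validation, and fairness requirements discussed in the next section.

```python
# Hypothetical sketch of alternative-data credit scoring: a logistic
# model scores thin-file applicants from invented non-bureau signals.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
# Alternative signals: share of on-time utility payments, months of
# mobile top-up history, gig-platform earnings volatility.
X = np.column_stack([
    rng.uniform(0.4, 1.0, 800),
    rng.integers(1, 60, 800),
    rng.uniform(0.0, 1.0, 800),
])
# Synthetic repayment outcome tied to the invented signals.
y = (X[:, 0] - 0.8 * X[:, 2] + rng.normal(0, 0.15, 800) > 0.45).astype(int)

scorer = LogisticRegression(max_iter=1000).fit(X, y)
applicant = [[0.92, 36, 0.2]]  # strong alternative-data history
print(f"repayment probability: {scorer.predict_proba(applicant)[0, 1]:.2f}")
```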
What are some of the challenges with AI?
Regulatory technologies such as AI hold great prospects for compliance, but their deployment comes with potential risks that can undermine the gains of financial inclusion. AI models, for example, are prone to embedded bias, especially when the underlying dataset discriminates against certain groups or persons, leading to differentiation in pricing and service quality. Bias in credit scoring algorithms can exclude vulnerable groups or regions from accessing loans, and even where access exists, such loans are likely to be offered at high interest rates owing to unfair credit scoring. Likewise, bias in the underlying datasets of chatbots and robo-advisors can lead to misinformation and cause significant harm to consumers. Data privacy concerns are on the increase, especially given that any leakage of the dataset used to train AI models can expose sensitive consumer information. AI and ML technologies are also not immune to cyber-attacks and technical glitches, which can disrupt their functionality and expose consumers to harm. These examples imply that regulatory technologies and AI models pose a non-compliance risk to the Consumer Duty, especially if they inhibit the delivery of good outcomes to consumers, for example through discrimination and data privacy breaches.
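To illustrate one basic check a firm might run, the snippet below computes approval rates by group on synthetic data and reports the gap, a simple demographic parity measure; the data, the deliberately biased scorer, and the threshold are all invented for illustration.

```python
# Illustrative bias check: compare approval rates across two groups to
# surface the kind of embedded bias described above. All data is synthetic.
import numpy as np

rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=1000)
# A deliberately biased scorer that rates group B systematically lower.
scores = rng.uniform(0, 1, 1000) - np.where(group == "B", 0.15, 0.0)
approved = scores > 0.5

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
print(f"approval rate A={rate_a:.2f}, B={rate_b:.2f}, "
      f"demographic parity gap={abs(rate_a - rate_b):.2f}")
# A persistent gap like this would warrant reviewing the training data
# and model before the system touches real lending decisions.
```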
What is the way forward?
The Consumer Duty is an important regulatory initiative with enormous potential to deepen financial inclusion and accelerate the positive contribution of financial inclusion to development. To achieve the objective of delivering good outcomes to consumers there is a need for constant engagement between the Financial Conduct Authority and stakeholders in the financial services industry. This will help timely identification and resolution of challenges that may arise during the implementation of the Consumer Duty.
While regulatory technologies and Artificial Intelligence are likely to play central roles in complying with the Consumer Duty there is the need for financial institutions to ensure that these technologies are themselves compliant with the Consumer Duty. This can be achieved by addressing the risks inherent in regulatory technologies and AI models. Senior managers of financial institutions are expected to play leading roles in mitigating the risk of non-compliance within the firm in line with the Senior Managers & Certification Regime.
About the Author(s)
Godsway Korku Tetteh is a Research Associate at the Financial Regulation Innovation Lab, University of Strathclyde (UK). He has several years of experience in financial inclusion research including digital financial inclusion. His research focuses on the impacts of digital technologies and financial innovations (FinTech) on financial inclusion, welfare, and entrepreneurship in developing countries. His current project focuses on the application of technologies such as Artificial Intelligence to drive efficiency in regulatory compliance. Previously, he worked as a Knowledge Exchange Associate with the Financial Technology (FinTech) Cluster at the University of Strathclyde. He also worked with the Cambridge Centre for Alternative Finance at the University of Cambridge to build the capacity of FinTech entrepreneurs, regulators, and policymakers from across the globe on FinTech and Regulatory Innovation. Godsway has a Ph.D. in Economics from Maastricht University (Netherlands) and has published in reputable journals such as Small Business Economics.
Email: godsway.tetteh@strath.ac.uk
LinkedIn: https://www.linkedin.com/in/godsway-k-tetteh-ph-d-83a82048/
Photo by Tara Winstead: https://www.pexels.com/photo/an-artificial-intelligence-illustration-on-the-wall-8849295/
How transparency, explainability and fairness are being connected under UK and EU approaches to AI regulation
Article written by Kushagra Jain, research associate for the Financial Regulation Innovation Lab and scholar at the Michael Smurfit Graduate Business School, University College Dublin, Dublin, Ireland.
Introduction and global perspective
Rapid and continuing advances in artificial intelligence (AI) have had profound implications. These have and will continue to reshape our world. Regulators have responsibly and proactively responded to these paradigm shifts. They have begun to put in place regimes to govern AI use.
Global collaboration is taking place in developing these frameworks and policies. For instance, an AI Safety Summit was held in the UK in November 2023, with participants from 28 nations representing the EU, US, Asia, Africa, and the Middle East. Its aim was to mitigate "frontier" risks of AI development through internationally coordinated action. At the summit, participants identified the necessity of collaboratively testing next-generation AI models against critical national security, safety and societal concerns. Alongside this, the need for a report to build international consensus on both risks and capabilities was acknowledged. Two further summits are planned in the next 6 and 12 months respectively, and subsequent summits are expected to continue these crucial global dialogues, building on the first summit's key insights and realisations.[1]
The UK's pro-innovation regulation policy paper similarly emphasises continued work with international partners to deliver interoperability. It further hopes to incentivise the responsible design, application, and development of AI. The paper aims for the UK's AI innovation environment to be seen as the most attractive in the world, and to achieve this it seeks to ensure international compatibility between approaches, thereby attracting international investment and encouraging exports (Secretary of State for Science, 2023).[2] Notably, however, different regions have taken distinct approaches to regulation within their jurisdictions.
Distinctions between the EU and UK approaches
Broadly, the draft EU Artificial Intelligence Act seeks to codify a risk-based approach within its legislative framework. The framework categorises unacceptable, high, and low risks which threaten users’ safety, human safety, and fundamental rights. It also institutes a new AI regulator (Yaros et al., 2023, Yaros et al., 2021). In contrast, the UK’s approach generally espouses being iterative, agile and context dependent. It is designed to make responsible innovation easier. Existing regulators are responsible for its implementation. All of this is outlined in their AI Regulatory Policy Paper and AI White Paper (Secretary of State for Science, 2023, Prinsley et al., 2023, Yaros et al., 2022).
Another key distinction demarcates the two. In the UK's case, there is no all-encompassing definition of what "AI" or an "AI system" constitutes; AI is instead framed in terms of autonomy and adaptivity. The objective is to ensure the continued relevance of the proposed framework for new technologies. Legal ambiguity is inherent in such an approach, but individual regulator guidance is expected to resolve it within each regulator's remit (Prinsley et al., 2023, Yaros et al., 2022).
The EU legislation would apply to all AI system providers in the EU. Further, it also applies to users and providers of AI systems, where the system produced output is utilised in the EU. This applicability is regardless of where they are domiciled. It is envisioned as a civil liability regime to redress AI-relevant problems and risks. At the same time, it seeks to do so without unduly constraining or hindering technological development. Maintaining excellence and trust in AI technology at the same time are the dual targets within it (Yaros et al., 2023, Yaros et al., 2021).
Conversely, the UK regulation applies to the whole of the UK, though it is also territorially relevant beyond the UK in terms of enforcement and guidance applicability. Initially, it is on a non-statutory footing: the rationale is that imposing a statutory duty straightaway could create obstacles for innovation and business, and could impede rapid, commensurate regulatory responses. During this transitory period, existing regulators' domain expertise is relied upon for implementation. The eventual intention is to assess whether a statutory duty needs to be imposed, to further strengthen regulator mandates for implementation, and to allow regulators flexibility to exercise judgment in applying the principles. Over and above these, coordination through central support functions for regulators is envisaged, with innovation-friendly yet effective and proportionate risk responses as the desired outcome. These functions would sit within government but would leverage expertise and activities more broadly across the economy. They would be complemented and aligned through voluntary guidance and technical standards, and assurance techniques would similarly be deployed alongside trustworthy AI tools, whose use would be encouraged (Secretary of State for Science, 2023, Prinsley et al., 2023).
Shared focus on fairness, transparency and explainability
In spite of varied approaches, both the EU and UK share an emphasis on aspects such as fairness, transparency, and explainability. These in particular are of interest owing to their human, consumer, and fundamental rights implications. For the UK, this emphasis is apparent from two of their white paper’s five broad cross-sectoral principles (Secretary of State for Science, 2023, Prinsley et al., 2023, Yaros et al., 2022):
- Appropriate transparency and explainability: traits that AI systems should exhibit, with decision-making processes accessible to relevant parties in order to build the public trust that non-trivially drives AI adoption. As the white paper itself acknowledges, how relevant parties may be encouraged to implement appropriate transparency measures remains to be seen.
- Fairness: broadly involves AI systems avoiding unfair discrimination, unfair outcomes, and the undermining of individual and organisational rights. It is understood that developing and publishing appropriate fairness definitions and illustrations for AI systems may become a necessity for regulators within their domains.
This was also encapsulated in the UK’s earlier AI Regulation Policy Paper as follows (Yaros et al., 2022):
- Appropriately transparent and explainable AI. AI systems may not always be meaningfully explainable. While this is largely unlikely to pose substantial risk, in specific high-risk cases such unexplainable decisions may be prohibited by relevant regulators (e.g., where a lack of explainability would deprive an individual of the right to challenge a tribunal's decision).
- Fairness considerations embedded into AI. Regulators should define “fairness” in their domain/sector. Further, they ought to outline the relevance of fairness considerations (e.g., for job applications).
In contrast, for the EU, this takes the following shape as encoded in the legislation (Yaros et al., 2023, Yaros et al., 2021):
- Direct human interface systems (such as chatbots) are of limited risk and acceptable if in compliance with certain transparency obligations; put differently, end-users must be aware that they are interacting with a machine. For foundation models[3], the preparation of intelligible instructions and extensive technical documentation may fall into the explainability and transparency bucket, enabling downstream providers to comply with their respective obligations.
- Prohibition of practices such as social scoring or systems exploiting vulnerabilities of specific groups of persons. This is termed an unacceptable risk and can be considered linked to fairness. For foundation models, this may be framed as incorporating only datasets subject to appropriate data governance measures, such as assessing data suitability and potential biases. Fairness may also take the form of context-specific fundamental rights impact assessments, which would bear in mind the context of use before deploying high-risk AI systems. More dystopian possibilities exist that may irreparably harm fairness; such scenarios are avoided through outright bans on certain systems, including those involving indiscriminate scraping of databases, biometric categorisation based on sensitive characteristics, real-time biometric identification, emotion recognition, face recognition, and predictive policing.
Conclusions and future topics
In conclusion, merits and demerits come to mind when considering both the EU's and the UK's paths to regulating AI innovation. The EU's approach may be perceived as more bureaucratic: owing to its stricter compliance requirements, anyone to whom it applies must expend significantly more time, cost, and effort to ensure they do not fall foul of regulatory guidelines.
That being said, its stronger ethical grounding ensures the best interests of relevant stakeholders. In a similar vein to GDPR, it may serve as a blueprint for future AI regulations adopted by other countries around the world. Coupled with the EU’s new rules on machinery products ensuring new machinery generations guarantee user and consumer safety, it is a very comprehensive legal framework (Yaros et al., 2023, Yaros et al., 2021).
On the other hand, the UK’s approach has received acclaim from industry for its pragmatism and measured approach. The UK Science and Technology Framework singles out AI as one of 5 critical technologies as part of the government’s strategic vision. The need to establish such regulation was highlighted by Sir Patrick Vallance in his Regulation for Innovation review. In response to these factors, the AI Regulation Policy and White Papers were penned. The regulation’s ability to learn from experience while flexibly and continuously adopting best practices will catalyse industry innovation (Secretary of State for Science, 2023, Intellectual Property Office, 2023).
Nonetheless, a dark side of innovation may also manifest as a consequence. If not handled rigorously, bad actors proliferating and exploiting the lack of statutory regulatory oversight may cause reputational damage to the UK in so far as AI is concerned. This is especially pertinent in insidious cases, such as those illustrated earlier by the AI systems banned under EU law.
Despite significant differences between the EU's and UK's approaches, commonalities exist in pivotal regulatory priorities such as transparency, explainability and fairness. Blended pro-innovation and risk-based regulatory approaches might achieve the best results for these priorities. Such a blend can be ascertained based on how efficacious each approach proves in achieving its goals over time and in the context of its application.
Given the systemic importance of the US in shaping the global economic landscape, it may be interesting to explore its approach to AI regulation in a future blog. In particular, investigating how transparency, explainability and fairness are dealt with in contrast with the EU, and juxtaposed against the UK, might shed new light on how AI regulation should evolve (Prinsley et al., 2023, Yaros et al., 2022, Yaros et al., 2021), with the dawn of what may one day be called the AI age in human history.
References
Intellectual Property Office (2023, 06 29). Guidance: The government’s code of practice on copyright and AI. Retrieved from: https://www.gov.uk/guidance/the-governments-code-of-practice-on-copyright-and-ai
Prinsley, Mark A. and Yaros, Oliver and Randall, Reece and Hadja, Ondrej and Hepworth, Ellen (2023, 07 07). Mayer Brown: UK’s Approach to Regulating the Use of Artificial Intelligence. Retrieved from: https://www.mayerbrown.com/en/perspectives-events/publications/2023/07/uks-approach-to-regulating-the-use-of-artificial-intelligence
Secretary of State for Science, Innovation & Technology (2023, 08 03). Policy paper: A pro-innovation approach to AI regulation. Retrieved from: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper
Yaros, Oliver and Bruder, Ana Hadnes and Leipzig, Dominique Shelton and Wolf, Livia Crepaldi and Hadja, Ondrej and Peters Salome (2023, 06 16). Mayer Brown: European Parliament Reaches Agreement on its Version of the Proposed EU Artificial Intelligence Act. Retrieved from Mayer Brown: https://www.mayerbrown.com/en/perspectives-events/publications/2023/06/european-parliament-reaches-agreement-on-its-version-of-the-proposed–eu-artificial-intelligence-act
Yaros, Oliver and Bruder, Ana Hadnes and Hadja, Ondrej (2021, 05 05). Mayer Brown: The European Union Proposes New Legal Framework for Artificial Intelligence. Retrieved from Mayer Brown: https://www.mayerbrown.com/en/perspectives-events/publications/2021/05/the-european-union-proposes-new-legal-framework-for-artificial-intelligence
Yaros, Oliver and Hadja, Ondrej and Prinsley, Mark A. and Randall, Reece and Hepworth, Ellen (2022, 08 17). Mayer Brown: UK Government proposes a new approach to regulating artificial intelligence (AI). Retrieved from Mayer Brown: https://www.mayerbrown.com/en/perspectives-events/publications/2022/08/uk-government-proposes-a-new-approach-to-regulating-artificial-intelligence-ai
About the author
Kushagra Jain is a Research Associate at the Financial Regulation Innovation Lab (FRIL), University of Strathclyde. His research interests include artificial intelligence, machine learning, financial/regulatory technology, textual analysis, international finance, and risk management, among others. He was awarded doctoral scholarships from the Financial Mathematics and Computation Cluster (FMCC), Science Foundation Ireland (SFI), Higher Education Authority (HEA) and Michael Smurfit Graduate Business School, University College Dublin (UCD). Previously, he worked within wealth management and as a statutory auditor. He completed his doctoral studies in Finance from UCD in 2023, and obtained his MSc in Finance from UCD, his Accounting Technician accreditation from the Institute of Chartered Accountants of India and his undergraduate degree from Bangalore University. He was formerly FMCC Database Management Group Data Manager, Research Assistant, PhD Representative and Teaching Assistant for undergraduate, graduate and MBA programmes.
[1] These details, and further information can be found here, here, and here.
[2] This information and further context can be found here.
[3] AI systems adaptable to a wide range of distinctive tasks, designed for output generality, and trained on broad data at scale.
Photo by Tara Winstead: https://www.pexels.com/photo/robot-pointing-on-a-wall-8386440/
17 fintechs selected for the Financial Regulation Innovation Lab’s first innovation call
The Financial Regulation Innovation Lab, in collaboration with the University of Strathclyde and the University of Glasgow, has selected the firms that will take part in its first innovation call, centred around "Simplifying Compliance through AI and Emerging Technologies". This call aims to showcase how technology can meet UK and global regulatory requirements, potentially setting a new benchmark for future advancements in the industry.
The mission
This initiative will not only look at advancing innovation but also at highlighting the significant role of artificial intelligence and emerging technologies in simplifying and enhancing compliance processes within the financial sector. By bringing together academia and some of the UK’s leading financial institutions, such as Morgan Stanley, Tesco Bank, Virgin Money, abrdn, and Deloitte, the programme will offer unparalleled mentorship, insights, and real-world case studies to the participants.
The selected fintechs
17 companies have been meticulously selected to participate in this programme. These companies, established in the UK, Canada, and Singapore, represent the cutting edge of fintech innovation:
Aifluent
Amiqus
Argus Pro LLP
AsiaVerify
Auquan
Change Gap Ltd
Datawhisper
DX Compliance
Fairly AI
Financial Crime Intelligence
First Derivative
HAELO
International Data Flows
Legal-Pythia
Level E Research
Talan UK
Pytilia
Next steps
These selected fintech companies will gather in Glasgow on the 12th of March, where they will delve into use cases presented by the supporting financial institutions. This gathering will provide them with a unique opportunity not only to learn directly from leading experts in the field but also to hear about the latest developments in AI from university scholars. The discussion will span best practices for maximising the benefits of innovation calls and strategies for scaling businesses for success.
The collaboration between these fintech innovators, academia, and some of the largest financial institutions in the UK will not only demonstrate the potential of AI and emerging technologies to revolutionise regulatory compliance but also help inform the future of financial innovation.
The FRIL project is funded by the Glasgow City Region Innovation Accelerator programme. Led by Innovate UK on behalf of UK Research and Innovation, the pilot Innovation Accelerator programme is investing £100m in 26 transformative R&D projects to accelerate the growth of three high-potential innovation clusters – Glasgow City Region, Greater Manchester and West Midlands. Supporting the UK Government’s levelling-up agenda, this is a new model of R&D decision making that empowers local leaders to harness innovation in support of regional economic growth and help attract private R&D investment and develop future technologies.
The Art of Problem-Solving: Insights from a New Type of Innovation Call
“Fall in love with the problem you’re trying to solve”: that’s Kent Mackenzie’s mantra when it comes to innovation challenges. As leader of Deloitte’s Digital Compliance business, Kent has spent the past 12 years observing innovation calls of all types, whether pointed directly at a particular financial services organisation, a regtech provider or academia. While many of these have produced some good solutions, Kent observes that they can lean quite heavily towards a single perspective.
What intrigues Kent at the moment, however, is an exciting new type of innovation call that is being issued from FinTech Scotland and leading financial services firms, in conjunction with Deloitte. Spearheaded by the newly established Financial Regulation Innovation Lab, the first in a series of industry-led innovation calls will focus on ‘Simplifying Compliance through the Application of AI and Emerging Technologies’.
With a passion for FinTech, data and advanced analytics, Kent has worked with local, national and international clients to develop tech and data solutions to manage financial crime, regulatory compliance, credit risk, and collections & recoveries. We thought he would be the ideal person to answer our questions and provide us with more detail about the inaugural innovation call…
Q: How can industry-wide innovation calls in general, such as the one led by the Financial Regulation Innovation Lab, contribute to accelerating positive change in financial services?
A: I think for me, the breadth and ambition of this series of innovation calls is what will stand out quite markedly. The wonder of these innovation challenges is the involvement of a very broad community of participants, from large financial institutions and fintech providers, to academia and regulators. What I’m looking forward to the most is having the richness that comes through from that breadth of community, because not only will we be able to understand the problem through a number of different lenses, but we can then go on to solve that problem with a number of different answers and potential solutions.
Q: How does the Lab’s unique environment for collaboration support financial services firms in innovating to meet their regulatory obligations? And what makes this initiative groundbreaking in that context?
A: I think there are a number of different components. Firstly, the Lab will provide us with the opportunity to really focus on a particular use case. There have been innovation calls in the past in which fintechs and regtechs missed the point of the question and ended up with solutions that didn’t solve the original problem. What is innovative about the Lab is that it allows us to bring in a range of perspectives from the people who are feeling the impact of this particular problem.
Secondly, this breadth of participants will be able to forensically examine every single element of a potential solution: from the established, to the groundbreaking, to the “way-out-there” considerations – these are all valid when you come to solve a complicated and gnarly problem like this. Lastly, once the Lab has gained an intimate understanding of how it is going to solve the problem, we can focus on the perspective from regulators, academics and end-users who will all try to ensure the solution will actually work in the context of the real world. I truly expect the Lab to be a real petri dish of experience, in the way that it will forensically deconstruct a problem, build up a solution, and then challenge from a number of different angles to ensure it is market-ready.
Q: The Lab’s inaugural innovation call is focusing on ‘Simplifying Compliance through the Application of AI and Emerging Technologies’. How will simplifying compliance processes accelerate positive change, and why is this particularly important now?
A: Ultimately, financial services companies are trying to do a number of things: to achieve better outcomes for consumers; to reach people with products and services who might previously have been financially excluded; and to provide the best rates and offerings to clients. But in order to achieve all these things, a deep understanding of the regulations underpinning these offerings is essential.
The current challenge for organisations is the time it takes between understanding the regulation and bringing products and services to market. The use of technology, however, can rapidly accelerate this understanding so that new products and services can be created much more quickly. In addition, a beneficial byproduct of using technology is that it promotes transparency about how a product has been shaped around a particular regulatory decision, the communication of which is crucial to consumers, compliance departments and regulators.
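To make the kind of acceleration Kent describes concrete, below is a minimal, hypothetical sketch of rule-based obligation extraction from regulatory text. The modal-verb patterns and the sample text are illustrative assumptions, not any firm’s or vendor’s actual method; production regtech systems rely on far richer NLP pipelines and curated obligation taxonomies.

```python
import re

# Modal phrases that typically signal a binding obligation in
# regulatory text. Purely illustrative; real systems go much further.
OBLIGATION_PATTERN = re.compile(
    r"\b(shall|must|is required to|are required to)\b", re.IGNORECASE
)

def extract_obligations(regulation_text: str) -> list[str]:
    """Return the sentences that look like binding obligations."""
    sentences = re.split(r"(?<=[.;])\s+", regulation_text)
    return [s.strip() for s in sentences if OBLIGATION_PATTERN.search(s)]

# Made-up regulatory text for demonstration only.
sample = (
    "A firm must assess the creditworthiness of the customer. "
    "Guidance on assessment methods is available on request. "
    "Records shall be retained for five years."
)
for obligation in extract_obligations(sample):
    print("-", obligation)
```

Even a crude filter like this hints at how technology can shorten the path from regulation to product, and, because every extracted obligation traces back to a source sentence, it also supports the transparency Kent mentions.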
Q: From Deloitte’s perspective, how does the innovation challenge contribute to building confidence in the adoption of AI and other technologies within the financial services sector, particularly in meeting global regulatory requirements?
A: When adopting AI, particularly in meeting global regulatory requirements, there can at times be a “black box” type feeling. Typically, the extraction and translation of regulatory obligations is a highly nuanced affair, owing to organisations’ different risk appetites and the final definition of what constitutes an obligation. Because of this nuance, I don’t expect AI to solve the entire problem of extracting the appropriate obligations. But I do expect us to solve a lot of the problem in the right way, and in doing so build confidence around the role AI can play in this space. As we explore all the angles, facets, challenges and concerns, by unpacking and then restacking them, we will gain confidence about whether AI is capable of doing all of this job, some of this job, or parts of this job.
Q: How do you see these Innovation Calls contributing to maintaining and growing Scotland and the UK’s position as a global leader in financial services regulatory innovation?
A: The net effect of these innovation calls across all of these groups is positive. For Deloitte itself, it will accelerate and advance our understanding of how our clients are thinking about this. For fintechs, it will also advance the intimate understanding and knowledge of the problem that their tech is trying to solve so they can better shape their offerings and propositions. Both large financial services organisations and consumers alike will have their eyes opened to the art of the possible, either in today’s world or the future. And for academia, it will ping their synapses on every level to further examine possible points of contention or unresolved issues. So for each of the stakeholders involved in these calls, it’s going to give them oxygen for their own individual pursuit.
I am also really intrigued about the role of the regulator during these innovation calls. I am quite looking forward to having the contribution from the regulator – in the room with a ringside seat – on where they stand on quite how far technology could and should take us.
Essentially, I think this new type of innovation call stands to help each of the stakeholders fall in love with the problem, construct solutions, and examine how they can be applied as live use cases.
Fintechs and other teams of innovators are invited to join the Financial Regulation Innovation Lab’s innovation call challenge, ‘Simplifying Compliance through the Application of AI and Emerging Technologies’. Applications close on 3rd March. To find out more click here.
Regulating the Future: Building Trust and Managing Risks in AI for FinTech
Written by Dharini Mohan, MSc Financial Technology (FinTech) student at UWE Bristol. She is also a part-time Service Associate at Hargreaves Lansdown.
Artificial Intelligence (AI) has emerged as a transformative force in the FinTech sector, promising to revolutionise processes, enhance customer experiences, and drive innovation. However, as AI adoption accelerates, concerns surrounding regulation, trust, and risk management have become increasingly prominent.
Following the Rise & Shine event organised by Fintech Fringe and sponsored by Rise (created by Barclays) earlier this month in London, the panellists discussed in depth the critical importance of regulating AI in FinTech, building trust among stakeholders, and effectively managing risks, all with a view to ensuring sustainable growth and innovation in AI for FinTech. Here are some noteworthy insights and strategies that were shared.
Know Your AI (KYAI), Know Your Risk
While Know Your Customer (KYC) practices have long been a cornerstone of risk management for financial institutions, the emergence of AI introduces a new dimension to this imperative. Understanding the nuances of customer profiles is crucial for accurate risk assessment, but it is equally essential to grasp the capabilities and limitations of AI systems in order to manage the associated risks effectively. The challenge lies in the inherent complexity and unpredictability of AI algorithms, which can introduce unforeseen risks into operations across different sectors, financial or otherwise. Without a comprehensive understanding of AI technologies and their potential implications, organisations risk being blindsided by vulnerabilities and shortcomings in their AI systems. Embracing the concept of KYAI is therefore essential for navigating the complexities of AI-driven services and mitigating the associated risks.
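One hypothetical way to make KYAI operational is an internal inventory recording what each AI system does, its known limitations and its owner. A minimal sketch follows, assuming illustrative fields and a made-up example system; it is not a prescribed standard.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One 'Know Your AI' inventory entry (fields are illustrative)."""
    name: str
    purpose: str
    model_type: str
    known_limitations: list[str] = field(default_factory=list)
    owner: str = "unassigned"

# Hypothetical entry: the system name and limitations are invented
# purely for illustration.
register = [
    AISystemRecord(
        name="transaction-screening-v2",
        purpose="Flag potentially fraudulent payments",
        model_type="gradient-boosted trees",
        known_limitations=["degrades on unseen merchant categories"],
        owner="financial-crime-team",
    ),
]

for record in register:
    print(f"{record.name}: owned by {record.owner}, "
          f"limitations: {record.known_limitations}")
```

An inventory like this gives risk teams a single place to ask the KYAI questions: what the system can do, where it fails, and who is accountable.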
Never Leave Customer-Facing Operations Entirely to AI
Customers often seek personalised, empathetic interactions when raising queries or concerns – qualities that are inherently human and difficult for AI systems to replicate authentically. The recent case involving Air Canada illustrates the potential repercussions of relying on AI for customer-facing operations. In this instance, Air Canada’s chatbot provided incorrect information to a traveller, leading to a dispute over liability for the misinformation. The airline argued that its chatbot was a “separate legal entity” responsible for its own actions, but the tribunal ruled in favour of the passenger, emphasising that Air Canada ultimately bears responsibility for the accuracy of information provided through its channels, whether human or AI-driven. This scenario demonstrates the importance of maintaining human oversight and accountability in AI-driven customer interactions.
Plot Twist: Humans Can Make AI Better
It’s about finding the right balance – a little bit of this, a little bit of that. Humans are the ones who input the data, so any decision an AI system produces will reflect the data it possesses and the purpose that data serves. Human-based controls are crucial, and it is up to each organisation to determine how it establishes its own rules and understands its responsibilities based on its clients’ needs. The integration of Human-in-the-Loop (HITL) is powerful because it involves humans in both the training and testing stages of building an algorithm, enabling real-time data control and contributing to a dynamic risk profile. Stronger controls over how the model handles data inputs, where the data is sourced, and how it is divided for training and testing are essential for measuring deviations effectively.
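A minimal sketch of the HITL routing described above: model outputs below a confidence threshold are queued for human review instead of being actioned automatically. The threshold value and the sample outputs are illustrative assumptions; in practice the cut-off would be calibrated to the organisation’s risk appetite.

```python
def route_decision(prediction: str, confidence: float,
                   threshold: float = 0.85) -> str:
    """Send low-confidence model outputs to a human reviewer.

    The 0.85 threshold is an illustrative assumption, to be
    calibrated against the organisation's risk appetite and revisited
    as the model's risk profile evolves.
    """
    if confidence >= threshold:
        return f"auto-accept: {prediction}"
    return f"human-review: {prediction} (confidence {confidence:.2f})"

# Hypothetical model outputs paired with confidence scores.
decisions = [("approve", 0.97), ("decline", 0.62), ("approve", 0.88)]
for prediction, confidence in decisions:
    print(route_decision(prediction, confidence))
```

The human decisions collected from the review queue can then feed back into retraining, which is precisely what puts the loop in Human-in-the-Loop.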
It is (Mathematically) Impossible to Eliminate All Discrimination and Bias
Given the impossibility of eliminating all discrimination and bias, organisations must make deliberate choices about the biases inherent in their AI systems. Questions arise regarding the origins of generative AI, particularly ChatGPT by OpenAI, with concerns raised over its development in a research lab in San Francisco. Training data sourced from non-diversified datasets presents a significant challenge, reflecting a limited cultural context and underlining the need for challenger models to address these gaps. For instance, a model may lack sufficient information about certain countries; the resulting outputs can look discriminatory, yet they simply reflect the data the model was trained on. Despite rigorous training, AI is not infallible and remains prone to errors, which highlights the importance of continuous refinement and validation. The need for human oversight also persists, as diversity never takes care of itself within AI systems. Synthetic datasets, built to complement real-world data, offer one way to address shortfalls in training data, broaden coverage and mitigate biases.
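Bias cannot be eliminated, but it can be measured. The sketch below computes the demographic parity difference, one common fairness metric, over made-up approval counts; the groups, numbers and any tolerance are illustrative assumptions, not real results.

```python
# Approval outcomes per group. The counts are fabricated purely to
# illustrate the metric; they are not real data.
outcomes = {
    "group_a": {"approved": 80, "total": 100},
    "group_b": {"approved": 62, "total": 100},
}

# Demographic parity difference: the gap between the highest and
# lowest group-level approval rates.
rates = {group: v["approved"] / v["total"] for group, v in outcomes.items()}
parity_gap = max(rates.values()) - min(rates.values())

print(f"approval rates: {rates}")
print(f"demographic parity difference: {parity_gap:.2f}")
# How large a gap is tolerable is a policy choice, not a mathematical
# one, which is exactly why human oversight persists.
```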
Key Strategies to Mitigate Risks: 1) Identifying 2) Classifying 3) Mapping Out
In navigating AI risks effectively, clearly articulating an organisation’s specific risks is the foundational step in risk management, complemented by due diligence and a comprehensive understanding of how AI will be deployed. It is essential to consider whether to develop an in-house AI stack or outsource it, and to implement post-deployment controls for the ongoing training and maintenance of AI systems. Risk classification is key, alongside crucial actions such as fortifying cybersecurity measures, protecting data privacy and monitoring third-party involvement, all while addressing opacity risks and setting risk-based priorities. FinTech ventures must weigh their product design alongside regulatory compliance. With the EU AI Act mandating a risk management system for high-risk AI systems, organisations now have a clear benchmark for staying up to date and compliant.
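The identify–classify–map sequence can be sketched as a simple lookup from use cases to risk tiers loosely modelled on the EU AI Act’s categories. The tier assignments and controls below are illustrative assumptions, not legal determinations.

```python
# Illustrative mapping from fintech AI use cases to risk tiers loosely
# modelled on the EU AI Act's categories. Real classification requires
# legal analysis, not a lookup table.
RISK_TIERS = {
    "credit-scoring": "high",
    "fraud-detection": "high",
    "chatbot-support": "limited",
    "internal-document-search": "minimal",
}

CONTROLS_BY_TIER = {
    "high": ["risk management system", "human oversight", "audit logging"],
    "limited": ["transparency notice"],
    "minimal": [],
}

def map_controls(use_case: str) -> list[str]:
    """Identify the use case, classify its tier, map the controls."""
    tier = RISK_TIERS.get(use_case, "unclassified")
    return CONTROLS_BY_TIER.get(tier, ["manual review required"])

print(map_controls("credit-scoring"))
# ['risk management system', 'human oversight', 'audit logging']
```

Unrecognised use cases deliberately fall through to manual review, reflecting the risk-based prioritisation the paragraph above describes.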
Conclusion
As AI continues to reshape the FinTech landscape, the importance of regulating AI, fostering trust, and managing risks cannot be emphasised enough. Regulatory frameworks must evolve to keep pace with technological advancements, ensuring responsible and ethical deployment of AI. It’s essential to acknowledge that AI literacy is just as vital as financial literacy, enabling the FinTech industry to fully leverage AI’s potential while navigating its inherent complexities and uncertainties.
Risk management is not merely a matter of ticking boxes; it requires continuous vigilance and adaptability.
FinTech Scotland Deepens Collaboration with Leading Global Financial Firms for Inaugural Innovation Challenge
Applications Open for AI Compliance Innovation to Discover Fintech Partners
Today, FinTech Scotland, working with professional services supporter Deloitte and with Tesco Bank, Morgan Stanley and abrdn, launched a first-of-its-kind innovation challenge. The industry-led call to action will encourage financial institutions and innovators from across the fintech community and beyond to learn collaboratively, take part in industry forums that share best practice, and develop new solutions to key financial regulatory challenges.
The inaugural innovation challenge, spearheaded by the newly established Financial Regulation Innovation Lab (FRIL), and funded through the UK Government’s Innovation Accelerator programme, delivered by Innovate UK, focuses on ‘Simplifying Compliance through the Application of AI and Emerging Technologies’.
The first in a series of industry-led innovation calls, the initiative is dedicated to fostering confidence in the adoption of emerging technologies in financial services. Notably, this call aims to demonstrate the role technology could play in meeting global regulatory requirements, setting a new standard for future advancements in the industry.
FinTech Scotland and the financial services firms, in conjunction with professional services leader Deloitte, are inviting entrepreneurs and innovators to identify and apply technologies that address industry compliance challenges. Launched under the principle of responsible innovation, these calls set the stage for the exploration and development of effective solutions that will yield positive outcomes for the pressing needs of consumers and businesses alike, and contribute to the wider economy.
Nicola Anderson, CEO of FinTech Scotland, said:
“We are extremely excited to kick off this inaugural industry innovation challenge. Demand-led innovation calls are an important part of the toolkit that the Financial Regulation Innovation Lab will employ to drive positive outcomes. It is also an opportunity to bring together financial institutions and innovators, enabling financial institutions to learn collaboratively about ways to improve compliance processes to drive efficiency for the sector and, ultimately, increase consumer protection.”
In partnership with the University of Strathclyde and the University of Glasgow, the Lab aims to leverage expertise in financial services risk and compliance and combine this with emerging technologies to build capabilities that maintain and grow both Scotland and the UK’s position as a global leader in financial services regulatory innovation.
Kent Mackenzie, Fintech Lead for Scotland at Deloitte, said:
“Deloitte is excited to be one of the challenge supporters of the Financial Regulation Innovation Lab’s first innovation call. Simplifying compliance is critical to delivering change in financial services, and industry-wide solutions can help enable us to accelerate this positive change. The Lab provides a unique environment to support collaboration, and this groundbreaking initiative will further support how financial services firms are innovating to meet their regulatory obligations.”
Joanne Seagrave, Head of Regulatory Affairs at Tesco Bank, said:
“At Tesco Bank we embrace the opportunity that the Financial Regulation Innovation Lab’s innovation call series offers to collaborate with innovators. This will allow us to gain further insight on how utilising AI and emerging technologies could help support us in managing the evolving regulatory change landscape. It also presents a significant opportunity to advance industry understanding.”
Angela Benson, Head of Glasgow Finance at Morgan Stanley, said:
“Morgan Stanley recognises the opportunity in employing AI and emerging technologies to address the industry’s global regulatory obligations. We are delighted to partner with FinTech Scotland on this innovation challenge to foster ideation and support the next generation of innovators in this space. Having opened our office in Glasgow over 20 years ago, we have seen first-hand the depth of talent Scotland has to offer.”
Gareth Murphy, Chief Risk Officer at abrdn, said:
“At abrdn, we’re delighted to join the Financial Regulation Innovation Lab’s inaugural innovation call to action. It is essential that we continue to evolve the mix of people, process and technology in all of our activities. We draw on extensive experience in financial services, in Scotland and globally. This collaboration is a testament to our commitment to seizing the ongoing opportunities that financial services and innovation present.”
The programme includes three phases: challenge definition, solution design and testing, and final demonstrations. Applicants will receive invaluable insights into financial firms through close collaboration, a support network, academic expertise and service design support. Successful companies will receive grant awards of up to £50,000 for further development and implementation.
Fintechs and other teams of innovators are invited to join the challenge. The application window is open from 1st February. To find out more click here.
The Innovation Challenge Call finale event will take place in April 2024.
The FRIL project is funded by the Glasgow City Region Innovation Accelerator programme.
Led by Innovate UK on behalf of UK Research and Innovation, the pilot Innovation Accelerator programme is investing £100m in 26 transformative R&D projects to accelerate the growth of three high-potential innovation clusters – Glasgow City Region, Greater Manchester and West Midlands. Supporting the UK Government’s levelling-up agenda, this is a new model of R&D decision making that empowers local leaders to harness innovation in support of regional economic growth and help attract private R&D investment and develop future technologies.
Glasgow has a remarkable history rooted in industry and innovation and is home to world-leading science and technology expertise. The Innovation Accelerator programme will support the Region’s key economic aims of increasing productivity, delivering inclusive growth and achieving net zero.