Regulatory Risk Trends in June 2024: A Comprehensive Overview
As we move through 2024, the landscape of regulatory risk continues to evolve, presenting both challenges and opportunities for businesses worldwide. The latest report from Pinsent Masons, “Regulatory Risk Trends – June 2024,” provides an in-depth analysis of current and emerging risks. This blog post summarises key insights from the report, highlighting the major trends and their implications for businesses.
Key Regulatory Risk Trends
Operational Resilience
The Bank of England’s focus on operational resilience remains a cornerstone of regulatory scrutiny. Firms are required to demonstrate their ability to withstand and recover from significant operational disruptions. The Financial Policy Committee’s macroprudential approach underscores the need for robust operational risk management frameworks.
Consumer Duty
The Financial Conduct Authority (FCA) has intensified its efforts to enforce the Consumer Duty, which mandates that firms must act to deliver good outcomes for retail customers. This involves ensuring fair treatment of customers, providing clear and transparent information, and fostering an environment where customers can pursue their financial objectives effectively.
Financial Promotions and Influencers
The FCA has been particularly vigilant regarding financial promotions, with a crackdown on misleading advertisements and unauthorised financial advice from social media influencers. Recent enforcement actions highlight the need for firms to ensure their promotional materials comply with regulatory standards and do not mislead consumers.
Money Laundering Regulations
HM Treasury’s consultation on improving the effectiveness of money laundering regulations signals ongoing governmental focus on combating financial crime. The consultation aims to enhance regulatory frameworks to prevent money laundering and terrorist financing, ensuring that the UK’s financial system remains robust and secure.
Vulnerable Customers
The FCA has issued finalised guidance on the fair treatment of vulnerable customers, emphasising the need for firms to take into account the diverse needs of their customer base. This guidance outlines practical steps for firms to ensure that vulnerable customers are not disadvantaged and can access the financial services they need.
Politically Exposed Persons (PEPs)
The FCA’s review of the treatment of PEPs aims to strike a balance between preventing financial crime and ensuring that PEPs are not unfairly discriminated against. This ongoing review seeks to refine the regulatory approach to PEPs, ensuring compliance while mitigating undue burdens on these individuals.
Cybersecurity and Data Protection
With the increasing reliance on digital technologies, cybersecurity and data protection have become paramount. Regulatory bodies are pushing for enhanced measures to protect sensitive data and prevent cyberattacks, requiring firms to implement rigorous cybersecurity protocols and regular assessments.
Implications for Businesses
Businesses must stay ahead of these regulatory changes to mitigate risks and ensure compliance. Here are some practical steps firms can take:
- Enhance Operational Resilience: Develop and regularly test robust business continuity plans to handle potential disruptions.
- Prioritise Consumer Duty: Foster a customer-centric culture and ensure that all customer interactions are fair, transparent, and beneficial.
- Monitor Financial Promotions: Implement stringent compliance checks for all promotional materials and be cautious when using social media influencers.
- Strengthen Anti-Money Laundering Measures: Stay updated on regulatory changes and enhance internal controls to prevent financial crimes.
- Support Vulnerable Customers: Train staff to identify and support vulnerable customers, ensuring they receive appropriate services and advice.
- Review PEP Policies: Balance compliance requirements with fair treatment of PEPs, avoiding unnecessary restrictions while maintaining security.
- Invest in Cybersecurity: Regularly update cybersecurity measures and conduct vulnerability assessments to protect against data breaches.
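As a purely illustrative sketch of what a first-pass automated promotions check might look like, the snippet below flags promotional copy that lacks a risk disclosure or contains an absolute claim. The phrase lists are invented for this example and are not drawn from FCA rules; any real screening would need to be built against actual regulatory requirements and reviewed by compliance professionals.

```python
# Hypothetical first-pass screen for promotional copy.
# The phrase lists below are invented for illustration and are NOT
# a substitute for legal/compliance review against actual FCA rules.

REQUIRED_PHRASES = ["capital at risk"]                      # assumed mandatory disclosure
PROHIBITED_PHRASES = ["guaranteed returns", "risk-free"]    # assumed red-flag claims

def screen_promotion(text):
    """Return a list of issues found in a piece of promotional copy."""
    lowered = text.lower()
    issues = []
    for phrase in REQUIRED_PHRASES:
        if phrase not in lowered:
            issues.append(f"missing required disclosure: '{phrase}'")
    for phrase in PROHIBITED_PHRASES:
        if phrase in lowered:
            issues.append(f"prohibited claim found: '{phrase}'")
    return issues

print(screen_promotion("Invest today for guaranteed returns!"))
print(screen_promotion("Investments can fall as well as rise. Capital at risk."))
```

A screen like this could only ever be a triage step feeding a human review queue, since regulatory judgements about whether a promotion is "clear, fair and not misleading" depend on context no keyword list can capture.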
The regulatory landscape is becoming increasingly complex, and businesses must remain vigilant to navigate these changes successfully. By understanding and addressing these regulatory risk trends, firms can build trust and resilience in their operations. The insights from Pinsent Masons’ June 2024 report provide a valuable roadmap for navigating this dynamic environment.
For more detailed information, you can access the full report here.
The Financial Regulation Innovation Lab: Lessons and advice from the first Innovation Call
Season 4, episode 5
Listen to the full episode here.
In this podcast, our partners at Label Sessions interviewed Antony Brookes and Ruairidh Patfield from abrdn to hear about their experience of getting involved in the Financial Regulation Innovation Lab’s first innovation call.
Alongside Tesco Bank, Virgin Money, Morgan Stanley and Deloitte they worked with the University of Glasgow and the University of Strathclyde to reshape financial compliance through AI and emerging technologies.
After calling on fintechs from around the world to get involved, they selected five to partner with. In this podcast we also hear from those five organisations:
Callum Murray (Amiqus)
Mick O’Connor (Haelo)
Daniel Munro (Level-E)
Neil Sinclair (Pytilia)
Simon Dix (DX Compliance)
To apply for our new Innovation Challenge on reshaping ESG in Financial Services visit https://www.fintechscotland.com/what-we-do/financial-regulation-innovation-lab/shaping-the-future-of-esg-in-financial-services/
New groundbreaking innovation challenge deepens collaboration with global financial firms to deliver positive environmental impact
FinTech Scotland, working with ten industry partners, announces a new innovation challenge, focused on delivering positive environmental and societal outcomes.
Working in collaboration with EY, Morgan Stanley, Lloyds Banking Group, HSBC, Barclays, Phoenix Group, Sopra Steria, Equifax, Virgin Money and abrdn, this innovation challenge invites innovative companies from across the world to apply, with successful firms potentially eligible for funding of up to £50,000.
The challenge focusses on the best use of data and on identifying new data sources that can help address critical Environmental, Social, and Governance (ESG) questions. It invites innovative enterprises to develop data-led solutions and technology-enabled approaches to new ESG regulatory requirements, helping drive responsible outcomes for people and the environment.
The challenge will run for three months, and successful applicants will work alongside some of the leading global financial services firms, learning about their challenges, their ways of working and how best to integrate solutions within their businesses. Successful applicants will also be able to access support and input from industry partners to help develop their solutions further.
The programme is enabled by FinTech Scotland’s Financial Regulation Innovation Lab, which works to support innovation and ground-breaking solutions to the increasing demands of new financial regulation, using a collaborative approach that works across industry, academia, regulators, experts and innovators.
The Financial Regulation Innovation Lab will utilise the expertise from leading academic experts in climate, data and technology from across the University of Strathclyde and the University of Glasgow to support the development of this programme.
Companies interested in applying can do so here until the 7th of July at midnight.
Nicola Anderson, CEO at FinTech Scotland said:
“I’m excited to see this work develop to drive innovation on this important agenda. This programme highlights two key attributes that when combined can accelerate responsible innovation. Using collaborative action that is focused on priority industry needs will accelerate positive innovation. I’m looking forward to seeing the progress and outcomes from this work have a positive impact for the environment and for society”.
Tom McFarlane, Partner at EY said:
“Embedding environmental, social, and governance (ESG) criteria across the financial sector is not just a regulatory requirement, but a fundamental driver of long-term value. The FRIL’s ESG Innovation Call will bring firms of all sizes together to create innovative solutions that raise the standards of ethical and sustainable governance, and EY is proud to play a part in supporting this”.
Angela Benson, Head of Glasgow Finance at Morgan Stanley said:
“Morgan Stanley is delighted to join this ESG Innovation Call, reflecting our steadfast commitment to integrating environmental, social, and governance principles into our core business strategies. This initiative is an excellent platform for fostering collaboration and driving forward the innovative solutions needed to address the pressing sustainability challenges we face today”.
Jennifer Simpson, Head of Climate & ESG Risk at Lloyds Banking Group said:
“LBG is excited to join the Financial Regulation Innovation Lab’s ESG Innovation Call as we recognise the critical importance of addressing climate and ESG risks, ensuring a sustainable future for our customers. This initiative also aligns with our purpose of helping Britain prosper and provides an excellent opportunity for us to work with industry partners, fintechs and researchers to develop innovative solutions that enhance ESG integration and support regulatory delivery”.
Kal Bukovski, Director of Academia and Research at Sopra Steria said:
“Our involvement underscores our dedication to advancing ESG principles through cutting-edge research and collaboration. This effort reflects Sopra Steria’s broader mission to leverage technology and expertise for positive environmental and social impact”.
Richard Nicol, Senior Product Owner at Phoenix Group said:
“This call aligns seamlessly with our commitment to integrating sustainable governance into our investment strategies. We recognise the critical role that fintech innovations can play in addressing global environmental and social challenges that not only generate strong financial returns but also contribute positively to our broader community and planet”.
Brendan Mohr, Head of Sustainability Compliance at Barclays said:
“We are delighted to participate in this initiative as it is a unique opportunity to collaborate across the industry. Financial institutions need to evolve at pace to meet both our customer’s expectations and our own strategic goals, so it is essential that we find new ways to achieve this. This is a great opportunity to find innovative solutions to accelerate change while maintaining the controls that keep our customers safe”.
Special Money2020 – Interview with Appointedd
Season 4, episode 3
Listen to the full episode here.
In this episode we spoke with Megan Grant from Edinburgh-based fintech Appointedd about Consumer Duty and the role that new innovative technologies can play in supporting established financial firms meet those new requirements.
At the end we also mentioned Megan’s next challenge: swimming the Channel to raise money for charity. To support her, follow this link: https://www.justgiving.com/page/megan-grant-channel-swim
FinTech Scotland announces grant winners in first of its kind AI compliance cluster-wide challenge.
FinTech Scotland’s Financial Regulation Innovation Lab (FRIL), a collaborative effort between FinTech Scotland, the University of Strathclyde and the University of Glasgow, announces the successful conclusion of its first innovation call, focussed on “Simplifying Compliance through the Application of AI and Emerging Technologies”.
The programme concluded with a Demo Day in Glasgow on the 30th of April, when the 15 fintech finalists showcased their innovative solutions in front of professional services firm Deloitte, and leading financial institutions including Tesco Bank, Morgan Stanley, Virgin Money and abrdn who had all contributed by providing use cases for this call.
Five winners were selected as grant recipients, each awarded up to £50,000 to further develop and implement their innovative solutions.
The winners are:
- Amiqus – one of the UK’s fastest-growing fintechs, revolutionising identity checks.
- HAELO – helping senior risk managers demonstrate individual accountability.
- Level E Research – using AI to improve buy-side compliance surveillance and potential investment decisions.
- DX Compliance – using AI to improve buy-side compliance surveillance processes.
- Pytilia – building AI applications for buy-side compliance surveillance requirements.
This funding will enable these companies to refine their technologies following insights from industry. These innovations will support the sector in increasing the efficiency and effectiveness of compliance processes to drive better customer outcomes.
The challenge attracted participation from fintech companies located in Scotland, the UK, and around the world with applications from countries such as Singapore and Canada.
The focus of the initiative has been on streamlining regulatory processes within the financial sector through advanced technological solutions. Participants underwent a three-phase programme that included challenge definition, solution design and testing, and final demonstrations. This structure provided participants with critical insights into the operational needs of financial firms, facilitated by direct collaboration, academic expertise, and service design support.
The FRIL initiative is part of the larger Glasgow City Region Innovation Accelerator programme, with Glasgow one of three pilot regions sharing a £100m investment aimed at transforming R&D within the UK. Led by Innovate UK, the programme supports the UK Government’s levelling-up agenda by empowering local regions to drive economic growth through innovation. This approach not only supports regional development but also positions the UK as a leader in the global innovation landscape.
Glasgow is a hub for science and technology which makes it an ideal setting for this new initiative. The Innovation Accelerator programme aligns with the region’s key economic goals of enhancing productivity, fostering inclusive growth, and achieving net-zero emissions.
FinTech Scotland remains committed to advancing the UK’s financial regulatory framework through cutting-edge research and development, ensuring that the UK continues to set global standards in financial innovation.
Nicola Anderson, CEO of FinTech Scotland, said:
“I am proud to see the extraordinary level of collaboration demonstrated across our fintech cluster through this first innovation call. The engagement among industry leaders, academic scholars, and public sector representatives at the Demo Day gives me confidence that our cluster delivery approach can drive real impact and continue to help us deliver the ambition set out in our Fintech Research and Innovation Roadmap. I’d like to thank and congratulate all those involved.”
Mark Cummins, Professor of Financial Technology at the University of Strathclyde, commented:
“At the University of Strathclyde we are proud to be part of the Financial Regulation Innovation Lab, responsible for grant award funding to successful fintech applicants in our Innovation Call series. The innovative thinking and insight our grant award winners from Amiqus, DX Compliance, HAELO, Level E Research and Pytilia have shown makes them deserving winners of FRIL’s first innovation call on Simplifying Compliance through AI and Emerging Technologies. Our team look forward to supporting each proposition as it develops throughout its technology roadmap, and we are excited about the potential for real industry-led innovation that may help reduce the manual intervention currently required when addressing regulatory obligations.”
Joanne Seagrave, Head of Regulatory Affairs at Tesco Bank said:
“Tesco Bank have been thoroughly impressed by the enthusiasm, innovative thinking and support we’ve received during the Financial Regulation Innovation Lab’s innovation call on Simplifying Compliance through AI and other Emerging Technologies. We’ve seen a high quality and diversity of fintechs involved, and many of the solutions presented closely match the objective we set of streamlining compliance with regulatory developments. This has advanced our understanding of AI as well as offering practical new solutions. We’re thrilled that HAELO have been successful in this innovation call and see huge potential for our sector in their proposition.”
Antony Brookes, Head of UK Investment Compliance at abrdn said:
“Our team at abrdn have been invested in the Financial Regulation Innovation Lab’s innovation call. We have relished the opportunity to engage with a number of innovative fintech companies we would not normally get access to. Their thinking and propositions on Demo Day itself were hugely insightful and have proven that AI does have a place in addressing some of the challenges we face across our industry when it comes to reporting. We are delighted that Pytilia Ltd, DX Compliance Solutions and Level E Research, who worked on our abrdn use case, have been successful in obtaining grant award funding, and we look forward to collaborating with and supporting their innovation to help enhance surveillance capabilities and ensure a more accurate and tailored approach to regulatory compliance within the asset management sector.”
Rob Sharp, Digital Sales Manager at Virgin Money commented:
“Being a leading use case strategic partner with FRIL has been a fantastic opportunity to see the passion and expertise shown by the cohort of fintechs the programme has brought together. The event showcased a range of potential AI solutions and emerging technologies, which are key areas of focus within Virgin Money’s digital strategy. Amiqus’ proposition really resonated with the challenge we set, and we are excited to be collaborating with them on the opportunities their innovative idea creates to help further improve our customer experience.”
Angela Benson, Head of Glasgow Finance at Morgan Stanley said:
“The team at Morgan Stanley have enjoyed the opportunity to participate in the Financial Regulation Innovation Lab’s first innovation call on Simplifying Compliance through AI and other Emerging Technologies. It’s these types of collaborations that will drive our fintech industry forward – from interacting with the participating fintechs to hearing different industry perspectives throughout the calls on the days we gathered together, including the breadth of innovative thinking we heard on Demo Day.”
Critique of the UK’s pro-innovation approach to AI regulation and implications for financial regulation innovation
Article written by Daniel Dao – Research Associate at the Financial Regulation Innovation Lab (FRIL), University of Strathclyde.
In recent years, artificial intelligence (AI) has come to be widely recognised as a pivotal technological advancement with the capacity to profoundly reshape societal dynamics. It is celebrated for its potential to enhance public services, create high-quality employment opportunities, and power the future. However, there remains notable opacity regarding the potential threats it poses to life, security, and related domains, requiring a proactive approach to regulation. To address this gap, the UK Government has released an AI white paper outlining its pro-innovation approach to regulating AI. While the white paper represents a genuine endeavour to provide innovative and dynamic solutions to the significant challenge posed by AI, it is important to acknowledge certain limitations, which may be refined in subsequent iterations.
The framework of the UK Government’s AI regulations in general is underpinned by five principles to guide and inform the responsible development and use of AI in all sectors of the economy: Safety, security, and robustness; Appropriate transparency and explainability; Fairness; Accountability and governance; Contestability and redress. The pro-innovation approach outlined in the UK Government’s AI white paper proposes a nuanced framework reconciling the trade-off between risks and technological adoption. While the regulatory framework endeavours to identify and mitigate potential risks associated with AI, it also acknowledges the possibility that stringent regulations could impede the pace of AI adoption. Instead of prescribing regulations tailored to specific technologies, the document advocates for a context-based, proportionate approach. This approach entails a delicate balancing act, wherein genuine risks are weighed against the opportunities and benefits that AI stands to offer. Moreover, the white paper advocates for an agile and iterative regulatory methodology, whereby insights from past experiences following evolving technological landscapes inform the ongoing development of a responsive regulatory framework. Overall, this white paper presents an initial standardised approach that holds promise for effectively managing AI risks while concurrently promoting collaborative engagement among governmental bodies, regulatory authorities, industry stakeholders, and civil society.
However, notwithstanding these advantages and potential contributions, certain limitations are often associated with inaugural documents addressing complex phenomena such as AI. Firstly, while the white paper offers extensive commentary on AI risks, its overarching thematic orientation predominantly centres on promoting AI through “soft laws” and deregulation. The white paper appears to support AI development through various flexibilities rather than prescribing stringent policies to mitigate AI risks, raising questions about where the balance lies. The mechanism of “soft laws” hinges primarily on voluntary compliance and commitment. Without legal force, there is a risk that firms may not fully adhere to their commitments or may only partially implement them.
Ambiguity is another critical issue with the “soft laws” mechanism. There is an absence of detailed regulatory provisions within the framework proposed in the white paper. While the document espouses an “innovative approach” with promising prospects, its open-ended nature leaves industries and individuals to speculate about the actions required of them, raising the potential for inconsistencies in practical implementation and adoption. Firms lack a systematic, step-by-step process and precise mechanisms to navigate the various developmental stages. Crafting stringent guidelines for AI poses considerable challenges, yet it is essential to implement them with clarity and rigour to complement existing innovative approaches effectively.
One more point is that the iterative and proportional approach advocated may inadvertently lead to “regulation lag,” whereby regulatory responses are only triggered in the wake of significant AI-related losses or harms, rather than being proactive. This underscores the necessity for a clear distinction between leading and lagging regulatory regimes, with leading regulations anticipating potential AI risks to establish regulatory guidelines proactively.
Acknowledging the notable potential and inherent constraints outlined in the AI white paper, we have identified several implications for innovation in financial regulation. The deployment of AI holds promise in revolutionising various facets of financial regulation, including bolstering risk management and ensuring regulatory compliance. The innovative approach could offer certain advantages to firms such as flexibility, cooperation, and collaboration among stakeholders to address complicated cases.
As discussed above, to improve the effectiveness of financial regulation, government authorities may consider revising and developing some key points. Given the opaque nature of AI-generated outcomes, it is imperative to apply and develop advanced techniques, such as Explainable AI (XAI), to support decision-making processes and mitigate latent risks. Additionally, while regulators may opt for an iterative approach to rule-setting to accommodate contextual nuances, it is equally important to establish robust and transparent ethical guidelines to govern AI adoption responsibly. Such guidelines, categorised as “leading” regulations, should be developed in detail and collaboratively, engaging industry stakeholders, academic experts, and civil society, to ensure alignment with societal values and mitigate potential adverse impacts. Furthermore, it is essential to establish unequivocal “hard laws” for firms and to provide for legal sanctions in cases of non-compliance. These legal instruments serve as valuable supplements to the innovative “soft laws” and contribute to maintaining equilibrium within the market.
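To make the idea behind Explainable AI (XAI) concrete, the sketch below shows one of the simplest attribution techniques: scoring an input with one feature at a time replaced by a neutral baseline, and reporting how much each feature contributed to the overall score. The model, feature names, and weights here are invented for illustration; production XAI tooling (for example SHAP or LIME) is far more sophisticated.

```python
# Hypothetical sketch: leave-one-feature-out attribution for a toy
# linear risk-scoring model. The model, features, and weights are
# invented for illustration only.

def risk_score(features, weights):
    """Toy linear model: weighted sum of the feature values."""
    return sum(weights[name] * value for name, value in features.items())

def attributions(features, weights, baseline=0.0):
    """Contribution of each feature: the change in score when that
    feature is replaced by a neutral baseline value."""
    full = risk_score(features, weights)
    out = {}
    for name in features:
        perturbed = dict(features, **{name: baseline})
        out[name] = full - risk_score(perturbed, weights)
    return out

weights = {"transaction_volume": 0.5, "account_age_years": -0.2, "country_risk": 0.8}
customer = {"transaction_volume": 3.0, "account_age_years": 5.0, "country_risk": 1.0}

print(risk_score(customer, weights))   # 0.5*3.0 - 0.2*5.0 + 0.8*1.0 = 1.3
print(attributions(customer, weights))
```

For a linear model the attributions simply recover each weighted term, but the same perturbation idea extends to opaque models where the weights are not visible, which is what makes this family of techniques relevant to explaining AI-driven regulatory decisions.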
About the author
Daniel Dao is a Research Associate at the Financial Regulation Innovation Lab (FRIL), University of Strathclyde. He is also a Doctoral Researcher in Fintech at the Centre for Financial and Corporate Integrity, Coventry University, where his research focuses on fintech (crowdfunding), sustainable finance and entrepreneurial finance. In addition, he works as an Economic Consultant at the World Bank Group’s Washington DC headquarters, where he has contributed to various policy publications and reports, including the World Development Report 2024, Country Economic Memoranda for Latin American and Caribbean countries, and policy working papers on labour, growth, and policy reforms, among others. He is a CFA Charterholder and an active member of CFA UK. He earned his MBA (2017) in Finance from Bangor University, UK, and his MSc (2022) in Financial Engineering from WorldQuant University, US. He has shown a strong commitment to international development and high-impact policy research, and his proficiency extends to data science techniques and advanced analytics, with a specific focus on artificial intelligence, machine learning, and natural language processing (NLP).
Photo by Markus Winkler: https://www.pexels.com/photo/a-typewriter-with-the-word-ethics-on-it-18510427/
Simplifying Compliance with AI
Season 4, episode 2
Listen to the full episode here.
In this episode, we explore the role of Artificial Intelligence (AI) in streamlining compliance within the financial sector, showcasing the Financial Regulation Innovation Lab.
We discuss the future of financial compliance, enriched by AI’s capability to automate and innovate. This episode is for anyone interested in the intersection of technology, finance, and regulation, offering insights into the collaborative efforts shaping a more compliant and efficient financial landscape.
Guests:
- Antony Brookes – Head of UK Investment Compliance, abrdn
- Mark Cummins – Professor of Financial Technology at University of Strathclyde
- Joanne Seagrave – Head of Regulatory Affairs at Tesco Bank
Generative AI in the Context of UK/EU Regulation
The UK Government released its AI White Paper on 29 March 2023, outlining its plans for overseeing the implementation of artificial intelligence (AI) in the United Kingdom. The White Paper is a follow-up to the AI Regulation Policy Paper, which outlined the UK Government’s vision for a future AI regulatory system in the United Kingdom that is supportive of innovation and tailored to specific contexts.
The White Paper presents an alternative methodology for regulating AI in contrast to the EU’s AI Act. Rather than enacting comprehensive legislation to govern AI in the United Kingdom, the UK Government is prioritising the establishment of guidelines for the development and utilisation of AI. Additionally, it aims to enhance the authority of existing regulatory bodies such as the Information Commissioner’s Office (ICO), the Financial Conduct Authority (FCA), and the Competition and Markets Authority (CMA) to provide guidance and oversee the use of AI within their respective domains.
What is the key information from the UK White Paper and EU Legal Framework for Artificial Intelligence?
In contrast to the proposed EU AI Act, the AI White Paper does not put forward a comprehensive definition of the terms “AI” or “AI system”. Instead, the White Paper defines AI by two key attributes – adaptivity and autonomy – in order to ensure that the proposed regulatory framework remains relevant and effective in the face of emerging technology. Although the absence of a clear-cut definition of AI may cause legal ambiguity, it will be the responsibility of the various regulators to provide instructions to firms, outlining their requirements on the use of AI within their jurisdictions.
The regulatory framework outlined in the AI White Paper, put forth by the UK Government, encompasses the entirety of the United Kingdom. The White Paper does not suggest altering the territorial scope of current UK legislation pertaining to AI. Essentially, this implies that if the current laws regarding the use of AI have jurisdiction outside national borders (like the UK General Data Protection Regulation), the instructions and enforcement by existing regulatory bodies may also apply outside of the United Kingdom. For a comparison of the UK and EU approaches to AI regulation, see Table 1 in the Appendix.
What will be the impact of the EU AI Act on the UK?
Once the EU AI Act is implemented, it will apply to UK firms that utilise AI systems within the EU, make them available on the EU market, or participate in any other activity regulated by the Act. These UK organisations must ensure that their AI systems are compliant or risk financial penalties and damage to their brand.
Nevertheless, the AI Act may have broader ramifications, perhaps causing a ripple effect for UK firms only operating within the UK market. The AI Act is expected to establish a global benchmark in this domain, much like the General Data Protection Regulation (GDPR) has done for data protection. There are two possible implications: firstly, UK companies that actively adopt and adhere to the AI Act can distinguish themselves in the UK market, attracting customers who value ethical and responsible AI solutions; secondly, as the AI Act becomes a benchmark, we may witness the UK’s domestic regulations aligning with the AI Act in order to achieve consistency.
Moreover, the EU AI Act is a crucial legislative measure that promotes voluntary adherence, even for companies that may not initially be subject to its provisions (as emphasised in Article 69, which pertains to Codes of Conduct). Consequently, the Act is expected to have an effect on UK companies, especially those that provide AI services in the EU and utilise AI technologies to deliver their services within the region. It is essential to remember that numerous UK enterprises have a market presence that extends well beyond the borders of the UK, therefore making the EU AI Act very pertinent to them.
How does the United Kingdom’s approach compare to those of other countries?
The UK Government is charting its own course on AI regulation, with the objective of establishing rules for AI that promote innovation while safeguarding the rights and interests of individuals. The AI White Paper incorporates several ideas that align with the European Union’s position on artificial intelligence. As an illustration, the Government plans to establish a novel regulatory framework for AI systems that pose substantial risks, and it intends to mandate that enterprises perform risk evaluations before utilising AI tools. This requirement is logical, particularly where an AI tool handles personal data, as data protection by design and by default are important tenets of the UK GDPR. Nevertheless, the AI White Paper specifies that these ideas will not, at least initially, be implemented through legislation. Thus, the level of acceptance of these principles, and the impact of their voluntary nature on adoption by organisations throughout the UK, remain uncertain.
The UK government has expressed its ambition to become a dominant force in the field of AI, taking the lead in establishing global regulations and standards to ensure the safe deployment of AI technology. As part of this endeavour, the UK hosted the AI Safety Summit in the autumn of 2023. A global agreement on AI regulation would help mitigate the negative consequences arising from emerging technology advancements.
Nevertheless, the international community’s history of coordinating regulation does not instil confidence. Early social media legislation, influenced by certain technology companies, granted platforms legal immunity for hosting user-generated content, which later made regulating online harms difficult. There is a real risk that this mistake will be repeated with AI. Although both the Prime Minister and the EU Commission President have recently called for the establishment of a counterpart to the Intergovernmental Panel on Climate Change, reaching a unified agreement on climate change has itself proven challenging due to conflicting national interests, and similar conflicts are evident in the context of artificial intelligence.
The UK’s present strategy for regulating AI differs from the EU’s proposed method outlined in the EU AI Act. The EU’s proposal involves implementing strict controls and transparency obligations for AI systems deemed “high risk,” while imposing less stringent standards for AI systems considered “limited risk.” The majority of general-purpose AI systems are considered to have a high level of risk. This means that there are specific rules that developers of foundational models must follow, and they are also required to provide detailed reports explaining how the models are trained.
Additionally, there exists a collaborative effort between the United States and the European Union to create a collection of non-binding regulations for companies, known as the “AI Code of Conduct,” in accordance with their shared plan for ensuring reliable and secure AI and mitigating its risks. The code of conduct will be advanced via the Hiroshima Process at the G7 to foster global agreement on AI governance. If this endeavour is successful, the UK’s influence on the formulation of international AI regulations may diminish. However, the publication of the AI Bill of Rights in the USA in October 2022 has the potential to result in a more principles-oriented approach that is in line with the United Kingdom’s.
Despite these potential dangers, the UK is establishing itself as a nation where companies can create cutting-edge AI technology and perhaps become a global leader in this field. This could be beneficial provided that a suitable equilibrium can be achieved between innovation and the secure advancement of systems.
What will be the effect of the EU AI Act on UK companies utilising Generative AI?
Due to the increasing popularity and widespread influence of Generative AI and Large Language Models (LLMs) in 2023, the EU AI Act underwent significant modifications in June 2023, specifically addressing the utilisation of Generative AI.
Foundation models are a category of expansive machine learning models that form the fundamental framework for constructing a diverse array of artificial intelligence applications. These models are pre-trained on extensive datasets, which allows them to acquire knowledge of intricate patterns, relationships, and structures present in the data. By refining foundation models for specific applications or domains, developers can achieve impressive capabilities in natural language processing, computer vision, and decision-making. Examples of foundation models include OpenAI’s GPT series (which underpins ChatGPT) and Google’s BERT and PaLM 2. Owing to their versatility and adaptability, foundation models have been essential to the advancement of sophisticated AI applications across diverse industries.
Companies now engaged in developing applications that utilise Generative AI Large Language Models (LLMs) and comparable AI technologies, such as ChatGPT, Google Bard, Anthropic’s Claude, and Microsoft’s Bing Chat or ‘Bing AI’, must carefully consider the consequences of the EU AI Act. These companies should be cognisant of the Act’s potential ramifications for their operations and proactively take measures to ensure adherence, irrespective of whether they are specifically targeted by the legislation. By doing so, they can stay ahead and sustain a robust presence in the ever-changing AI landscape.
Companies utilising these AI tools and ‘foundation models’ to provide their services must carefully assess and manage risks in accordance with Article 28b, and adhere to the transparency requirements outlined in Article 52(1).
The primary objective of the EU AI Act is to establish a benchmark for AI safety, ethics, and responsible use, while also enforcing transparency and accountability requirements. Article 52(3) of the EU AI Act, as revised in June 2023, imposes specific requirements on the use of Generative AI.
In conclusion
Regulating AI in all its forms is a daunting and pressing task, but an essential one. Amidst the prevalent and rapidly increasing adoption of AI, regulations must guarantee the reliability of AI systems, minimise AI-related risks, and establish mechanisms to hold accountable those involved in the development, deployment, and use of these technologies in cases of failure and malpractice.
The UK’s involvement in this challenge is appreciated, as is its commitment to advancing the goal of AI governance on the global stage. The UK has the chance to establish itself as a thought leader in global AI governance by introducing a context-based, institutionally focused framework for regulating AI. This approach might potentially be adopted by other global jurisdictions as a standard. The emergence and rapid advancement of Generative AI places heightened responsibility on the UK to assume this thought leadership role.
APPENDIX
Table 1: Comparison between UK and EU: AI White Paper vs Legal Framework for Artificial Intelligence

| Aspects | UK | EU |
| --- | --- | --- |
| Approach | 1. Ensure the safe utilisation of AI: safety is expected to be a fundamental concern in specific industries, such as healthcare or vital infrastructure. Nevertheless, the Policy Paper recommends that regulators adopt a context-dependent approach in assessing the probability of AI endangering safety and a proportionate strategy in mitigating this risk. | 1. The European Parliament ratified the EU AI Act on 14 June 2023. |
| | 2. Ensure the technical security and proper functioning of AI: AI systems must possess robust technical security measures and operate according to their intended design and functionality. The Policy Paper proposes that AI systems undergo testing to assess their functionality, resilience, and security, taking into account the specific context and proportionality considerations. Additionally, regulators are expected to establish the regulatory requirements for AI systems in their respective sectors or domains. | 2. European institutions will now commence negotiations to achieve consensus on the final text. Consequently, the earliest possible implementation of the EU AI Act would be in 2025, even if it is adopted promptly. |
| | 3. Ensure that AI is adequately transparent and explainable: the Policy Paper recognises that AI systems may not always be easily explicable, and in most cases this is unlikely to present significant risks. Nevertheless, the Policy Paper proposes that, in specific high-risk circumstances, decisions that cannot be adequately justified may be disallowed by the appropriate regulator. This could include situations such as a tribunal decision where the absence of a clear explanation would prevent an individual from exercising their right to contest the ruling. | 3. Jurisdictional scope: if implemented, the EU AI Act will impose a series of obligations on both providers and deployers of in-scope AI systems that are used within, or have an impact on, the EU, regardless of where those parties are based. |
| | 4. Integrate fairness into AI: the Policy Paper suggests that regulators provide a clear definition of “fairness” within their specific sector or area and specify the circumstances in which fairness should be taken into account (such as in the context of job applications). | |
| | 5. The Policy Paper asserts that legal persons must bear responsibility for AI governance, ensuring that they are held accountable for the results generated by AI systems and assume legal liability. This responsibility applies to an identified or identifiable legal entity. | 4. Broadening the ban on specific applications of AI systems to encompass remote biometric identification in publicly accessible areas, as well as emotion recognition and predictive policing technologies. |
| | 6. Elucidate pathways for seeking redress or challenging decisions: as stated in the Policy Paper, the use of AI should not eliminate the opportunity for individuals and groups to contest a decision if they have the right to do so outside the realm of AI. Hence, the UK Government will require regulators to guarantee that the results produced by AI systems can be challenged in “relevant regulated circumstances”. | 5. The scope of high-risk AI systems has been extended to encompass systems employed for voter manipulation or used in the recommender systems of very large online platforms (VLOPs). |
| | | 6. Establishing regulations for providers of foundation models: AI systems trained on extensive data, designed to produce general outputs, and customisable for various specific purposes, including those that drive generative AI systems. |
| | | 7. Prohibited risks, such as social scoring or systems that exploit vulnerabilities of specific groups of individuals, are considered unacceptable. |
| | | 8. High-risk activities may be allowed, provided that they adhere strictly to requirements for conformity, documentation, data governance, design, and incident-reporting obligations. These encompass systems used in civil aviation security, medical devices, and the administration and operation of vital infrastructure. |
| | | 9. Systems that directly engage with humans, such as chatbots, are allowed as long as they meet specific transparency requirements, including informing end users that they are dealing with a machine and ensuring that the risk is limited. |
| | | 10. Provide evidence, through suitable design, testing, and analysis, that reasonably foreseeable risks have been correctly identified and minimised. |
| | | 11. Utilise only datasets that adhere to proper data-governance protocols for foundation models, ensuring that data sources are suitable and potential biases are taken into account. |
| | | 12. Design and build the model to attain appropriate levels of performance, predictability, interpretability, corrigibility, safety, and cybersecurity. |
| | | 13. Produce comprehensive technical documentation and clear instructions for use that enable downstream providers to fulfil their obligations effectively. |
| | | 14. Implement a quality-management system to guarantee and record adherence to the aforementioned obligations. |
| | | 15. Register the foundation model in a European Union database to be maintained by the Commission. |
| | | In addition, providers of foundation models used in generative AI systems would be required to disclose that content was generated by AI, ensure that the system includes safeguards against generating content that violates EU law, and provide a summary of any copyright-protected training data used. |
| Regulators | The Policy Paper designated the Information Commissioner’s Office (ICO), Competition and Markets Authority (CMA), Ofcom, Medicines and Healthcare products Regulatory Agency (MHRA), and Equality and Human Rights Commission (EHRC) as the principal regulators in its new system. | 1. National competent authorities for supervising application and implementation. |
| | Note: Although several UK regulators and government agencies have initiated measures to promote the appropriate use of AI, the Policy Paper underscores the hurdles businesses currently face, such as a lack of transparency, redundancies, and inconsistency among regulatory bodies. | 2. European Artificial Intelligence Board for coordination and advice. |
About the Author
Dr. Hao Zhang is a Research Associate at the Financial Regulation Innovation Lab (FRIL), University of Strathclyde. He holds a PhD in Finance from the University of Glasgow, Adam Smith Business School. Previously, Hao held the position of Senior Project Manager at the Information Center of the Ministry of Industry and Information Technology (MIIT) of the People’s Republic of China. His recent research has focused on asset pricing, risk management, financial derivatives, and the intersection of technology and data science.
Photo by Kelly : https://www.pexels.com/photo/road-sign-with-information-inscription-placed-near-street-on-sunny-day-3861780/
Navigating the Tides of Regulatory Risk: Insights from Pinsent Masons’ April 2024 Edition
The April 2024 Edition of Pinsent Masons’ Regulatory Risk Trends offers a deep dive into the current and emerging issues shaping the worlds of finance, legal compliance, and corporate governance. This comprehensive document, authored by leading experts in the field, serves as a valuable resource for businesses, financial institutions, and legal professionals navigating a complex regulatory environment.
The report opens with thoughts from Jonathan Cavill, Partner at Pinsent Masons, who specialises in contentious regulatory and financial services disputes. His expertise sets the stage for an in-depth exploration of the regulatory challenges and opportunities that lie ahead.
Key takeaways
- Consumer Protection: The document highlights the Financial Conduct Authority’s (FCA) intensified focus on the fair treatment of customers, especially the vulnerable ones. With references to recent reviews and consultations, it stresses the importance of businesses aligning their practices with these standards.
- Fair Value and Insurance Sector Scrutiny: The FCA’s call for insurers to act upon the publication of the latest fair value data underscores a shift towards greater transparency and fairness in insurance pricing. The report examines the implications of these demands and offers strategies for compliance.
- Market Operations and Monetary Policy: Insights from Colin Read explore the Bank of England’s Sterling Monetary Framework (SMF) and its implications for market stability and liquidity. This section is crucial for understanding central bank reserves and the broader economic landscape.
- Advancements in Consumer Investments: Elizabeth Budd delves into the FCA’s strategy for consumer investments, emphasising the new Consumer Duty and its impact on financial advisers and investment firms. This represents a significant shift towards ensuring that consumer interests are at the heart of financial services.
- Innovation in Payment Systems: Andrew Barber’s commentary on the latest policy statements from the Bank of England provides a glimpse into how regulatory bodies are supporting payments innovation, particularly through the Real-Time Gross Settlement (RTGS) system. This is vital for fintech companies and traditional financial institutions alike.
- Fighting Financial Scams: The document doesn’t shy away from the darker side of finance, addressing the ongoing battle against scams. It presents a detailed analysis of recent cases and regulatory responses, offering valuable lessons and preventive strategies.
- Gender Equality: The Financial Services Compensation Scheme’s (FSCS) efforts in promoting gender equality within the financial sector are also covered. This initiative reflects a broader movement towards diversity and inclusion in finance, highlighting the societal values shaping regulatory agendas.
The Pinsent Masons’ April 2024 Edition of Regulatory Risk Trends is a roadmap for navigating the regulatory environment with confidence and foresight, giving you access to:
- Detailed analyses of regulatory developments and their implications for various sectors.
- Expert commentary from leading figures in law and finance.
- Strategic recommendations for staying ahead in a regulatory landscape marked by rapid change and increased scrutiny.
FCA Consumer Duty and Financial Inclusion: Does Artificial Intelligence Matter?
The Consumer Duty: What does it entail?
The Financial Conduct Authority (FCA) has recently issued the Consumer Duty Principle to guide financial services firms’ conduct in delivering good outcomes to their retail customers. The Consumer Duty is consumer-centric and outcome-oriented with the potential to bring about major transformation in the financial services industry.
The Consumer Duty is supported by three cross-cutting rules that require firms to:
- Act in good faith towards retail customers.
- Avoid causing foreseeable harm to retail customers.
- Enable and support retail customers to pursue their financial objectives.
The Consumer Duty is expected to help firms achieve the following outcomes:
- The first outcome relates to products and services, where products and services are designed to meet the needs of consumers.
- The second outcome relates to price and value, which inter alia focuses on ensuring that consumers receive fair value for goods and services.
- The third outcome seeks to promote consumer understanding through effective communication and information sharing. This is to ensure that consumers understand the nature and characteristics of products and services including potential risks.
- The fourth outcome relates to consumer support, where consumers are supported to derive maximum benefits from financial products and services.
What are the implications for financial inclusion?
The Consumer Duty has significant implications for financial inclusion. Financial inclusion refers to access to and usage of financial services. While access is the primary objective of financial inclusion, it does not always translate into usage due to several inhibiting factors such as price, transaction costs, and service quality. Removing the bottlenecks that limit the usage of financial services is therefore indispensable to unlocking the full benefits of financial inclusion.
The Consumer Duty is expected to trigger behavioural changes among financial institutions leading to significant effects on financial inclusion. Financial institutions are compelled to comply with the Consumer Duty and the cross-cutting rules to avoid regulatory risks that may take the form of sanctions. This implies that consumers will now have access to products and services that are fit for purpose, receive fair value for goods and services purchased, have a better understanding of products and services, and receive the support needed to derive maximum benefits from financial services. In this scenario, financial wellbeing will improve leading to a reduction in poverty and income inequality.
In contrast, however, the Consumer Duty can serve as a disincentive to innovate, especially when the costs of compliance far outweigh the benefits, and this has significant implications for financial inclusion. Compliance costs may come in various forms, including recruiting or training staff, and updating existing software and systems or purchasing new ones. To reduce the risk of non-compliance, financial institutions may become reluctant to innovate, thereby limiting consumer choice. Firms may equally avoid providing services in areas, and to segments of the population, where the risk of non-compliance is high. In this case, vulnerable groups and areas are likely to be excluded from the provision of financial services (financial exclusion). These aspects of firms’ behaviour are likely to be unobserved and subtle, making them difficult to detect.
Does Artificial Intelligence matter?
Financial institutions are likely to adopt regulatory technologies and Artificial Intelligence (AI) solutions to comply with the Consumer Duty. This is particularly true given that financial firms are constantly searching for automation and AI solutions to drive down the costs of regulatory compliance. The deployment of Machine Learning (ML) and AI in Anti-Money Laundering (AML) systems is taking centre stage in the financial services industry. AI-powered AML systems hold great promise to help financial services firms detect, in real time, suspicious activities that are likely to cause significant harm to consumers.
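To give a flavour of the kind of rule such monitoring systems build on, the sketch below flags transactions that deviate sharply from an account’s historical pattern. It is a deliberately naive illustration under invented numbers, not any firm’s actual AML engine, which would combine many signals with ML models and human case review:

```python
from statistics import mean, stdev

def flag_suspicious(amounts, threshold=2.0):
    """Return the amounts lying more than `threshold` standard
    deviations from the account's historical mean.

    A large outlier also inflates the standard deviation itself,
    so production systems often prefer robust statistics
    (median / median absolute deviation) instead.
    """
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

history = [120, 95, 130, 110, 105, 98, 125, 5000]  # one unusual payment
print(flag_suspicious(history))  # [5000]
```

Real-time deployment would apply the same idea to a rolling window of transactions per account, with alerts routed to a compliance analyst rather than acted on automatically.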
AI can help financial firms deliver good outcomes to consumers at low cost, especially to those at risk of financial exclusion. AI and ML algorithms can equip financial firms with the capability to onboard customers remotely and conduct remote identification checks, thereby reducing costs. AI-powered solutions available to financial institutions during customer onboarding include, but are not limited to, real-time data access using open Application Programming Interfaces (APIs), image forensics, digital signatures and verification, facial recognition, and video-based KYC (Know Your Customer). Remote customer onboarding simplifies the account-opening process and reduces the costs and inconveniences associated with physical travel to bank branches, which can discourage financially excluded consumers from accessing financial services.
AI and Natural Language Processing (NLP) play significant roles in customer-facing functions. The use of chatbots has the prospect of enhancing customer experiences through rapid resolution of queries. Banks, for example, are moving from simple chatbot technologies to more advanced ones, including Large Language Models and Generative AI, to enhance customer service. These advanced technologies facilitate communication between financial institutions and their customers.
AI and ML technologies also support automated investment and financial advisory services. Robo-advisors use ML algorithms to automatically offer the targeted investment or financial advice that would otherwise be provided by human financial advisers. These technologies extend advisory services to a wide range of consumers, including low-income consumers, in a cost-effective manner.
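In grossly simplified form, a robo-adviser’s suitability logic can be sketched as a rule mapping an investor’s profile to an asset allocation. The rule below (the classic “110 minus age” heuristic, shifted by stated risk tolerance) is purely illustrative; a real robo-adviser fits such rules to data and layers on regulatory suitability checks:

```python
def suggest_allocation(age: int, risk_tolerance: str) -> dict:
    """Map an investor profile to a toy equity/bond split.

    Starts from the '110 minus age' heuristic, then shifts the
    equity share by a fixed amount for the stated risk tolerance,
    clamped to the 0-100 range. Illustrative only.
    """
    equity = 110 - age
    equity += {"low": -15, "medium": 0, "high": 15}[risk_tolerance]
    equity = max(0, min(100, equity))
    return {"equity_pct": equity, "bond_pct": 100 - equity}

print(suggest_allocation(35, "medium"))  # {'equity_pct': 75, 'bond_pct': 25}
```

The point of automating this logic is scale: the marginal cost of advising one more customer is near zero, which is what makes serving low-income consumers economical.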
AI and ML technologies offer financial institutions the potential to explore alternative approaches to risk scoring, using both structured and unstructured consumer data to predict consumers’ creditworthiness. Alternative risk scoring has the potential to facilitate the provision of credit to consumers with limited credit histories and low incomes.
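In its simplest form, such a scoring model reduces to a weighted combination of features passed through a logistic function. The feature names and weights below are invented for illustration; a real lender would estimate them from historical repayment data rather than set them by hand:

```python
import math

# Hypothetical weights over alternative-data features; in practice
# these would be learned from historical repayment outcomes.
WEIGHTS = {
    "on_time_utility_payments": 1.2,  # alternative data: utility bills
    "mobile_money_activity": 0.8,     # alternative data: wallet usage
    "months_at_address": 0.05,
    "prior_defaults": -2.5,
}
BIAS = -1.0

def repayment_score(applicant: dict) -> float:
    """Logistic score in (0, 1): higher means more likely to repay."""
    z = BIAS + sum(w * applicant.get(k, 0.0) for k, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

applicant = {"on_time_utility_payments": 1, "mobile_money_activity": 2,
             "months_at_address": 24, "prior_defaults": 0}
print(round(repayment_score(applicant), 3))  # 0.953
```

The appeal for inclusion is that the features need not come from a credit bureau: utility bills and mobile-money usage exist even for thin-file applicants.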
What are some of the challenges with AI?
Regulatory technologies such as AI hold great prospects for compliance, but their deployment comes with risks that can undermine the gains of financial inclusion. AI models, for example, are prone to embedded bias, especially when the underlying dataset discriminates against certain groups or persons, leading to differentiation in pricing and service quality. Bias in credit-scoring algorithms can exclude vulnerable groups or regions from accessing loans, and even where access exists, such loans are likely to be offered at high interest rates owing to unfair credit scoring. Likewise, bias in the underlying datasets of chatbots and robo-advisors can lead to misinformation and cause significant harm to consumers. Data privacy concerns are also increasing, given that any leakage of the dataset used to train AI models can expose sensitive consumer information. Finally, AI and ML technologies are not immune to cyber-attacks and technical glitches, which can disrupt their functionality and expose consumers to harm. These examples imply that regulatory technologies and AI models pose a non-compliance risk under the Consumer Duty, especially if they inhibit the delivery of good outcomes to consumers, for example through discrimination or data privacy breaches.
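One simple diagnostic a firm might run over its own decision logs is a demographic parity check: comparing approval rates across groups. The sketch below uses invented group labels and decisions, and a large gap is a prompt for human review rather than proof of unlawful discrimination on its own:

```python
def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs.

    Returns per-group approval rates and the demographic parity gap
    (largest minus smallest rate across groups).
    """
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    rates = {g: approved[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

log = [("A", True), ("A", True), ("A", False), ("A", True),
       ("B", True), ("B", False), ("B", False), ("B", False)]
print(approval_rates(log))  # ({'A': 0.75, 'B': 0.25}, 0.5)
```

Demographic parity is only one of several competing fairness criteria (equalised odds and calibration are others), and they cannot all be satisfied at once, which is one reason human oversight of such diagnostics remains essential.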
What is the way forward?
The Consumer Duty is an important regulatory initiative with enormous potential to deepen financial inclusion and accelerate the positive contribution of financial inclusion to development. To achieve the objective of delivering good outcomes to consumers there is a need for constant engagement between the Financial Conduct Authority and stakeholders in the financial services industry. This will help timely identification and resolution of challenges that may arise during the implementation of the Consumer Duty.
While regulatory technologies and Artificial Intelligence are likely to play central roles in complying with the Consumer Duty there is the need for financial institutions to ensure that these technologies are themselves compliant with the Consumer Duty. This can be achieved by addressing the risks inherent in regulatory technologies and AI models. Senior managers of financial institutions are expected to play leading roles in mitigating the risk of non-compliance within the firm in line with the Senior Managers & Certification Regime.
About the Author(s)
Godsway Korku Tetteh is a Research Associate at the Financial Regulation Innovation Lab, University of Strathclyde (UK). He has several years of experience in financial inclusion research including digital financial inclusion. His research focuses on the impacts of digital technologies and financial innovations (FinTech) on financial inclusion, welfare, and entrepreneurship in developing countries. His current project focuses on the application of technologies such as Artificial Intelligence to drive efficiency in regulatory compliance. Previously, he worked as a Knowledge Exchange Associate with the Financial Technology (FinTech) Cluster at the University of Strathclyde. He also worked with the Cambridge Centre for Alternative Finance at the University of Cambridge to build the capacity of FinTech entrepreneurs, regulators, and policymakers from across the globe on FinTech and Regulatory Innovation. Godsway has a Ph.D. in Economics from Maastricht University (Netherlands) and has published in reputable journals such as Small Business Economics.
Email: godsway.tetteh@strath.ac.uk
LinkedIn: https://www.linkedin.com/in/godsway-k-tetteh-ph-d-83a82048/
Photo by Tara Winstead: https://www.pexels.com/photo/an-artificial-intelligence-illustration-on-the-wall-8849295/