Revolutionising Financial Futures: UK Fintech Challenge Pioneers Data-Driven Solutions for Later Life Planning

FinTech Scotland and Smart Data Foundry are collaborating to bring an industry-wide UK fintech innovation challenge to the market. This challenge seeks to inspire the development of inclusive financial services for consumers, empowering them on their journey towards a secure financial future.

Building on the success of the previous SME business banking programme in 2023, the 2024 emphasis will be on new innovative solutions that can support people’s financial journeys as they plan for their later years.

UK fintechs can now apply to take part in the challenge, and up to six successful applicants will be awarded a £5K participation fund to allocate resources to developing their idea. In addition to funding, the challenge offers a fantastic opportunity for participants to present to some of the largest financial institutions in the UK, including NatWest Group, PwC, and Royal London, as well as engage with experts in data, technology, and fintech. This exposure will allow innovators to gain valuable insights, receive expert guidance, and enter potential collaborations, maximising the chances of success for their projects.

Another key feature of the challenge is Smart Data Foundry’s provision of synthetic data replicating consumer banking products as well as investment and savings products*. This will enable fintechs to thoroughly test and refine their innovations, ensuring the development of robust and effective solutions that address consumers’ real needs.

As life expectancy in the UK and around the world continues to increase, the number of people living later in life is growing rapidly. It is expected that average life expectancy in the UK will be 85.9 years by 2050 – in 1950 it was 68.6 years.1 This demographic shift has a significant impact on the current and future cost of living, as there is an increased need to be financially secure for longer. Fintech solutions need to consider future products and services that will help prepare people financially for a longer life.

Through support from the Strength in Places UK Research and Innovation Grant, a prize fund of £45K will be offered to promising projects arising from the challenge. This will allow entrepreneurs and innovators to further develop and implement their ideas, which will help unlock later-life planning for consumers.

Those interested in taking part have until 17 May to submit their application.

Samantha Brand, Innovations & Partnerships at NatWest Group, said:

“We are thrilled to partner with FinTech Scotland and Smart Data Foundry on the innovation call for Supporting Later Financial Lives. This growing customer segment spans life stages with varying product requirements, and we believe there are specific needs to be solved in this space. We look forward to working with innovators to understand how we can create the best solution for our customers. The challenge aligns firmly with NatWest’s purpose to champion potential, helping people, families and businesses to thrive.”

Sarah Collins, Director PwC United Kingdom, commented:

“We are thrilled to support this innovation challenge, which represents an exciting opportunity to harness the power of open finance data. The power of fintech can help consumers gain greater control over their financial futures, ultimately enabling them to make smarter decisions as they plan for later life. We are delighted to be working with FinTech Scotland and Smart Data Foundry to accelerate data driven innovation.”

Bryn Coulthard, Chief Product and Technology Officer at Smart Data Foundry, said:

“Our continued partnership with FinTech Scotland in this innovation challenge underscores our commitment to empowering consumers with innovative, data-driven solutions. By leveraging the power of data, technology, and fintech expertise, we hope this challenge will help to revolutionise financial services, ensuring individuals can embark on their later years with confidence and security. Through initiatives like this, we’re envisioning the future and actively shaping it.”

Nicola Anderson, CEO of FinTech Scotland, said:

“Together with Smart Data Foundry, we are excited to launch this new innovation challenge focusing on later financial lives. As we explore innovation in this domain we hope it will also generate fresh insights into the potential for Open Finance data. This is a great opportunity to explore that potential, with a focus on delivering smarter, and future-focused customer solutions. We are excited to see how these new ideas will help evolve the digital financial landscape with a focus on accessibility and using data to capture the needs of our rapidly evolving society.”

Those interested in taking part can find out more here: FinTech Scotland | Innovation Challenge to support consumers in their later financial lives.

CreditNature’s Accreditation: Pioneering Natural Capital Investment

CreditNature has announced a new achievement. The Scottish fintech just secured the world’s first independent accreditation for a Terrestrial Ecosystem Condition Method under the Accounting for Nature® Standard, which represents a huge leap forward for ecosystem restoration and natural capital investment.

The accreditation is a result of a year-long process where CreditNature, in collaboration with the renowned Accounting for Nature® and their expert panel, developed a robust method enabling businesses to measure and report on nature-positive impacts of their investments. This method aligns with crucial reporting frameworks such as the Taskforce on Nature-related Financial Disclosures (TNFD) and the EU Corporate Sustainability Reporting Directive (CSRD), ensuring that businesses can disclose impactful Environmental, Social, and Governance (ESG) outcomes.

The Ecosystem Condition Method, forming part of CreditNature’s NARIA framework, provides a standardised and accredited metric that quantifies ecosystem integrity on a scale of 0 to 100. This allows for consistent reporting across varied landscapes, not just in Europe, but with plans to expand globally including regions like Africa, the Americas, and Southeast Asia.

  • Impacts and Visualisation: CreditNature’s method does more than quantify; it also offers real-world visualisations of restoration progress through their innovative dashboard. This feature provides investors with evidence and insights into restoration gains, complemented by narratives, accredited Key Performance Indicators (KPIs), and vivid media of nature restoration successes.
  • Endorsements and Support: The method has garnered support from key environmental figures and governmental bodies. Dr. Peter Phillips of the Scottish Government and Prof Hugh Possingham of Accounting for Nature have both praised the accreditation for setting high standards and providing a reliable method for quantifying Nature Positive outcomes.

This development not only showcases CreditNature’s commitment to scientific excellence and innovation but also enhances the credibility and viability of investing in natural capital on a global scale.

Stuart Barnard Becomes First CFO of Encompass Corporation

Scotland-based fintech Encompass Corporation, a leader in the field of Corporate Digital Identity (CDI), just announced the appointment of Stuart Barnard as its first Chief Financial Officer (CFO).

Stuart joins the company with over 15 years’ experience in the industry. He brings extensive expertise in finance, particularly in growing start-ups and scale-ups, to oversee Encompass’ global financial strategies and operations. His new role at Encompass will see him lead finance, legal, human resources, revenue operations, IT, and information security.

Barnard’s appointment recognises his pivotal contributions to the company since joining the team in 2016. Initially serving as Head of Finance, and later as VP of Finance and Business Operations, Stuart has been instrumental in driving Encompass’ rapid growth and international expansion.

Reflecting on his new role, Stuart Barnard remarked,

“I am delighted to continue what has been a remarkable journey at Encompass by taking up the role of CFO. It has been a privilege to be part of establishing the organisation as a global leader, with the foundations in place to continue to go from strength to strength, supported by a first-class team and innovation. It is a truly exciting time, and we are well placed to continue to flourish as a top player in the market, fuelled by our focus on unlocking the benefits of CDI for our customers.”

Wayne Johnson, co-founder and CEO of Encompass Corporation, expressed his enthusiasm about Barnard’s promotion, saying,

“Stuart has been instrumental to the success of Encompass, from the early days until now, as we have developed our offering, operations and culture – all of which have allowed us to scale internationally. His knowledge, expertise, and business acumen have been vital to ensuring Encompass remains ahead of the curve, and the diverse, high-performing talent that he has brought into his teams has been central to not only how we operate on a daily basis, but to instilling the values that we pride ourselves on as a forward-thinking employer that enables individuals to thrive.”

Stuart’s appointment to the executive team follows other significant hires this year, including Neil Acworth as Chief Information Security Officer (CISO) and Job den Hamer, former CEO of CoorpID, as Head of Business Development, highlighting Encompass’ ongoing commitment to strengthening its leadership team and enhancing its market position.

Critique of the UK’s pro-innovation approach to AI regulation and implications for financial regulation innovation

Article written by Daniel Dao – Research Associate at the Financial Regulation Innovation Lab (FRIL), University of Strathclyde.


Artificial intelligence (AI) is now widely recognised as a pivotal technological advancement with the capacity to profoundly reshape societal dynamics. It is celebrated for its potential to enhance public services, create high-quality employment opportunities, and power the future. However, there remains notable opacity regarding the threats it poses to life, security, and related domains, which calls for a proactive approach to regulation. To address this gap, the UK Government has released an AI white paper outlining its pro-innovation approach to regulating AI. While the white paper represents a genuine endeavour to provide innovative and dynamic solutions to the significant challenge posed by AI, it retains certain limitations that subsequent iterations may refine.

The framework of the UK Government’s AI regulations in general is underpinned by five principles to guide and inform the responsible development and use of AI in all sectors of the economy: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; contestability and redress. The pro-innovation approach outlined in the UK Government’s AI white paper proposes a nuanced framework reconciling the trade-off between risks and technological adoption. While the regulatory framework endeavours to identify and mitigate potential risks associated with AI, it also acknowledges the possibility that stringent regulations could impede the pace of AI adoption. Instead of prescribing regulations tailored to specific technologies, the document advocates a context-based, proportionate approach. This approach entails a delicate balancing act, wherein genuine risks are weighed against the opportunities and benefits that AI stands to offer. Moreover, the white paper advocates an agile and iterative regulatory methodology, whereby insights from past experience and the evolving technological landscape inform the ongoing development of a responsive regulatory framework. Overall, this white paper presents an initial standardised approach that holds promise for effectively managing AI risks while concurrently promoting collaborative engagement among governmental bodies, regulatory authorities, industry stakeholders, and civil society.

However, notwithstanding these advantages and potential contributions, certain limitations are common to inaugural documents addressing phenomena as complex as AI. Firstly, while the white paper offers extensive commentary on AI risks, its overarching thematic orientation predominantly centres on promoting AI through “soft law” and deregulation. The white paper appears to support AI development with various flexibilities rather than impose stringent policies to mitigate AI risks, raising questions about where the balance truly lies. The “soft law” mechanism hinges primarily on voluntary compliance and commitment. Without legal force, there is a risk that firms may not fully adhere to their commitments or may implement them only partially.

Ambiguity is another critical issue with the “soft law” mechanism. The framework proposed in the white paper lacks detailed regulatory provisions. While the document espouses an “innovative approach” with promising prospects, it leaves industries and individuals to speculate about the actions required of them, raising the potential for inconsistencies in practical implementation and adoption. Firms lack a systematic, step-by-step process and precise mechanisms for navigating the various developmental stages. Crafting stringent guidelines for AI poses considerable challenges, yet such guidelines must be implemented with clarity and rigour to complement the innovative approach effectively.

A further concern is that the iterative and proportionate approach advocated may inadvertently lead to “regulation lag,” whereby regulatory responses are triggered only in the wake of significant AI-related losses or harms, rather than being proactive. This underscores the need for a clear distinction between leading and lagging regulatory regimes, with leading regulations anticipating potential AI risks and establishing regulatory guidelines proactively.

Acknowledging the notable potential and inherent constraints outlined in the AI white paper, we have identified several implications for innovation in financial regulation. The deployment of AI holds promise in revolutionising various facets of financial regulation, including bolstering risk management and ensuring regulatory compliance. The innovative approach could offer certain advantages to firms such as flexibility, cooperation, and collaboration among stakeholders to address complicated cases.

As discussed above, to make financial regulation effective, government authorities may consider revising and developing several key points. Given the opaque nature of AI-generated outcomes, it is imperative to apply and develop advanced techniques, such as Explainable AI (XAI), to support decision-making processes and mitigate latent risks. Additionally, while regulators may opt for an iterative approach to rule-setting to accommodate contextual nuances, it is imperative to establish robust and transparent ethical guidelines to govern AI adoption responsibly. Such guidelines, categorised as “leading” regulations, should be developed in detail and collaboratively, engaging industry stakeholders, academic experts, and civil society, to ensure alignment with societal values and mitigate potential adverse impacts. Furthermore, it is essential to establish unequivocal “hard laws” for firms, with legal consequences for non-compliance. These legal instruments are valuable supplements to the innovative “soft laws” and contribute to maintaining equilibrium within the market.


About the author

Daniel Dao is a Research Associate at the Financial Regulation Innovation Lab (FRIL), University of Strathclyde. He is also a Doctoral Researcher in Fintech at the Centre for Financial and Corporate Integrity, Coventry University, where his research focuses on fintech (crowdfunding), sustainable finance, and entrepreneurial finance. He also works as an Economic Consultant at the World Bank Group’s Washington DC headquarters, where he has contributed to various policy publications and reports, including the World Development Report 2024, Country Economic Memorandums for Latin American and Caribbean countries, and policy working papers on labour, growth, and policy reforms. He is a CFA Charterholder and an active member of CFA UK. He earned his MBA (2017) in Finance from Bangor University, UK, and his MSc (2022) in Financial Engineering from WorldQuant University, US. He has shown a strong commitment to international development and high-impact policy research. His proficiency extends to data science techniques and advanced analytics, with a specific focus on artificial intelligence, machine learning, and natural language processing (NLP).


Photo by Markus Winkler: https://www.pexels.com/photo/a-typewriter-with-the-word-ethics-on-it-18510427/

Simplifying Compliance with AI

Season 4, episode 2

Listen to the full episode here.

In this episode, we explore the role of Artificial Intelligence (AI) in streamlining compliance within the financial sector, showcasing the Financial Regulation Innovation Lab. 

We discuss the future of financial compliance, enriched by AI’s capability to automate and innovate. This episode is for anyone interested in the intersection of technology, finance, and regulation, offering insights into the collaborative efforts shaping a more compliant and efficient financial landscape.

Guests: 

  • Antony Brookes – Head of UK Investment Compliance, abrdn
  • Mark Cummins – Professor of Financial Technology at University of Strathclyde
  • Joanne Seagrave – Head of Regulatory Affairs at Tesco Bank

Tesco Bank and Black Professionals Scotland driving diversity and inclusion

Blog written by Fiona Allan, Senior Clubcard Proposition Manager at Tesco Bank


Through our great collaboration with Fintech Scotland, we received an introduction to Black Professionals Scotland and over time have built up a strong relationship where we have been able to grow our participation with their excellent internship programme.

As a business, we are passionate about increasing the diversity of our workforce and making sure we can support those from under-represented backgrounds. In our most recent intake, we welcomed 16 interns from Black Professionals Scotland to join us for our 12-week internship programme.

Tawa joined us in October last year as an Innovation & Loyalty intern. She hit the ground running, having finished her Master’s degree at Robert Gordon University in Aberdeen just days before starting with us! This was the first time we had welcomed an intern to our team, and we were excited to get Tawa involved in the work we lead on proposition development and Clubcard.

When designing the intern experience, it was really important that we focused on giving our intern the most breadth in terms of their experience and visibility right across the Tesco group. Tawa’s project focus was developing Clubcard propositions, specifically looking at how we bring the best of Clubcard to our travel propositions.

During Tawa’s first couple of weeks, I set her up with induction meetings with colleagues from across the business. As she had never worked in Financial Services before, I was keen for her to build a solid foundation with an understanding of our products, how they worked, and our relationship with the wider Tesco group. Tawa found these initial meetings extremely valuable and continued to build these positive relationships with her stakeholders throughout her internship.

Personal development is something we’re very passionate about here at Tesco Bank, and during the 12 weeks we had together, I was keen to do everything I could to help her build a clear focus on her development. We found Tawa a mentor to help support her and offer some guidance on navigating the next steps in her career and the world of Tesco. We set three core focus areas for development and supported her to build her skills in presenting, storytelling, and stakeholder management.

Tawa was based in Aberdeen, so we agreed that she would commute to Edinburgh 1-2 days per week to get face time with the team and work remotely for the rest of the week. This time in the office was extremely valuable for Tawa to build relationships and spend time with the other interns. In addition to her time working in the Edinburgh office, Tawa also made time to attend multiple industry events, including a FinTech Scotland conference, a day spent with our Customer Service teams in Glasgow, and networking events organised by Black Professionals Scotland.

Mid-way through the internship, I organised a trip down to Tesco HQ in Welwyn Garden City. Although her internship was with the Bank, I wanted Tawa to have the opportunity to see and experience as much of the wider Tesco business as possible. This trip gave Tawa the opportunity to step out of the world of finance and into the world of food, where she met colleagues working in the wider Clubcard team and even had time for a tour of the Tesco innovation hub, Tesco labs.

Having Tawa in the team for 12 weeks was hugely valuable, not only was she a pleasure to work with, but she was also a valued member of the team who brought an incredibly insightful outside perspective. She challenged and expanded our thinking, while giving a clear recommendation for her project on future Clubcard travel propositions.

It was a pleasure to watch Tawa develop and grow in confidence throughout her internship and I know she’ll go on to be a huge success in whatever she does. Tawa has now successfully secured a working Visa for the UK and is looking for permanent jobs. She knows she has allies at Tesco Bank that she can call on, and a mentor in me who will support her in any way I can.

I wouldn’t hesitate to work with Black Professionals Scotland again to welcome another intern to our team and offer more opportunities to diversify the workforce within the FinTech industry. Thanks again to FinTech Scotland for the continued strong collaboration that enables us to make these powerful connections in the industry.

Generative AI in the Context of UK/EU Regulation

The UK Government released its AI White Paper on March 29, 2023, outlining its plans for overseeing the implementation of artificial intelligence (AI) in the United Kingdom. The White Paper is a follow-up to the AI Regulation Policy Paper, which outlined the UK Government’s vision for a future AI regulatory system in the United Kingdom that is supportive of innovation and tailored to specific contexts.

The White Paper presents an alternative methodology for regulating AI in contrast to the EU’s AI Act. Rather than enacting comprehensive legislation to govern AI in the United Kingdom, the UK Government is prioritising the establishment of guidelines for the development and utilisation of AI. Additionally, it aims to enhance the authority of existing regulatory bodies such as the Information Commissioner’s Office (ICO), the Financial Conduct Authority (FCA), and the Competition and Markets Authority (CMA) to provide guidance and oversee the use of AI within their respective domains.

 

What is the key information from the UK White Paper and EU Legal Framework for Artificial Intelligence?

In contrast to the proposed EU AI Act, the AI White Paper does not put forward a comprehensive definition of the terms “AI” or “AI system”. Instead, the White Paper defines AI by two key attributes – adaptivity and autonomy – in order to ensure that the proposed regulatory framework remains relevant and effective in the face of emerging technology. Although the absence of a clear-cut definition of AI may cause legal ambiguity, it will be the responsibility of the various regulators to provide instructions to firms, outlining their requirements on the use of AI within their jurisdiction.

The regulatory framework outlined in the AI White Paper, put forth by the UK Government, encompasses the entirety of the United Kingdom. The White Paper does not suggest altering the territorial scope of current UK legislation pertaining to AI. Essentially, this implies that if the current laws regarding the use of AI have jurisdiction outside national borders (like the UK General Data Protection Regulation), the instructions and enforcement by existing regulatory bodies may also apply outside of the United Kingdom. For a comparison of the UK and EU approaches to AI regulation, see Table 1 in the Appendix.

 

What will be the impact of the EU AI Act on the UK?

Once the EU AI Act is implemented, it will apply to UK firms that utilise AI systems within the EU, make them available on the EU market, or participate in any other activity regulated by the AI Act. These UK organisations must ensure that their AI systems are compliant, or else they may face financial penalties and damage to their brand.

Nevertheless, the AI Act may have broader ramifications, perhaps causing a ripple effect for UK firms only operating within the UK market. The AI Act is expected to establish a global benchmark in this domain, much like the General Data Protection Regulation (GDPR) has done for data protection. There are two possible implications: firstly, UK companies that actively adopt and adhere to the AI Act can distinguish themselves in the UK market, attracting customers who value ethical and responsible AI solutions; secondly, as the AI Act becomes a benchmark, we may witness the UK’s domestic regulations aligning with the AI Act in order to achieve consistency.

Moreover, the EU AI Act is a crucial legislative measure that promotes voluntary adherence, even for companies that may not initially be subject to its provisions (as emphasised in Article 69, which pertains to Codes of Conduct). Consequently, the Act is expected to have an effect on UK companies, especially those that provide AI services in the EU and utilise AI technologies to deliver their services within the region. It is essential to remember that numerous UK enterprises have a market presence that extends well beyond the borders of the UK, therefore making the EU AI Act very pertinent to them.

How does the United Kingdom’s approach compare to those of other countries?

The UK Government is charting its own course in respect of AI implementation, and its objective is to establish regulations for AI that promote innovation while safeguarding the rights and interests of individuals. The AI White Paper incorporates several ideas that align with the European Union’s position on artificial intelligence. As an illustration, the Government plans to establish a novel regulatory framework for AI systems that pose substantial risks. Additionally, it intends to mandate that enterprises perform risk evaluations before utilising AI tools. This requirement is logical, particularly when the AI tool is handling personal data, as data protection by design and default are important tenets of the UK GDPR. Nevertheless, the AI White Paper specifies that these ideas will not be implemented by legislation, at least not at first. Thus, the level of acceptance, and the impact of the voluntary nature of these principles on their adoption by organisations throughout the UK, remain uncertain.

The UK government has expressed its ambition to become a dominant force in the field of AI, taking the lead in establishing global regulations and standards to ensure the safe deployment of AI technology. As part of this endeavour, the UK hosted the AI Safety Summit in the autumn of 2023. Establishing a global agreement on AI regulation would effectively mitigate the negative consequences arising from emerging technology advancements.

Nevertheless, the international community’s history of coordinating regulation does not inspire confidence. The initial legislation on social media, influenced by certain technology companies, granted legal immunity to platforms for hosting content created by users, creating challenges in regulating online harms at a later stage. The same error could be replicated with AI. Although there have been recent demands for the establishment of a counterpart to the Intergovernmental Panel on Climate Change, as expressed by both the Prime Minister and the EU Commission President, reaching a unified agreement on climate change response has proven challenging due to conflicting national interests, similar to those observed in the context of artificial intelligence.

The UK’s present strategy for regulating AI differs from the EU’s proposed method outlined in the EU AI Act. The EU’s proposal involves implementing strict controls and transparency obligations for AI systems deemed “high risk,” while imposing less stringent standards for AI systems considered “limited risk.” The majority of general-purpose AI systems are considered to have a high level of risk. This means that there are specific rules that developers of foundational models must follow, and they are also required to provide detailed reports explaining how the models are trained.

Additionally, there is a collaborative effort between the United States and the European Union to create a set of non-binding rules for companies, known as the “AI Code of Conduct”, in accordance with their shared plan for ensuring reliable and secure AI and mitigating its risks. The code of conduct will be accessible via the Hiroshima Process at the G7 to foster global agreement on AI governance. If this endeavour is successful, the UK’s influence on the formulation of international AI regulations could diminish. However, the publication of the AI Bill of Rights in the USA in October 2022 has the potential to result in a more principles-oriented approach that is in line with the United Kingdom’s.

Despite these potential dangers, the UK is establishing itself as a nation where companies can create cutting-edge AI technology and perhaps become a global leader in this field. This could be beneficial provided that a suitable equilibrium can be achieved between innovation and the secure advancement of systems.

What will be the effect of the EU AI Act on UK companies utilising Generative AI?

Due to the increasing popularity and widespread influence of Generative AI and Large Language Models (LLMs) in 2023, the EU AI Act underwent significant modifications in June 2023, specifically addressing the utilisation of Generative AI.

Foundation models are a category of expansive machine learning models that form the fundamental framework for constructing a diverse array of artificial intelligence applications. These models have undergone pre-training using extensive datasets, which allows them to acquire knowledge and comprehension of intricate patterns, relationships, and structures present in the data. Developers can achieve impressive skills in natural language processing, computer vision, and decision-making by refining foundation models for specific applications or domains. Some examples of foundation models include OpenAI’s ChatGPT, Google’s BERT, and PaLM-2. Foundation models have been essential in the advancement of sophisticated AI applications in diverse industries, owing to their versatility and adaptability.

Companies currently engaged in developing applications that utilise Generative AI Large Language Models (LLMs) and comparable AI technologies, such as ChatGPT, Google Bard, Anthropic’s Claude, and Microsoft’s Bing Chat or ‘Bing AI’, must carefully consider the consequences of the EU AI Act. These companies should be cognisant of the potential ramifications of the Act on their operations and proactively take measures to ensure adherence, irrespective of whether they are specifically targeted by the legislation. By doing this, they can stay at the forefront and sustain a robust presence in the ever-changing AI landscape.

Companies using these AI tools and ‘foundation models’ to provide their services must carefully assess and manage risks in accordance with Article 28b, and adhere to the transparency requirements outlined in Article 52(1).

The primary objective of the EU AI Act is to set a benchmark for AI safety, ethics, and responsible use, while enforcing requirements for transparency and accountability. Article 52(3) of the EU AI Act, as revised in June 2023, imposes specific requirements on the use of Generative AI.

In conclusion

Regulating AI in all its forms is a daunting and pressing task, but an essential one. Amid the widespread and rapidly growing adoption of AI, regulations must guarantee the reliability of AI systems, minimise AI-related risks, and establish mechanisms to hold accountable those involved in developing, deploying, and using these technologies in cases of failure and malpractice.

The UK’s engagement with this challenge is welcome, as is its commitment to advancing AI governance on the global stage. By introducing a context-based, institutionally focused framework for regulating AI, the UK has the chance to establish itself as a thought leader in global AI governance; other jurisdictions might adopt this approach as a standard. The emergence and rapid advancement of Generative AI places heightened responsibility on the UK to assume this thought leadership role.

APPENDIX

Table 1: Comparison between UK and EU: AI White Paper vs Legal Framework for Artificial Intelligence
Aspects compared: regulatory approach and designated regulators.

Approach

UK (AI White Paper):

1. Ensure the safe utilisation of AI: Safety is expected to be a fundamental concern in specific industries, such as healthcare or vital infrastructure. Nevertheless, the Policy Paper recommends that regulators adopt a context-dependent approach in assessing the probability of AI endangering safety, and a proportionate strategy in mitigating this risk.
2. Ensure the technical security and proper functioning of AI: AI systems must possess robust technical security measures and operate according to their intended design and functionality. The Policy Paper proposes that AI systems undergo testing to assess their functionality, resilience, and security, taking into account context and proportionality. Regulators are expected to establish the regulatory requirements for AI systems in their respective sectors or domains.
3. Ensure that AI is adequately transparent and explainable: The Policy Paper recognises that AI systems may not always be easily explicable, and in most cases this is unlikely to present significant risks. Nevertheless, it proposes that in specific high-risk circumstances, decisions that cannot be adequately justified may be disallowed by the appropriate regulatory body; for example, a tribunal decision where the absence of a clear explanation would prevent an individual from exercising their right to contest the ruling.
4. Integrate fairness into AI: The Policy Paper suggests that regulators provide a clear definition of “fairness” within their specific sector or area and specify the circumstances in which fairness should be taken into account (such as in the context of job applications).
5. Assign accountability: The Policy Paper asserts that legal persons must bear responsibility for AI governance, being held accountable for the results generated by AI systems and assuming legal liability. This responsibility applies to an identified or identifiable legal entity.
6. Elucidate pathways for seeking redress or challenging decisions: As stated in the Policy Paper, the use of AI should not eliminate the opportunity for individuals and groups to contest a decision if they would have the right to do so outside the realm of AI. Hence, the UK Government will need regulators to guarantee that the results produced by AI systems can be challenged in “pertinent regulated circumstances”.

EU (AI Act):

1. The European Parliament ratified the EU AI Act on June 14, 2023.
2. European institutions will now commence negotiations to achieve consensus on the final text. Consequently, the earliest possible implementation of the EU AI Act would be 2025, even if it is adopted promptly.
3. Jurisdictional scope: If implemented, the EU AI Act will impose a series of obligations on both providers and deployers of in-scope AI systems that are used within, or have an impact on, the EU, regardless of where they are based.
4. Broadening the ban on specific applications of AI systems to encompass remote biometric identification in publicly accessible areas, as well as emotion recognition and predictive policing technologies.
5. The scope of high-risk AI systems has been extended to encompass systems employed for voter manipulation or used in the recommender systems of very large online platforms (VLOPs).
6. Establishing regulations for providers of foundation models: AI systems trained on extensive data, designed to produce general outputs, and customisable for various specific purposes, including those that drive generative AI systems.
7. Prohibited risks, such as social scoring or systems that exploit the vulnerabilities of specific groups, are considered unacceptable.
8. High-risk activities may be allowed, provided they adhere strictly to requirements for conformity, documentation, data governance, design, and incident-reporting obligations. These encompass systems used in civil aviation security, medical devices, and the management and operation of critical infrastructure.
9. Systems that interact directly with humans, such as chatbots, are allowed as long as they meet specific transparency requirements, including informing end-users that they are dealing with a machine and ensuring that the risk is limited.

Providers of foundation models must additionally:

10. Demonstrate, through suitable design, testing, and analysis, that reasonably foreseeable risks have been correctly identified and mitigated;
11. Use only datasets that adhere to proper data-governance protocols, ensuring that data sources are suitable and potential biases are taken into account;
12. Design and build the model to achieve appropriate levels of performance, predictability, interpretability, corrigibility, safety, and cybersecurity;
13. Produce comprehensive technical documentation and clear instructions for use that enable downstream providers to fulfil their obligations effectively;
14. Implement a quality-management system to guarantee and document adherence to the above obligations;
15. Register the foundation model in an EU database to be maintained by the Commission.

In addition, developers of foundation models used in generative AI systems would be required to disclose that content was generated by AI, ensure the system includes safeguards against generating content that violates EU law, and provide a summary of the copyright-protected training data used.

Regulators

UK: The Policy Paper designated the Information Commissioner’s Office (ICO), Competition and Markets Authority (CMA), Ofcom, Medicines and Healthcare products Regulatory Agency (MHRA), and Equality and Human Rights Commission (EHRC) as the principal regulators in its new system. Note: Although several UK regulators and government agencies have initiated measures to promote the appropriate use of AI, the Policy Paper underscores the hurdles businesses currently face, such as a lack of transparency, redundancies, and inconsistency among regulatory bodies.

EU: 1. National competent authorities for supervising application and implementation. 2. A European Artificial Intelligence Board for coordination and advice.

About the Author

Dr. Hao Zhang is a Research Associate at the Financial Regulation Innovation Lab (FRIL), University of Strathclyde. He holds a PhD in Finance from the University of Glasgow, Adam Smith Business School. Hao previously held the position of Senior Project Manager at the Information Center of the Ministry of Industry and Information Technology (MIIT) of the People’s Republic of China. His recent research has focused on asset pricing, risk management, financial derivatives, and the intersection of technology and data science.

 


Photo by Kelly : https://www.pexels.com/photo/road-sign-with-information-inscription-placed-near-street-on-sunny-day-3861780/

Beyond Quotas: Achieving Authentic Diversity and Inclusion

In today’s rapidly evolving workplace landscape, diversity and inclusion have become more than just buzzwords; they’re integral components of successful business models. However, achieving genuine diversity and inclusion goes far beyond simply meeting quotas. It requires a nuanced approach that values individuals’ unique contributions and fosters inclusive cultures where everyone feels respected and empowered.

Traditionally, quotas have been employed to increase diversity, setting specific targets for recruiting or promoting individuals from underrepresented groups. While quotas may boost diversity statistically, they often fall short in addressing underlying biases and systemic issues. This can lead to tokenism and resentment among employees, undermining the very essence of diversity and inclusion.

To truly embrace diversity and inclusion, organisations must move beyond quotas and adopt thoughtful hiring practices. This approach prioritises quality over quantity, focusing on recruiting high-quality, diverse candidates based on their skills and competencies.

Thoughtful hiring practices involve:

  1. Building a Diverse Talent Pipeline: Actively seeking out talented individuals from diverse backgrounds through partnerships, internships, and mentorship programs.
  2. Aligning Hiring Practices with Organisational Values: Ensuring fairness and inclusivity throughout the recruitment process by mitigating unconscious biases and fostering transparency.
  3. Implementing Blind Auditions and Structured Interviews: Removing identifying information from job applications and using structured interview techniques to reduce bias and ensure fairness.
  4. Developing a Culture of Inclusion: Providing training and education for hiring teams, fostering leadership commitment to diversity, and establishing employee resource groups and mentorship programs.
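The blind-screening idea in step 3 can be sketched in code. This is an illustrative toy only: the field names and record shape below are hypothetical, not taken from any real HR system.

```python
# Minimal "blind screening" step: strip identifying fields from a candidate
# record before it reaches reviewers, so decisions rest on skills alone.
# Field names are hypothetical assumptions for this sketch.

IDENTIFYING_FIELDS = {"name", "gender", "age", "photo_url", "address"}

def blind_candidate(record: dict) -> dict:
    """Return a copy of the record with identifying fields removed."""
    return {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}

candidate = {
    "name": "A. Person",
    "gender": "F",
    "skills": ["Python", "SQL"],
    "years_experience": 6,
}
screened = blind_candidate(candidate)  # keeps only skills and experience
```

In practice this filtering would sit inside an applicant-tracking system, combined with the structured interview scoring the list describes.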

Continuous monitoring and evaluation are essential for measuring the success of diversity and inclusion efforts. Key performance indicators, regular audits, and transparent communication help organisations stay accountable and identify areas for improvement.

Despite the challenges, embracing authentic diversity and inclusion is essential for creating workplaces where all individuals feel valued, empowered, and able to contribute fully. By going beyond quotas and embracing thoughtful hiring practices, organisations can unlock the numerous benefits that diversity and inclusion bring to the workplace and society at large.


Photo by Walls.io : https://www.pexels.com/photo/whiteboard-with-hashtag-company-values-of-walls-io-website-15543047/

Navigating the Tides of Regulatory Risk: Insights from Pinsent Masons’ April 2024 Edition

The April 2024 Edition of the Pinsent Masons’ Regulatory Risk Trends offers a deep dive into the current and emerging issues shaping the worlds of finance, legal compliance, and corporate governance. This comprehensive document, authored by leading experts in the field, serves as a valuable resource for businesses, financial institutions, and legal professionals navigating a complex regulatory environment.

The report opens with thoughts from Jonathan Cavill, Partner at Pinsent Masons, who specialises in contentious regulatory and financial services disputes. His expertise sets the stage for an in-depth exploration of the regulatory challenges and opportunities that lie ahead.

 

Key takeaways

  1. Consumer Protection: The document highlights the Financial Conduct Authority’s (FCA) intensified focus on the fair treatment of customers, especially vulnerable ones. With references to recent reviews and consultations, it stresses the importance of businesses aligning their practices with these standards.
  2. Fair Value and Insurance Sector Scrutiny: The FCA’s call for insurers to act upon the publication of the latest fair value data underscores a shift towards greater transparency and fairness in insurance pricing. The report examines the implications of these demands and offers strategies for compliance.
  3. Market Operations and Monetary Policy: Insights from Colin Read explore the Bank of England’s Sterling Monetary Framework (SMF) and its implications for market stability and liquidity. This section is crucial for understanding central bank reserves and the broader economic landscape.
  4. Advancements in Consumer Investments: Elizabeth Budd delves into the FCA’s strategy for consumer investments, emphasising the new Consumer Duty and its impact on financial advisers and investment firms. This represents a significant shift towards ensuring that consumer interests are at the heart of financial services.
  5. Innovation in Payment Systems: Andrew Barber’s commentary on the latest policy statements from the Bank of England provides a glimpse into how regulatory bodies are supporting payments innovation, particularly through the Real-Time Gross Settlement (RTGS) system. This is vital for fintech companies and traditional financial institutions alike.
  6. Fighting Financial Scams: The document doesn’t shy away from the darker side of finance, addressing the ongoing battle against scams. It presents a detailed analysis of recent cases and regulatory responses, offering valuable lessons and preventive strategies.
  7. Gender Equality: The Financial Services Compensation Scheme’s (FSCS) efforts in promoting gender equality within the financial sector are also covered. This initiative reflects a broader movement towards diversity and inclusion in finance, highlighting the societal values shaping regulatory agendas.

The Pinsent Masons’ April 2024 Edition of Regulatory Risk Trends is a roadmap for navigating the regulatory environment with confidence and foresight, giving you access to:

  • Detailed analyses of regulatory developments and their implications for various sectors.
  • Expert commentary from leading figures in law and finance.
  • Strategic recommendations for staying ahead in a regulatory landscape marked by rapid change and increased scrutiny.

Download the full report.

FCA Consumer Duty and Financial Inclusion: Does Artificial Intelligence Matter?

The Consumer Duty: What does it entail?

The Financial Conduct Authority (FCA) has recently issued the Consumer Duty Principle to guide financial services firms’ conduct in delivering good outcomes to their retail customers. The Consumer Duty is consumer-centric and outcome-oriented with the potential to bring about major transformation in the financial services industry.

The Consumer Duty is supported by three cross-cutting rules that require firms to:

  • Act in good faith towards retail customers.
  • Avoid causing foreseeable harm to retail customers.
  • Enable and support retail customers to pursue their financial objectives.

The Consumer Duty is expected to help firms achieve the following outcomes:

  1. The first outcome relates to products and services, where products and services are designed to meet the needs of consumers.
  2. The second outcome relates to price and value, which inter alia focuses on ensuring that consumers receive fair value for goods and services.
  3. The third outcome seeks to promote consumer understanding through effective communication and information sharing. This is to ensure that consumers understand the nature and characteristics of products and services including potential risks.
  4. The fourth outcome relates to consumer support, where consumers are supported to derive maximum benefits from financial products and services.

 

What are the implications for financial inclusion?

The Consumer Duty has significant implications for financial inclusion. Financial inclusion refers to access to and usage of financial services. While access is the primary objective of financial inclusion, it does not always translate into usage, owing to inhibiting factors such as price, transaction costs, and service quality. Removing the bottlenecks that limit the usage of financial services is therefore indispensable to unlocking the full benefits of financial inclusion.

The Consumer Duty is expected to trigger behavioural changes among financial institutions leading to significant effects on financial inclusion. Financial institutions are compelled to comply with the Consumer Duty and the cross-cutting rules to avoid regulatory risks that may take the form of sanctions. This implies that consumers will now have access to products and services that are fit for purpose, receive fair value for goods and services purchased, have a better understanding of products and services, and receive the support needed to derive maximum benefits from financial services. In this scenario, financial wellbeing will improve leading to a reduction in poverty and income inequality.

In contrast, however, the Consumer Duty can act as a disincentive to innovate, especially when the costs of compliance far outweigh the benefits, and this has significant implications for financial inclusion. Compliance costs may take various forms, including recruiting or training staff and updating or replacing software and systems. To reduce the risk of non-compliance, financial institutions may become reluctant to innovate, thereby limiting consumer choice. Firms may equally withdraw from providing services in areas, and to segments of the population, where the risk of non-compliance is high. In this case, vulnerable groups and areas are likely to be excluded from the provision of financial services (financial exclusion). These behaviours are likely to be subtle and unobserved, making them difficult to detect.

 

Does Artificial Intelligence matter?

Financial institutions are likely to adopt regulatory technologies and Artificial Intelligence (AI) solutions to comply with the Consumer Duty. This is particularly true given that financial firms are in constant search of automation and AI solutions to drive down the costs of regulatory compliance. The deployment of Machine Learning (ML) and AI in Anti-Money Laundering (AML) systems is taking centre stage in the financial services industry. AI-powered AML systems hold great promise for helping firms detect, in real time, suspicious activities that could cause significant harm to consumers.

AI can help financial firms deliver good outcomes to consumers at low cost, especially for those at risk of financial exclusion. AI and ML algorithms can equip firms to onboard customers remotely and conduct remote identification checks, thereby reducing costs. AI-powered solutions available during customer onboarding include, but are not limited to, real-time data access via open Application Programming Interfaces (APIs), image forensics, digital signatures and verification, facial recognition, and video-based KYC (Know Your Customer). Remote onboarding simplifies the account-opening process and reduces the costs and inconvenience of physical travel to bank branches, which can discourage financially excluded consumers from accessing financial services.

AI and Natural Language Processing (NLP) play a significant part in customer-facing functions. The use of chatbots has the prospect of enhancing customer experiences through rapid resolution of queries. Banks, for example, are moving from simple chatbot technologies to more advanced technologies, including Large Language Models and Generative AI, to enhance customer service. These advanced technologies facilitate communication between financial institutions and their customers.

AI and ML technologies also support automated investment and financial advisory services. Robo-advisors use ML algorithms to automatically offer targeted investment or financial advice that would otherwise be provided by human financial advisers. These technologies expand the provision of advisory services to a wide range of consumers, including low-income consumers, in a cost-effective manner.
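To make this concrete, the rule layer inside a robo-adviser can be sketched as a function mapping a client's risk profile to target portfolio weights. The thresholds, asset classes, and formula below are illustrative assumptions, not any provider's actual model; real robo-advisers combine such rules with ML-driven profiling.

```python
# Hypothetical robo-adviser rule layer: map risk tolerance and investment
# horizon to target portfolio weights. All coefficients are invented for
# illustration.

def target_allocation(risk_score: int, horizon_years: int) -> dict:
    """risk_score: 1 (cautious) to 5 (adventurous)."""
    if not 1 <= risk_score <= 5:
        raise ValueError("risk_score must be between 1 and 5")
    # Equity weight grows with risk appetite and horizon, capped at 90%.
    equity = min(0.9, 0.15 * risk_score + 0.01 * horizon_years)
    bonds = round((1 - equity) * 0.8, 2)   # most of the remainder in bonds
    cash = round(1 - equity - bonds, 2)    # small cash buffer
    return {"equity": round(equity, 2), "bonds": bonds, "cash": cash}

# e.g. a moderate client (risk 3) with a 10-year horizon gets a balanced mix
allocation = target_allocation(3, 10)
```

A production system would add constraints (regulatory suitability checks, rebalancing bands) on top of a mapping like this.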

AI and ML technologies offer financial institutions the potential to explore alternative sources of risk scoring using both structured and unstructured consumer data to predict their creditworthiness. The use of alternative sources of risk scoring has the potential to facilitate the provision of credit to consumers with limited credit history and low income.
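A toy version of alternative-data risk scoring can be written as a logistic score over non-traditional features. The features, weights, and bias below are invented for this sketch; a real lender would learn the weights from historical repayment data, for example via logistic regression.

```python
import math

# Illustrative alternative-data credit score. Feature names and weights
# are assumptions for this sketch, not a real scorecard.
WEIGHTS = {
    "utility_payment_rate": 2.5,   # share of utility bills paid on time (0-1)
    "income_stability": 1.5,       # regularity of monthly inflows (0-1)
    "months_of_history": 0.02,     # length of observable transaction history
}
BIAS = -2.0

def repayment_probability(features: dict) -> float:
    """Logistic score: estimated probability of repayment."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

# A "thin-file" applicant with no formal credit history can still score well
# on alternative signals such as reliable utility payments.
thin_file_applicant = {
    "utility_payment_rate": 0.95,
    "income_stability": 0.8,
    "months_of_history": 18,
}
score = repayment_probability(thin_file_applicant)
```

The point of the sketch is that signals outside the credit bureau file can produce a usable probability for applicants a traditional scorecard would reject for lack of history.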

 

What are some of the challenges with AI?

Regulatory technologies such as AI hold great promise for compliance, but their deployment comes with risks that can undermine the gains of financial inclusion. AI models, for example, are prone to embedded bias, especially when the underlying dataset discriminates against certain groups or persons, leading to differentiated pricing and service quality. Bias in credit-scoring algorithms can exclude vulnerable groups or regions from accessing loans; even where such groups do have access, loans are likely to be offered at high interest rates owing to unfair credit scoring. Similarly, bias in the underlying datasets of chatbots and robo-advisors can lead to misinformation and cause significant harm to consumers. Data privacy concerns are also growing, given that any leakage of the dataset used to train AI models can expose sensitive consumer information, and AI and ML technologies are not immune to cyber-attacks and technical glitches that can disrupt their functionality and expose consumers to harm. These examples imply that regulatory technologies and AI models pose a non-compliance risk under the Consumer Duty where they inhibit the delivery of good outcomes to consumers, for example through discrimination or data privacy breaches.
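One simple diagnostic a firm might run on its scoring outputs is an approval-rate comparison across groups (a basic demographic-parity style check). The data, group labels, and any acceptable gap threshold below are illustrative assumptions only.

```python
from collections import defaultdict

# Compute approval rates per group and the largest gap between groups.
# A persistent gap is a signal to investigate the model and its training
# data for embedded bias, not proof of discrimination by itself.

def approval_rate_gap(decisions):
    """decisions: iterable of (group_label, approved_bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
rates, gap = approval_rate_gap(sample)
# Group A approves 2/3 of applicants, group B only 1/3: a gap this large
# would warrant a review of the scoring model.
```

Richer fairness metrics (equalised odds, calibration by group) follow the same pattern of disaggregating model outcomes by protected characteristic.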

What is the way forward?

The Consumer Duty is an important regulatory initiative with enormous potential to deepen financial inclusion and accelerate its positive contribution to development. To achieve the objective of delivering good outcomes to consumers, there is a need for constant engagement between the Financial Conduct Authority and stakeholders in the financial services industry. This will support the timely identification and resolution of challenges that may arise during the implementation of the Consumer Duty.

While regulatory technologies and Artificial Intelligence are likely to play central roles in complying with the Consumer Duty, financial institutions must ensure that these technologies are themselves compliant with it. This can be achieved by addressing the risks inherent in regulatory technologies and AI models. Senior managers of financial institutions are expected to play leading roles in mitigating the risk of non-compliance within the firm, in line with the Senior Managers & Certification Regime.


About the Author(s)

Godsway Korku Tetteh is a Research Associate at the Financial Regulation Innovation Lab, University of Strathclyde (UK). He has several years of experience in financial inclusion research including digital financial inclusion. His research focuses on the impacts of digital technologies and financial innovations (FinTech) on financial inclusion, welfare, and entrepreneurship in developing countries. His current project focuses on the application of technologies such as Artificial Intelligence to drive efficiency in regulatory compliance. Previously, he worked as a Knowledge Exchange Associate with the Financial Technology (FinTech) Cluster at the University of Strathclyde. He also worked with the Cambridge Centre for Alternative Finance at the University of Cambridge to build the capacity of FinTech entrepreneurs, regulators, and policymakers from across the globe on FinTech and Regulatory Innovation. Godsway has a Ph.D. in Economics from Maastricht University (Netherlands) and has published in reputable journals such as Small Business Economics.

Email: godsway.tetteh@strath.ac.uk

LinkedIn: https://www.linkedin.com/in/godsway-k-tetteh-ph-d-83a82048/

Photo by Tara Winstead: https://www.pexels.com/photo/an-artificial-intelligence-illustration-on-the-wall-8849295/