Tesco Bank and Black Professionals Scotland: driving diversity and inclusion

Blog written by Fiona Allan, Senior Clubcard Proposition Manager at Tesco Bank


Through our great collaboration with FinTech Scotland, we received an introduction to Black Professionals Scotland, and over time we have built a strong relationship that has allowed us to grow our participation in their excellent internship programme.

As a business, we are passionate about increasing the diversity of our workforce and making sure we can support those from under-represented backgrounds. In our most recent intake, we welcomed 16 interns from Black Professionals Scotland to join us for our 12-week internship programme.

Tawa joined us in October last year as an Innovation & Loyalty intern. She hit the ground running, having finished her Master’s degree at Robert Gordon University in Aberdeen just days before starting with us! This was the first time we had welcomed an intern to our team, and we were excited to get Tawa involved in the work we lead on proposition development and Clubcard.

When designing the intern experience, it was really important that we gave our intern as much breadth of experience and visibility as possible right across the Tesco group. Tawa’s project focus was developing Clubcard propositions, specifically looking at how we bring the best of Clubcard to our travel propositions.

During Tawa’s first couple of weeks, I set her up with induction meetings with colleagues from across the business. As she had never worked in Financial Services before, I was keen for her to build a solid foundation: an understanding of our products, how they worked, and our relationship with the wider Tesco group. Tawa found these initial meetings extremely valuable and continued to build positive relationships with her stakeholders throughout her internship.

Personal development is something we’re very passionate about here at Tesco Bank, and during the 12 weeks we had together, I was keen to do everything I could to help her build a clear focus on her development. We found Tawa a mentor to help support her and offer some guidance on navigating the next steps in her career and the world of Tesco. We set three core focus areas for development and supported her to build her skills in presenting, storytelling, and stakeholder management.

Tawa was based in Aberdeen, so we agreed that she would commute to Edinburgh 1-2 days per week to get face time with the team and work remotely for the rest of the week. This time in the office was extremely valuable for Tawa to build relationships and spend time with the other interns. In addition to her time working in the Edinburgh office, Tawa also made time to attend multiple industry events, including a FinTech Scotland conference, a day spent with our Customer Service teams in Glasgow, and networking events organised by Black Professionals Scotland.

Mid-way through the internship, I organised a trip down to Tesco HQ in Welwyn Garden City. Although her internship was with the Bank, I wanted Tawa to have the opportunity to see and experience as much of the wider Tesco business as possible. This trip gave Tawa the opportunity to step out of the world of finance and into the world of food, where she met colleagues working in the wider Clubcard team and even had time for a tour of the Tesco innovation hub, Tesco Labs.

Having Tawa in the team for 12 weeks was hugely valuable: not only was she a pleasure to work with, but she was also a valued member of the team who brought an incredibly insightful outside perspective. She challenged and expanded our thinking, while giving a clear recommendation for her project on future Clubcard travel propositions.

It was a pleasure to watch Tawa develop and grow in confidence throughout her internship, and I know she’ll go on to be a huge success in whatever she does. Tawa has now successfully secured a UK work visa and is looking for a permanent role. She knows she has allies at Tesco Bank that she can call on, and a mentor in me who will support her in any way I can.

I wouldn’t hesitate to work with Black Professionals Scotland again to welcome another intern to our team and help offer more opportunities to diversify the workforce within the FinTech industry. Thanks again to FinTech Scotland, whose continued strong collaboration made these powerful connections in the industry possible.

Generative AI in the Context of UK/EU Regulation

The UK Government released its AI White Paper on March 29, 2023, outlining its plans for overseeing the implementation of artificial intelligence (AI) in the United Kingdom. The White Paper is a follow-up to the AI Regulation Policy Paper, which outlined the UK Government’s vision for a future AI regulatory system in the United Kingdom that is supportive of innovation and tailored to specific contexts.

The White Paper presents an alternative methodology for regulating AI in contrast to the EU’s AI Act. Rather than enacting comprehensive legislation to govern AI in the United Kingdom, the UK Government is prioritising the establishment of guidelines for the development and utilisation of AI. Additionally, it aims to enhance the authority of existing regulatory bodies such as the Information Commissioner’s Office (ICO), the Financial Conduct Authority (FCA), and the Competition and Markets Authority (CMA) to provide guidance and oversee the use of AI within their respective domains.

 

What is the key information from the UK White Paper and EU Legal Framework for Artificial Intelligence?

In contrast to the proposed EU AI Act, the AI White Paper does not set out a comprehensive definition of the terms “AI” or “AI system”. Instead, the White Paper characterises AI by two key attributes, adaptivity and autonomy, in order to ensure that the proposed regulatory framework remains relevant and effective in the face of emerging technology. Although the absence of a clear-cut definition of AI may cause legal ambiguity, it will be the responsibility of the various regulators to provide guidance to firms, outlining their requirements on the use of AI within their jurisdiction.

The regulatory framework outlined in the AI White Paper, put forth by the UK Government, encompasses the entirety of the United Kingdom. The White Paper does not suggest altering the territorial scope of current UK legislation pertaining to AI. Essentially, this implies that if the current laws regarding the use of AI have jurisdiction outside national borders (like the UK General Data Protection Regulation), the instructions and enforcement by existing regulatory bodies may also apply outside of the United Kingdom. For a comparison of the UK and EU approaches to AI regulation, see Table 1 in the Appendix.

 

What will be the impact of the EU AI Act on the UK?

Once the EU AI Act is implemented, it will apply to UK firms that utilise AI systems within the EU, make them available on the EU market, or participate in any other activity regulated by the Act. These UK organisations must ensure that their AI systems are compliant, or else they may face financial penalties and damage to their brand.

Nevertheless, the AI Act may have broader ramifications, perhaps causing a ripple effect for UK firms only operating within the UK market. The AI Act is expected to establish a global benchmark in this domain, much like the General Data Protection Regulation (GDPR) has done for data protection. There are two possible implications: firstly, UK companies that actively adopt and adhere to the AI Act can distinguish themselves in the UK market, attracting customers who value ethical and responsible AI solutions; secondly, as the AI Act becomes a benchmark, we may witness the UK’s domestic regulations aligning with the AI Act in order to achieve consistency.

Moreover, the EU AI Act is a crucial legislative measure that promotes voluntary adherence, even for companies that may not initially be subject to its provisions (as emphasised in Article 69, which pertains to Codes of Conduct). Consequently, the Act is expected to have an effect on UK companies, especially those that provide AI services in the EU and utilise AI technologies to deliver their services within the region. It is essential to remember that numerous UK enterprises have a market presence that extends well beyond the borders of the UK, therefore making the EU AI Act very pertinent to them.

How does the United Kingdom’s approach compare to those of other countries?

The UK Government is charting its own course on AI regulation; its objective is to establish rules for AI that promote innovation while safeguarding the rights and interests of individuals. The AI White Paper incorporates several ideas that align with the European Union’s position on artificial intelligence. As an illustration, the Government plans to establish a novel regulatory framework for AI systems that pose substantial risks, and intends to mandate that enterprises perform risk evaluations before utilising AI tools. This requirement is logical, particularly where an AI tool handles personal data, as data protection by design and by default are important tenets of the UK GDPR. Nevertheless, the AI White Paper specifies that these ideas will not be implemented through legislation, at least not at first. Thus, the level of acceptance, and the impact of the voluntary nature of these principles on their adoption by organisations throughout the UK, remain uncertain.

The UK government has expressed its ambition to become a dominant force in the field of AI, taking the lead in establishing global regulations and standards to ensure the safe deployment of AI technology. As part of this endeavour, the UK hosted the AI Safety Summit in the autumn of 2023. A global agreement on AI regulation would help mitigate the negative consequences arising from emerging technology advancements.

Nevertheless, the international community’s record of coordinating regulation does not instil confidence. The initial legislation on social media, influenced by certain technology companies, granted platforms legal immunity for hosting user-generated content, creating challenges in regulating online harms at a later stage. This error could be repeated with AI. Although there have been recent calls, from both the Prime Minister and the EU Commission President, for an AI counterpart to the Intergovernmental Panel on Climate Change, reaching a unified agreement on climate change has itself proven challenging due to conflicting national interests, similar to those observed in the context of artificial intelligence.

The UK’s present strategy for regulating AI differs from the EU’s proposed method outlined in the EU AI Act. The EU’s proposal involves implementing strict controls and transparency obligations for AI systems deemed “high risk,” while imposing less stringent standards for AI systems considered “limited risk.” The majority of general-purpose AI systems are considered to have a high level of risk. This means that there are specific rules that developers of foundational models must follow, and they are also required to provide detailed reports explaining how the models are trained.

Additionally, there exists a collaborative effort between the United States and the European Union to create a collection of non-binding regulations for companies, known as the “AI Code of Conduct,” in accordance with their shared plan for ensuring reliable and secure AI and mitigating any risks. The code of conduct will be put forward via the Hiroshima Process at the G7 to foster global agreement on AI governance. If this endeavour is successful, the UK’s influence on the formulation of international AI regulations may diminish. However, the publication of the Blueprint for an AI Bill of Rights in the USA in October 2022 has the potential to result in a more principles-oriented approach that is in line with the United Kingdom’s.

Despite these potential dangers, the UK is establishing itself as a nation where companies can create cutting-edge AI technology and perhaps become a global leader in this field. This could be beneficial provided that a suitable equilibrium can be achieved between innovation and the secure advancement of systems.

What will be the effect of the EU AI Act on UK companies utilising Generative AI?

Due to the increasing popularity and widespread influence of Generative AI and Large Language Models (LLMs) in 2023, the EU AI Act underwent significant modifications in June 2023, specifically addressing the utilisation of Generative AI.

Foundation models are a category of expansive machine learning models that form the fundamental framework for constructing a diverse array of artificial intelligence applications. These models have undergone pre-training using extensive datasets, which allows them to acquire knowledge and comprehension of intricate patterns, relationships, and structures present in the data. Developers can achieve impressive skills in natural language processing, computer vision, and decision-making by refining foundation models for specific applications or domains. Some examples of foundation models include OpenAI’s GPT series (which underpins ChatGPT) and Google’s BERT and PaLM 2. Foundation models have been essential in the advancement of sophisticated AI applications in diverse industries, owing to their versatility and adaptability.

Companies now engaged in the development of apps utilising Generative AI Large Language Models (LLMs) and comparable AI technologies, such as ChatGPT, Google Bard, Anthropic’s Claude, and Microsoft’s Bing Chat or ‘Bing AI’, must carefully consider the consequences of the EU AI Act. These companies should be cognisant of the potential ramifications of the Act on their operations and proactively take measures to ensure adherence, irrespective of whether they are specifically targeted by the legislation. By doing this, they can remain at the forefront and sustain a robust presence in the ever-changing AI landscape.

Companies utilising these AI tools and ‘foundation models’ to provide their services must carefully assess and handle risks in accordance with Article 28b, and adhere to the transparency requirements outlined in Article 52 (1).

The primary objective of the EU AI Act is to establish a benchmark for ensuring AI safety, ethics, and responsible utilisation, while also enforcing requirements for openness and responsibility. Article 52 (3) of the EU AI Act, as revised in June 2023, establishes certain requirements on the utilisation of Generative AI.

In conclusion

Regulating AI in all its forms is a daunting and pressing task, but an essential one. Amidst the prevalent and rapidly increasing acceptance of AI, regulations must guarantee the reliability of AI systems, minimise AI-related risks, and establish mechanisms to hold accountable the individuals involved in the development, deployment, and utilisation of these technologies in case of failures and malpractice.

The UK’s involvement in this challenge is appreciated, as is its commitment to advancing the goal of AI governance on the global stage. The UK has the chance to establish itself as a thought leader in global AI governance by introducing a context-based, institutionally focused framework for regulating AI. This approach might potentially be adopted by other global jurisdictions as a standard. The emergence and rapid advancement of Generative AI places heightened responsibility on the UK to assume this thought leadership role.

APPENDIX

Table 1: Comparison between UK and EU: AI White Paper vs Legal Framework for Artificial Intelligence
UK: AI White Paper

Approach:

  1. Ensure the safe utilisation of AI: Safety is expected to be a fundamental concern in specific industries, such as healthcare or vital infrastructure. Nevertheless, the Policy Paper recommends that regulators adopt a context-dependent approach in assessing the probability of AI endangering safety, and a proportional strategy in mitigating this risk.
  2. Ensure the technical security and proper functioning of AI: AI systems must possess robust technical security measures and operate according to their intended design and functionality. The Policy Paper proposes that AI systems undergo testing to assess their functionality, resilience, and security, taking into account the specific context and proportionality considerations. Regulators are expected to establish the regulatory requirements for AI systems in their respective sectors or domains.
  3. Ensure that AI is adequately transparent and explainable: The Policy Paper recognises that AI systems may not always be easily explicable, and in most cases this is unlikely to present significant risks. Nevertheless, it proposes that in specific high-risk circumstances, decisions that cannot be adequately justified may be disallowed by the appropriate regulatory body. This could include situations such as a tribunal decision where the absence of a clear explanation would prevent an individual from exercising their right to contest the ruling.
  4. Integrate fairness into AI: The Policy Paper suggests that regulators provide a clear definition of “fairness” within their specific sector or area and specify the circumstances in which fairness should be taken into account (such as in the context of job applications).
  5. Establish accountability: The Policy Paper asserts that legal persons must bear responsibility for AI governance, ensuring that they are held accountable for the results generated by AI systems and assume legal liability. This responsibility applies to an identified or identifiable legal entity.
  6. Elucidate pathways for seeking redress or challenging decisions: As stated in the Policy Paper, the use of AI should not eliminate the opportunity for individuals and groups to contest a decision if they have the right to do so outside the realm of AI. Hence, the UK Government will need regulators to guarantee that the results produced by AI systems can be challenged in “pertinent regulated circumstances”.

Regulators: The Policy Paper designated the Information Commissioner’s Office (ICO), Competition and Markets Authority (CMA), Ofcom, Medicines and Healthcare products Regulatory Agency (MHRA), and Equality and Human Rights Commission (EHRC) as the principal regulators in its new system. Note: Although several UK regulators and government agencies have initiated measures to promote the appropriate use of AI, the Policy Paper underscores the existing hurdles encountered by businesses, such as a lack of transparency, redundancies, and inconsistency among regulatory bodies.

EU: Legal Framework for Artificial Intelligence

Approach:

  1. The European Parliament adopted its negotiating position on the EU AI Act on June 14, 2023.
  2. European institutions will now commence negotiations to agree the final text. Consequently, the earliest possible implementation of the EU AI Act would be in 2025, even if it is adopted promptly.
  3. Jurisdictional scope: If implemented, the EU AI Act will enforce a series of responsibilities on both providers and deployers of AI systems that fall within its scope and are used within or have an impact on the EU, regardless of where they are based.
  4. Broadening the ban on specific applications of AI systems to encompass remote biometric identification in publicly accessible areas, as well as emotion recognition and predictive policing technologies.
  5. The scope of high-risk AI systems has been extended to encompass systems employed for voter manipulation or used in recommender systems of very large online platforms (VLOPs).
  6. Establishing regulations for providers of foundation models: AI systems trained on extensive data, designed to produce general outputs, and customisable for various specific purposes, including those that drive generative AI systems.
  7. Prohibited risks, such as social scoring or systems that exploit vulnerabilities of specific groups of individuals, are considered unacceptable.
  8. High-risk activities may be allowed, provided that they adhere strictly to requirements for conformity, documentation, data governance, design, and incident-reporting obligations. These encompass systems used in civil aviation security, medical devices, and the administration and functioning of vital infrastructure.
  9. Systems that directly engage with humans, such as chatbots, are allowed as long as they meet specific transparency requirements, including informing end-users that they are dealing with a machine and ensuring that the risk is limited.

Providers of foundation models must additionally:

  10. Provide evidence through suitable design, testing, and analysis that reasonably foreseeable risks have been correctly identified and mitigated;
  11. Use only datasets that adhere to proper data governance protocols, ensuring that data sources are suitable and potential biases are taken into account;
  12. Design and build the model to attain appropriate levels of performance, predictability, interpretability, corrigibility, safety, and cybersecurity;
  13. Produce comprehensive technical documentation and clear instructions for use that enable downstream providers to fulfil their obligations effectively;
  14. Implement a quality management system to guarantee and record adherence to the above obligations;
  15. Register the foundation model in a European Union database maintained by the Commission.

In addition, providers of foundation models used in generative AI systems would be required to disclose that content was generated by AI, ensure that the system includes safeguards against generating content that violates EU law, and provide a summary of the copyrighted data used for training.

Regulators:

  1. National competent authorities for supervising the application and implementation.
  2. European Artificial Intelligence Board for coordination and advice.

About the Author

Dr. Hao Zhang is a Research Associate at the Financial Regulation Innovation Lab (FRIL), University of Strathclyde. He holds a PhD in Finance from the University of Glasgow, Adam Smith Business School. Hao previously held the position of Senior Project Manager at the Information Center of the Ministry of Industry and Information Technology (MIIT) of the People’s Republic of China. His recent research has focused on asset pricing, risk management, financial derivatives, and the intersection of technology and data science.

 



Beyond Quotas: Achieving Authentic Diversity and Inclusion

In today’s rapidly evolving workplace landscape, diversity and inclusion have become more than just buzzwords; they’re integral components of successful business models. However, achieving genuine diversity and inclusion goes far beyond simply meeting quotas. It requires a nuanced approach that values individuals’ unique contributions and fosters inclusive cultures where everyone feels respected and empowered.

Traditionally, quotas have been employed to increase diversity, setting specific targets for recruiting or promoting individuals from underrepresented groups. While quotas may boost diversity statistically, they often fall short in addressing underlying biases and systemic issues. This can lead to tokenism and resentment among employees, undermining the very essence of diversity and inclusion.

To truly embrace diversity and inclusion, organisations must move beyond quotas and adopt thoughtful hiring practices. This approach prioritises quality over quantity, focusing on recruiting high-quality, diverse candidates based on their skills and competencies.

Thoughtful hiring practices involve:

  1. Building a Diverse Talent Pipeline: Actively seeking out talented individuals from diverse backgrounds through partnerships, internships, and mentorship programs.
  2. Aligning Hiring Practices with Organisational Values: Ensuring fairness and inclusivity throughout the recruitment process by mitigating unconscious biases and fostering transparency.
  3. Implementing Blind Auditions and Structured Interviews: Removing identifying information from job applications and using structured interview techniques to reduce bias and ensure fairness.
  4. Developing a Culture of Inclusion: Providing training and education for hiring teams, fostering leadership commitment to diversity, and establishing employee resource groups and mentorship programs.
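As an illustrative sketch of point 3 above, blinding can be as simple as masking identifying fields before applications reach reviewers. The `Application` structure and field names below are hypothetical, a minimal sketch rather than a real screening system:

```python
# Minimal blind-screening sketch: mask identifying fields so reviewers
# score candidates on skills and experience alone.
from dataclasses import dataclass, field, replace
from typing import List

@dataclass
class Application:
    name: str
    email: str
    university: str
    skills: List[str] = field(default_factory=list)
    experience_years: int = 0

# Fields assumed (for this sketch) to carry identifying information.
IDENTIFYING_FIELDS = {"name": "REDACTED", "email": "REDACTED"}

def blind(application: Application) -> Application:
    """Return a copy of the application with identifying fields masked."""
    return replace(application, **IDENTIFYING_FIELDS)

app = Application(
    name="Jane Doe",
    email="jane@example.com",
    university="Example University",
    skills=["python", "stakeholder management"],
    experience_years=3,
)
blinded = blind(app)
print(blinded.name)    # REDACTED
print(blinded.skills)  # skills are preserved for assessment
```

Which fields count as identifying is itself a policy decision; some organisations also mask university names to reduce institutional bias, which in this sketch would just mean adding another entry to the mask.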

Continuous monitoring and evaluation are essential for measuring the success of diversity and inclusion efforts. Key performance indicators, regular audits, and transparent communication help organisations stay accountable and identify areas for improvement.

Despite the challenges, embracing authentic diversity and inclusion is essential for creating workplaces where all individuals feel valued, empowered, and able to contribute fully. By going beyond quotas and embracing thoughtful hiring practices, organisations can unlock the numerous benefits that diversity and inclusion bring to the workplace and society at large.



Navigating the Tides of Regulatory Risk: Insights from Pinsent Masons’ April 2024 Edition

The April 2024 Edition of the Pinsent Masons’ Regulatory Risk Trends offers a deep dive into the current and emerging issues that are shaping the world of finance, legal compliance, and corporate governance. This comprehensive document, authored by leading experts in the field, serves as a valuable resource for businesses, financial institutions, and legal professionals navigating the complex regulatory environment.

The report opens with thoughts from Jonathan Cavill, Partner at Pinsent Masons, who specialises in contentious regulatory and financial services disputes. His expertise sets the stage for an in-depth exploration of the regulatory challenges and opportunities that lie ahead.

 

Key takeaways

  1. Consumer Protection: The document highlights the Financial Conduct Authority’s (FCA) intensified focus on the fair treatment of customers, especially the vulnerable ones. With references to recent reviews and consultations, it stresses the importance of businesses aligning their practices with these standards.
  2. Fair Value and Insurance Sector Scrutiny: The FCA’s call for insurers to act upon the publication of the latest fair value data underscores a shift towards greater transparency and fairness in insurance pricing. The report examines the implications of these demands and offers strategies for compliance.
  3. Market Operations and Monetary Policy: Insights from Colin Read explore the Bank of England’s Sterling Monetary Framework (SMF) and its implications for market stability and liquidity. This section is crucial for understanding central bank reserves and the broader economic landscape.
  4. Advancements in Consumer Investments: Elizabeth Budd delves into the FCA’s strategy for consumer investments, emphasising the new Consumer Duty and its impact on financial advisers and investment firms. This represents a significant shift towards ensuring that consumer interests are at the heart of financial services.
  5. Innovation in Payment Systems: Andrew Barber’s commentary on the latest policy statements from the Bank of England provides a glimpse into how regulatory bodies are supporting payments innovation, particularly through the Real-Time Gross Settlement (RTGS) system. This is vital for fintech companies and traditional financial institutions alike.
  6. Fighting Financial Scams: The document doesn’t shy away from the darker side of finance, addressing the ongoing battle against scams. It presents a detailed analysis of recent cases and regulatory responses, offering valuable lessons and preventive strategies.
  7. Gender Equality: The Financial Services Compensation Scheme’s (FSCS) efforts in promoting gender equality within the financial sector are also covered. This initiative reflects a broader movement towards diversity and inclusion in finance, highlighting the societal values shaping regulatory agendas.

The Pinsent Masons’ April 2024 Edition of Regulatory Risk Trends is a roadmap for navigating the regulatory environment with confidence and foresight, giving you access to:

  • Detailed analyses of regulatory developments and their implications for various sectors.
  • Expert commentary from leading figures in law and finance.
  • Strategic recommendations for staying ahead in a regulatory landscape marked by rapid change and increased scrutiny.

Download the full report.

FCA Consumer Duty and Financial Inclusion: Does Artificial Intelligence Matter?

The Consumer Duty: What does it entail?

The Financial Conduct Authority (FCA) has recently issued the Consumer Duty Principle to guide financial services firms’ conduct in delivering good outcomes to their retail customers. The Consumer Duty is consumer-centric and outcome-oriented with the potential to bring about major transformation in the financial services industry.

The Consumer Duty is supported by three cross-cutting rules that require firms to:

  • Act in good faith towards retail customers.
  • Avoid causing foreseeable harm to retail customers.
  • Enable and support retail customers to pursue their financial objectives.

The Consumer Duty is expected to help firms achieve the following outcomes:

  1. The first outcome relates to products and services, where products and services are designed to meet the needs of consumers.
  2. The second outcome relates to price and value, which inter alia focuses on ensuring that consumers receive fair value for goods and services.
  3. The third outcome seeks to promote consumer understanding through effective communication and information sharing. This is to ensure that consumers understand the nature and characteristics of products and services including potential risks.
  4. The fourth outcome relates to consumer support, where consumers are supported to derive maximum benefits from financial products and services.

 

What are the implications for financial inclusion?

The Consumer Duty has significant implications for financial inclusion. Financial inclusion refers to access to and usage of financial services. While access is the primary objective of financial inclusion, it does not always translate into usage due to several inhibiting factors, such as price, transaction costs, and service quality. Removing the bottlenecks that limit the usage of financial services is therefore indispensable in unlocking the full benefits of financial inclusion.

The Consumer Duty is expected to trigger behavioural changes among financial institutions leading to significant effects on financial inclusion. Financial institutions are compelled to comply with the Consumer Duty and the cross-cutting rules to avoid regulatory risks that may take the form of sanctions. This implies that consumers will now have access to products and services that are fit for purpose, receive fair value for goods and services purchased, have a better understanding of products and services, and receive the support needed to derive maximum benefits from financial services. In this scenario, financial wellbeing will improve leading to a reduction in poverty and income inequality.

In contrast, however, the Consumer Duty can serve as a disincentive to innovate, especially when the costs of compliance far outweigh the benefits, and this has significant implications for financial inclusion. Compliance costs may come in various forms, including recruitment or training of staff, updating existing software and systems, or purchasing new ones. To reduce the risks of non-compliance, financial institutions may be reluctant to innovate, thereby limiting consumer choice. Firms can equally avoid providing services in areas, and to segments of the population, where the risk of non-compliance is high. In this case, vulnerable groups and areas are likely to be excluded from the provision of financial services (financial exclusion). These aspects of firms’ behaviour are likely to be unobserved and subtle, making them difficult to detect.

 

Does Artificial Intelligence matter?

Financial institutions are likely to adopt regulatory technologies and Artificial Intelligence (AI) solutions to comply with the Consumer Duty. This is particularly true given that financial firms are in constant search of automation and AI solutions to drive down the costs of regulatory compliance. The deployment of Machine Learning (ML) and AI in Anti-Money Laundering (AML) systems is taking centre stage in the financial services industry. AI-powered AML systems hold great promise to help financial services firms detect, in real time, suspicious activities that are likely to cause significant harm to consumers.
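The screening logic behind such systems can be illustrated with a deliberately simplified sketch. Everything below (the function name, thresholds, and the single outlier rule) is an illustrative assumption, not any firm's actual AML logic; deployed systems combine many more signals (counterparties, velocity, geography) and typically trained ML models.

```python
from statistics import mean, stdev

def flag_suspicious(history, amount, z_threshold=3.0, hard_limit=10_000):
    """Flag a transaction if it breaches a hard limit or is a large
    statistical outlier relative to the customer's own history."""
    if amount >= hard_limit:
        return True
    if len(history) < 2:              # not enough data for a baseline
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return (amount - mu) / sigma > z_threshold

# A customer who usually spends ~GBP 50 suddenly sends GBP 5,000
past = [45, 52, 48, 60, 51, 47]
print(flag_suspicious(past, 5_000))   # large outlier against history
print(flag_suspicious(past, 55))      # in line with history
```

The attraction of running such checks in real time is that a transfer can be held for review before funds leave the account, rather than investigated after the harm has occurred.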

AI can help financial firms deliver good outcomes to consumers at low cost, especially to those at risk of financial exclusion. AI and ML algorithms can equip financial firms with the capability to onboard customers remotely and conduct remote identification checks, thereby reducing costs. AI-powered solutions available to financial institutions during customer onboarding include, but are not limited to, real-time data access using open Application Programming Interfaces (APIs), image forensics, digital signatures and verification, facial recognition, and video-based KYC (Know Your Customer). Remote customer onboarding simplifies the account opening process and reduces the costs and inconvenience of physical travel to bank branches, which can discourage financially excluded consumers from accessing financial services.

AI and Natural Language Processing (NLP) play significant roles in customer-facing services. The use of chatbots has the prospect of enhancing customer experience through rapid resolution of queries. Banks, for example, are moving from simple chatbot technologies to more advanced ones, including Large Language Models and Generative AI, to enhance customer service. These advanced technologies facilitate communication between financial institutions and their customers.
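To make the progression concrete, the simplest chatbots route messages by keyword matching, which is the baseline that NLP models and LLMs improve upon. The intents, keywords, and responses below are invented for illustration only:

```python
# A minimal keyword-based intent router -- a deliberately simple
# stand-in for the chatbot technologies described above.
INTENTS = {
    "balance": ["balance", "how much", "funds"],
    "card_lost": ["lost card", "stolen", "block my card"],
    "human": ["agent", "human", "person"],
}

RESPONSES = {
    "balance": "Your current balance is available in the app under 'Accounts'.",
    "card_lost": "I've frozen your card. A replacement is on its way.",
    "human": "Connecting you to an adviser now.",
    None: "Sorry, I didn't catch that. Could you rephrase?",
}

def route(message: str):
    """Return the first intent whose keywords appear in the message."""
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(k in text for k in keywords):
            return intent
    return None

print(RESPONSES[route("I think my card was stolen")])
```

The limitation is obvious: keyword lists cannot handle paraphrase or context, which is precisely why banks are moving to LLM-based systems that understand free-form queries.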

AI and ML technologies also support automated investment and financial advisory services. Robo-advisors use ML algorithms to automatically offer the kind of targeted investment or financial advice traditionally provided by human financial advisors. These technologies expand advisory services to a wide range of consumers, including low-income consumers, in a cost-effective manner.
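At its core, a robo-advisor maps a suitability assessment to a model portfolio. The sketch below shows only that mapping step; the risk bands and asset allocations are purely illustrative assumptions, and real robo-advisors use far richer profiling and portfolio optimisation:

```python
# Hypothetical model portfolios -- the bands and weights are invented
# for illustration, not investment advice.
PORTFOLIOS = {
    "cautious":    {"bonds": 0.70, "equities": 0.20, "cash": 0.10},
    "balanced":    {"bonds": 0.40, "equities": 0.50, "cash": 0.10},
    "adventurous": {"bonds": 0.15, "equities": 0.80, "cash": 0.05},
}

def recommend(risk_score: int) -> dict:
    """Map a 0-10 questionnaire-derived risk score to a model portfolio."""
    if risk_score <= 3:
        band = "cautious"
    elif risk_score <= 7:
        band = "balanced"
    else:
        band = "adventurous"
    return PORTFOLIOS[band]

print(recommend(8))   # high risk tolerance -> equity-heavy portfolio
```

Because the whole pipeline is automated, the marginal cost of advising one more customer is close to zero, which is what makes serving low-income consumers economically viable.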

AI and ML technologies also offer financial institutions the potential to explore alternative sources of risk scoring, using both structured and unstructured consumer data to predict creditworthiness. Alternative risk scoring has the potential to facilitate the provision of credit to consumers with limited credit history and low incomes.
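A toy logistic model shows why alternative data helps thin-file applicants. The features and weights below are invented for illustration (a real model would be estimated from data), but the mechanism is faithful: an alternative signal such as mobile-money activity can substitute for a missing credit history:

```python
from math import exp

def credit_score(income, months_of_history, mobile_money_txns):
    """Toy logistic scoring model blending a traditional feature
    (credit history length) with an alternative one (mobile-money
    activity). Weights are illustrative, not estimated."""
    z = (-4.0
         + 0.00005 * income          # traditional: income
         + 0.10 * months_of_history  # traditional: thin vs thick file
         + 0.02 * mobile_money_txns) # alternative: transaction activity
    return 1 / (1 + exp(-z))         # probability of repayment

# A thin-file applicant can still score well on alternative data
thin_file = credit_score(income=20_000, months_of_history=2,
                         mobile_money_txns=180)
thick_file = credit_score(income=20_000, months_of_history=36,
                          mobile_money_txns=0)
print(round(thin_file, 2), round(thick_file, 2))
```

Under these (assumed) weights, the thin-file applicant with heavy mobile-money usage scores comparably to the thick-file applicant, which is exactly the inclusion effect the paragraph above describes.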

 

What are some of the challenges with AI?

Regulatory technologies such as AI hold great prospects for compliance, but their deployment comes with potential risks that can undermine the gains of financial inclusion. AI models, for example, are prone to embedded bias, especially when the underlying dataset discriminates against certain groups or persons, leading to differentiation in pricing and service quality. Bias in credit scoring algorithms can exclude vulnerable groups or regions from accessing loans, and even where such consumers do have access, loans are likely to be offered at high interest rates owing to unfair credit scoring. Similarly, bias in the underlying datasets of chatbots and robo-advisors can lead to misinformation and cause significant harm to consumers. Data privacy concerns are also increasing, given that any leakage of the dataset used to train AI models can expose sensitive consumer information. Finally, AI and ML technologies are not immune to cyber-attacks and technical glitches, which can disrupt their functionality and expose consumers to harm. These examples imply that regulatory technologies and AI models pose a non-compliance risk under the Consumer Duty, especially if they inhibit the delivery of good outcomes to consumers, for example through discrimination or data privacy breaches.
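One common way to surface the kind of scoring bias described above is to compare outcome rates across groups. The demographic-parity gap below is one standard diagnostic (the sample data are invented); a large gap is evidence worth investigating, not proof of unfair treatment, since groups may differ on legitimate risk factors:

```python
def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs.
    Returns the approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Demographic-parity gap: spread between the highest and lowest
    group-level approval rates."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Invented sample: group A approved 2 of 3, group B approved 1 of 3
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(approval_rates(sample))   # approval rate per group
print(parity_gap(sample))
```

Monitoring a metric like this over time is one way a firm could evidence, under the Consumer Duty, that its algorithmic decisions are not systematically disadvantaging a vulnerable group.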

What is the way forward?

The Consumer Duty is an important regulatory initiative with enormous potential to deepen financial inclusion and accelerate its positive contribution to development. To achieve the objective of delivering good outcomes to consumers, there needs to be constant engagement between the Financial Conduct Authority and stakeholders in the financial services industry. This will support the timely identification and resolution of challenges that may arise during implementation of the Consumer Duty.

While regulatory technologies and Artificial Intelligence are likely to play central roles in complying with the Consumer Duty, financial institutions need to ensure that these technologies are themselves compliant with it. This can be achieved by addressing the risks inherent in regulatory technologies and AI models. Senior managers of financial institutions are expected to play leading roles in mitigating the risk of non-compliance within the firm, in line with the Senior Managers & Certification Regime.


About the Author(s)

Godsway Korku Tetteh is a Research Associate at the Financial Regulation Innovation Lab, University of Strathclyde (UK). He has several years of experience in financial inclusion research including digital financial inclusion. His research focuses on the impacts of digital technologies and financial innovations (FinTech) on financial inclusion, welfare, and entrepreneurship in developing countries. His current project focuses on the application of technologies such as Artificial Intelligence to drive efficiency in regulatory compliance. Previously, he worked as a Knowledge Exchange Associate with the Financial Technology (FinTech) Cluster at the University of Strathclyde. He also worked with the Cambridge Centre for Alternative Finance at the University of Cambridge to build the capacity of FinTech entrepreneurs, regulators, and policymakers from across the globe on FinTech and Regulatory Innovation. Godsway has a Ph.D. in Economics from Maastricht University (Netherlands) and has published in reputable journals such as Small Business Economics.

Email: godsway.tetteh@strath.ac.uk

LinkedIn: https://www.linkedin.com/in/godsway-k-tetteh-ph-d-83a82048/

Photo by Tara Winstead: https://www.pexels.com/photo/an-artificial-intelligence-illustration-on-the-wall-8849295/

Embracing Digital Transformation: Key Insights from Legado’s 2024 Private Client Experience Report

The legal sector is on the brink of a transformative shift, according to the latest findings from Scottish fintech company Legado. Their 2024 report, “The Private Client Experience, Legal Report,” unveils crucial insights into how private client solicitors and their clients interact, highlighting a compelling case for the adoption of digital solutions in the legal sector.

The Digital Demand in Legal Client Experience

The traditional modes of communication, predominantly email and post, remain the solicitors’ go-to methods. However, this seems to be out of sync with client expectations and needs. The report reveals that a staggering 60% of clients encounter challenges in their interactions with solicitors, signalling a clear demand for more streamlined, digital approaches, and 92% of survey participants expressed the need for a secure digital portal, emphasising its potential to significantly enhance the quality of interaction between clients and legal professionals.

The Challenge of Change

Despite the unanimous use of email by solicitors, half of them find managing client communications challenging. This paradox underscores a broader issue within the sector: while there’s recognition of the need for digital transformation, the actual implementation of such solutions lags behind. Only a minority of law firms currently offer client digital portals, even though 90% of clients prefer this method of interaction.

The Path Forward

This gap between current practices and the potential for digital innovation presents an opportunity for significant improvement in how legal services are delivered. By embracing digital platforms, law firms can not only enhance client satisfaction but also achieve greater operational efficiency and security in document exchange and communication.

A Call to Action

Josif Grace, the founder and CEO of Legado, passionately advocates for this digital shift. He views the findings of the report not as a critique but as a blueprint for growth and advancement in the legal sector. According to Grace, the future of legal services will be determined by the industry’s willingness to adapt and innovate, meeting the evolving digital needs of clients in a secure and user-friendly manner.

How transparency, explainability and fairness are being connected under UK and EU approaches to AI regulation

Article written by Kushagra Jain, research associate for the Financial Regulation Innovation Lab and scholar at the Michael Smurfit Graduate Business School, University College Dublin, Dublin, Ireland.


Introduction and global perspective

Rapid and continuing advances in artificial intelligence (AI) have had profound implications, and these will continue to reshape our world. Regulators have responded responsibly and proactively to these paradigm shifts, and have begun to put in place regimes to govern AI use.

Global collaboration is taking place in developing these frameworks and policies. For instance, an AI Safety Summit was held in the UK in November 2023, with participants from 28 nations representing the EU, US, Asia, Africa, and the Middle East. Its aim, through internationally coordinated action, was to mitigate the “frontier” risks of AI development. At the summit, the necessity of collaboratively testing next-generation AI models against critical national security, safety, and societal concerns was identified, alongside the need for a report to build international consensus on both risks and capabilities. Two further summits are planned in the next 6 and 12 months respectively, and are expected to continue these crucial global dialogues, building on the first summit’s key insights and realisations.[1]

The UK’s pro-innovation regulation policy paper similarly emphasises continued work with international partners to deliver interoperability. Further, it hopes to incentivise the responsible design and development of AI applications. The paper aims for the UK’s AI innovation environment to be seen as the most attractive in the world and, to achieve this, seeks to ensure international compatibility between approaches, which would in turn attract international investment and encourage exports (Secretary of State for Science, 2023).[2] Notably, however, different regions have taken distinct approaches to regulation within their jurisdictions.

 

Distinctions between the EU and UK approaches

Broadly, the draft EU Artificial Intelligence Act seeks to codify a risk-based approach within its legislative framework. The framework categorises risks that threaten users’ safety, human safety, and fundamental rights as unacceptable, high, or low. It also institutes a new AI regulator (Yaros et al., 2023, Yaros et al., 2021). In contrast, the UK’s approach is generally iterative, agile, and context-dependent, designed to make responsible innovation easier, with existing regulators responsible for its implementation. All of this is outlined in the AI Regulation Policy Paper and AI White Paper (Secretary of State for Science, 2023, Prinsley et al., 2023, Yaros et al., 2022).

Another key distinction demarcates the two. In the UK’s case, no all-encompassing definition of what “AI” or an “AI system” constitutes exists; AI is instead framed in terms of autonomy and adaptivity. The objective is to ensure the continued relevance of the proposed legislation for new technologies. Legal ambiguity is inherent in such an approach, but individual regulator guidance is expected to resolve this within each regulator’s remit (Prinsley et al., 2023, Yaros et al., 2022).

The EU legislation would apply to all providers of AI systems in the EU. It also applies to users and providers of AI systems whose output is utilised in the EU, regardless of where they are domiciled. It is envisioned as a civil liability regime to redress AI-related problems and risks, while seeking not to unduly constrain or hinder technological development. Maintaining both excellence and trust in AI technology are its dual targets (Yaros et al., 2023, Yaros et al., 2021).

Conversely, the UK regulation applies to the whole of the UK, though it is also territorially relevant beyond the UK in terms of enforcement and guidance applicability. Initially, it is on a non-statutory footing; the rationale is that an immediate statutory duty could create obstacles for innovation and business, and could impede rapid and commensurate responses. During this transitory period, existing regulators’ domain expertise is relied upon for implementation. The eventual intention is to assess whether a statutory duty needs to be imposed, to further strengthen regulator mandates for implementation, and to allow regulators the flexibility to exercise judgment in applying the principles. Over and above these, coordination through central support functions for regulators is envisaged, with innovation-friendly yet effective and proportionate risk responses as the desired outcome. These functions would sit within government but leverage expertise and activities more broadly across the economy. They will be complemented and aligned through voluntary guidance and technical standards, and assurance techniques will similarly be deployed alongside trustworthy AI tools, whose use would be encouraged (Secretary of State for Science, 2023, Prinsley et al., 2023).

 

Shared focus on fairness, transparency and explainability

In spite of varied approaches, both the EU and UK share an emphasis on aspects such as fairness, transparency, and explainability. These in particular are of interest owing to their human, consumer, and fundamental rights implications. For the UK, this emphasis is apparent from two of their white paper’s five broad cross-sectoral principles (Secretary of State for Science, 2023, Prinsley et al., 2023, Yaros et al., 2022):

  • Appropriate transparency and explainability: AI systems should exhibit these traits, with their decision-making processes accessible to relevant parties, to ensure heightened public trust, which non-trivially drives AI adoption. How relevant parties may be encouraged to implement appropriate transparency measures remains to be determined, as the white paper itself acknowledges.
  • Fairness: Broadly, this involves AI systems avoiding unfair discrimination, unfair outcomes, and the undermining of individual and organisational rights. Regulators may need to develop and publish appropriate definitions and illustrations of fairness for AI systems within their domains.

This was also encapsulated in the UK’s earlier AI Regulation Policy Paper as follows (Yaros et al., 2022):

  • Appropriately transparent and explainable AI. AI systems may not always be meaningfully explainable. While such decisions are largely unlikely to pose substantial risk, in specific high-risk cases relevant regulators may prohibit them (e.g., where a lack of explainability would deprive an individual of the right to challenge a tribunal’s decision).
  • Fairness considerations embedded into AI. Regulators should define “fairness” in their domain/sector. Further, they ought to outline the relevance of fairness considerations (e.g., for job applications).

In contrast, for the EU, this takes the following shape as encoded in the legislation (Yaros et al., 2023, Yaros et al., 2021):

  • Direct human-interface systems (such as chatbots) are of limited risk and acceptable if they comply with certain transparency obligations; put differently, end users must be aware that they are interacting with a machine. For foundation models[3], preparing intelligible instructions and extensive technical documentation may fall into the explainability and transparency bucket, enabling downstream providers to comply with their respective obligations.
  • Prohibition of premises such as social scoring or systems exploiting the vulnerabilities of specific groups of persons. These are termed unacceptable risks and can be considered linked to fairness. For foundation models, this may be framed as incorporating only datasets subject to appropriate data governance measures, covering, for example, data suitability and potential biases. Fairness may also take the form of context-specific fundamental rights impact assessments, which would bear in mind the context of use before deploying high-risk AI systems. More dystopian possibilities that might irreparably harm fairness are avoided through outright bans on certain systems, including those involving indiscriminate scraping of databases, biometric categorisation by sensitive characteristics, real-time biometric identification, emotion recognition, face recognition, and predictive policing.

 

Conclusions and future topics

In conclusion, both the EU’s and UK’s paths to regulating AI innovation have merits and demerits. The EU’s approach may be perceived as more bureaucratic: owing to its stricter compliance requirements, anyone to whom it applies must expend significantly more time, cost, and effort to ensure they do not fall foul of regulatory guidelines.

That being said, its stronger ethical grounding protects the best interests of relevant stakeholders. In a similar vein to the GDPR, it may serve as a blueprint for future AI regulations adopted by other countries around the world. Coupled with the EU’s new rules on machinery products, which ensure that new generations of machinery guarantee user and consumer safety, it forms a very comprehensive legal framework (Yaros et al., 2023, Yaros et al., 2021).

On the other hand, the UK’s approach has received acclaim from industry for its pragmatism and measured stance. The UK Science and Technology Framework singles out AI as one of five critical technologies in the government’s strategic vision, and the need for such regulation was highlighted by Sir Patrick Vallance in his Regulation for Innovation review. The AI Regulation Policy Paper and White Paper were penned in response to these factors. The regulation’s ability to learn from experience while flexibly and continuously adopting best practices should catalyse industry innovation (Secretary of State for Science, 2023, Intellectual Property Office, 2023).

Nonetheless, a dark side of innovation may also manifest as a consequence. If not handled rigorously, bad actors proliferating and exploiting the lack of statutory regulatory oversight may cause reputational damage to the UK in so far as AI is concerned. This is especially pertinent in insidious cases, such as those illustrated earlier by the AI systems banned under EU law.

Despite significant differences between the EU’s and UK’s approaches, commonalities exist in pivotal regulatory priorities such as transparency, explainability, and fairness. Blended pro-innovation and risk-based regulatory approaches might achieve the best results for these priorities. The right blend can be ascertained from how efficacious each approach proves in achieving its goals over time, given the context of its application.

Given the systemic importance of the US in shaping the global economic landscape, it may be interesting to explore its approach to AI regulation in a future blog. In particular, investigating how transparency, explainability, and fairness are dealt with there, in contrast with the EU and juxtaposed against the UK, might shed new light on how AI regulation should evolve (Prinsley et al., 2023, Yaros et al., 2022, Yaros et al., 2021) at the dawn of what may one day be called the AI age in human history.

References

Intellectual Property Office (2023, 06 29). Guidance: The government’s code of practice on copyright and AI. Retrieved from: https://www.gov.uk/guidance/the-governments-code-of-practice-on-copyright-and-ai

Prinsley, Mark A. and Yaros, Oliver and Randall, Reece and Hadja, Ondrej and Hepworth, Ellen (2023, 07 07). Mayer Brown: UK’s Approach to Regulating the Use of Artificial Intelligence. Retrieved from: https://www.mayerbrown.com/en/perspectives-events/publications/2023/07/uks-approach-to-regulating-the-use-of-artificial-intelligence

Secretary of State for Science, Innovation & Technology (2023, 08 03). Policy paper: A pro-innovation approach to AI regulation. Retrieved from: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper

Yaros, Oliver and Bruder, Ana Hadnes and Leipzig, Dominique Shelton and Wolf, Livia Crepaldi and Hadja, Ondrej and Peters Salome (2023, 06 16). Mayer Brown: European Parliament Reaches Agreement on its Version of the Proposed EU Artificial Intelligence Act. Retrieved from Mayer Brown: https://www.mayerbrown.com/en/perspectives-events/publications/2023/06/european-parliament-reaches-agreement-on-its-version-of-the-proposed–eu-artificial-intelligence-act

Yaros, Oliver and Bruder, Ana Hadnes and Hadja, Ondrej (2021, 05 05). Mayer Brown: The European Union Proposes New Legal Framework for Artificial Intelligence. Retrieved from Mayer Brown: https://www.mayerbrown.com/en/perspectives-events/publications/2021/05/the-european-union-proposes-new-legal-framework-for-artificial-intelligence

Yaros, Oliver and Hadja, Ondrej and Prinsley, Mark A. and Randall, Reece and Hepworth, Ellen (2022, 08 17). Mayer Brown: UK Government proposes a new approach to regulating artificial intelligence (AI). Retrieved from Mayer Brown: https://www.mayerbrown.com/en/perspectives-events/publications/2022/08/uk-government-proposes-a-new-approach-to-regulating-artificial-intelligence-ai

 

About the author

Kushagra Jain is a Research Associate at the Financial Regulation Innovation Lab (FRIL), University of Strathclyde. His research interests include artificial intelligence, machine learning, financial/regulatory technology, textual analysis, international finance, and risk management, among others. He was awarded doctoral scholarships from the Financial Mathematics and Computation Cluster (FMCC), Science Foundation Ireland (SFI), Higher Education Authority (HEA) and Michael Smurfit Graduate Business School, University College Dublin (UCD). Previously, he worked within wealth management and as a statutory auditor. He completed his doctoral studies in Finance from UCD in 2023, and obtained his MSc in Finance from UCD, his Accounting Technician accreditation from the Institute of Chartered Accountants of India and his undergraduate degree from Bangalore University. He was formerly FMCC Database Management Group Data Manager, Research Assistant, PhD Representative and Teaching Assistant for undergraduate, graduate and MBA programmes.

[1] These details, and further information can be found here, here, and here.

[2] This information and further context can be found here.

[3] AI systems adaptable to a wide range of distinctive tasks, designed for output generality, and trained on broad data at scale.


Photo by Tara Winstead: https://www.pexels.com/photo/robot-pointing-on-a-wall-8386440/

Perspectives on Generative AI in Financial Services

Article written by James Bowden, Mark Cummins, Godsway Tetteh from the University of Strathclyde.

Note: Aligning with the Generative AI focus, segments of this blog were generated by ChatGPT using notes taken on the day capturing the presentations and discussions. The authors edited this generated content accordingly.


 

Presentation Highlights

We are delighted to share some highlights and discussion points from the “Generative AI for Financial Services” event held at the University of Strathclyde in Q4 2023. This event provided an important platform for in-depth discussions and explorations surrounding Generative AI and potential applications in the financial services industry.

The session commenced with Martin Robertson (Chief Commercial Officer) of Level E Research, who offered useful insights into the innovative utilisation of Discriminative AI within Level E’s automated investment strategy offerings. The core emphasis here was on the critical role of explainability in building transparency and trust with investment clients. Martin expertly differentiated between Generative AI and Discriminative AI, sparking thought-provoking discussions regarding the creative potential of Generative AI, especially in the context of content generation.

Following this, our co-organiser, James Bowden (Lecturer in Financial Technology, University of Strathclyde), delved into an extensive exploration of Generative AI applications in the financial services sector. He thoughtfully delineated the associated risks, which included concerns related to data privacy, cybersecurity vulnerabilities, embedded bias, explainability limitations, and implications for financial stability.

Annalisa Riccardi (Senior Lecturer in Mechanical and Aerospace Engineering, University of Strathclyde) then took to the stage to demonstrate a clever use case of Generative AI applied to automate satellite scheduling, with a particular focus on enhancing explainability. Drawing on this discussion, Annalisa then unveiled ongoing research at the University of Strathclyde, conducted in collaboration with Mark Cummins (Professor Financial Technology, University of Strathclyde), James Bowden and Hao Zhang (Research Associate, Financial Regulation Innovation Lab, University of Strathclyde), which is leveraging Generative AI for earnings call analysis.

The engaging presentation session was brought to a close with Blair Brown’s (Senior Knowledge Exchange Fellow in Electronic and Electrical Engineering, University of Strathclyde) insightful overview of AI regulation, standards, and trustworthiness. Drawing from an engineering perspective and its relevance to the financial services sector, Blair emphasised the crucial role of human-AI oversight and interactions, spanning human-before-the-loop, human-in-the-loop, and human-over-the-loop scenarios.

 

Discussion Insights

These thoughtful presentations provided a solid foundation for the rich participant discussions that followed. The exchanges were lively and content-rich, offering valuable insights from both practical and academic perspectives. The key themes covered in these discussions included:

  • Firm-Level Regulatory Responsibility and Compliance:
    • The group emphasised the importance of regulatory compliance in the financial services sector, particularly concerning the use of Generative AI as a nascent technology. As the responsibility for regulatory compliance lies with the financial firm, this may incentivise in-house Generative AI development. The emerging approaches to AI regulation within the UK and the EU in particular provide frameworks within which to consider the responsible and regulatory compliant use of Generative AI within organisations.
  • Data Protection and Zero Tolerance for Breaches:
    • Due to the potential for significant fines, there is zero tolerance for data breaches in financial services. Data protection and consumer protection were key concerns around Generative AI, with different standards and datasets complicating matters. Options around private and localised installations of Generative AI systems need to be considered.
  • Ethics and Accountability:
    • Participants discussed the ethical dimension of AI in finance and the need for accountability. They suggested that CEOs and wider Boards of Directors should be held responsible if ethical breaches occur from the use of Generative AI, and governments might need to force companies to self-regulate with severe penalties for non-compliance.
  • Regulatory Framework and International Challenges:
    • The group highlighted the challenges of creating AI regulation in the EU when a significant portion of the AI market is based in the US, which is particularly the case in respect of Generative AI innovation. The discussion touched on principles-based regulation and the potential shift toward hard regulation, citing the General Data Protection Regulation (GDPR) as an example.
  • Traceability and Auditability:
    • The need for traceability and auditability in AI decision-making was discussed. The presence of an accountable human in the process was emphasised, and there was a concern about the lack of understanding of material risks in Generative AI.

The collective knowledge shared at this event provides important perspectives on the future of Generative AI in the financial services sector. The discussion gives impetus to the University of Strathclyde’s research and innovation ambitions in cutting-edge Generative AI research and industry engagement, while the regulatory considerations that emerged motivate an important direction of travel for the Financial Regulation Innovation Lab’s AI and Compliance priority theme, which focuses on utilising emerging technologies to simplify compliance processes and monitoring.


About the Authors

Professor Mark Cummins is Professor of Financial Technology at the Strathclyde Business School, University of Strathclyde, where he leads the FinTech Cluster as part of the university’s Technology and Innovation Zone leadership and connection into the Glasgow City Innovation District. As part of this role, he is driving collaboration between the FinTech Cluster and the other strategic clusters identified by the University of Strathclyde, in particular the Space, Quantum and Industrial Informatics Clusters. Professor Cummins is the lead investigator at the University of Strathclyde on the newly funded (via UK Government and Glasgow City Council) Financial Regulation Innovation Lab initiative, a novel industry project under the leadership of FinTech Scotland and in collaboration with the University of Glasgow. He previously held the posts of Professor of Finance at the Dublin City University (DCU) Business School and Director of the Irish Institute of Digital Business. Professor Cummins has research interests in the following areas: financial technology (FinTech), with particular interest in Explainable AI and Generative AI; quantitative finance; energy and commodity finance; sustainable finance; model risk management. Professor Cummins has over 50 publication outputs. He has published in leading international discipline journals such as: European Journal of Operational Research; Journal of Money, Credit and Banking; Journal of Banking and Finance; Journal of Financial Markets; Journal of Empirical Finance; and International Review of Financial Analysis. Professor Cummins is co-editor of the open access Palgrave title Disrupting Finance: Fintech and Strategy in the 21st Century. He is also co-author of the Wiley Finance title Handbook of Multi-Commodity Markets and Products: Structuring, Trading and Risk Management. 

Email: mark.cummins@strath.ac.uk

Web: University Profile for Professor Mark Cummins

LinkedIn: Mark Cummins – Professor of Financial Technology – University of Strathclyde | LinkedIn

 

Dr. James Bowden is Lecturer in Financial Technology at the Strathclyde Business School, University of Strathclyde, where he is the programme director of the MSc Financial Technology. Prior to this, he gained experience as a Knowledge Transfer Partnership (KTP) Associate at Bangor Business School, and he has previous industry experience within the global financial index team at FTSE Russell. Dr Bowden’s research focusses on different areas of financial technology (FinTech), and his published work involves the application of text analysis algorithms to financial disclosures, news reporting, and social media. More recently he has been working on projects incorporating audio analysis into existing financial text analysis models, and investigating the use cases of satellite imagery for the purpose of corporate environmental monitoring. Dr Bowden has published in respected international journals, such as the European Journal of Finance, the Journal of Comparative Economics, and the Journal of International Financial Markets, Institutions and Money. He has also contributed chapters to books including “Disruptive Technology in Banking and Finance”, published by Palgrave Macmillan. His commentary on financial events has previously been published in The Conversation UK, the World Economic Forum, MarketWatch and Business Insider, and he has appeared on international TV stations to discuss financial innovations such as non-fungible tokens (NFTs).

Email: james.bowden@strath.ac.uk

Web: University Profile for Dr. James Bowden

LinkedIn: James Bowden – Lecturer in Financial Technology – Strathclyde Business School | LinkedIn

Dr. Godsway Korku Tetteh is a Research Associate at the Financial Regulation Innovation Lab, University of Strathclyde (UK). He has several years of experience in financial inclusion research including digital financial inclusion. His research focuses on the impacts of digital technologies and financial innovations (FinTech) on financial inclusion, welfare, and entrepreneurship in developing countries. His current project focuses on the application of technologies such as Artificial Intelligence to drive efficiency in regulatory compliance. Previously, he worked as a Knowledge Exchange Associate with the Financial Technology (FinTech) Cluster at the University of Strathclyde. He also worked with the Cambridge Centre for Alternative Finance at the University of Cambridge to build the capacity of FinTech entrepreneurs, regulators, and policymakers from across the globe on FinTech and Regulatory Innovation. Godsway has a Ph.D. in Economics from Maastricht University (Netherlands) and has published in reputable journals such as Small Business Economics.

Email: godsway.tetteh@strath.ac.uk

Web: University Profile for Dr. Godsway Tetteh

LinkedIn: Godsway K Tetteh, Ph.D – Research Associate (Financial Regulation Innovation Lab) – University of Strathclyde | LinkedIn

Being a female fintech leader in 2024

As we celebrate International Women’s Day, we spoke to two female leaders from two successful Scottish fintechs. We got their thoughts, opinions and hopes for the future. Recognising progress around inclusion and diversity, and their responsibility as role models, they also offer their thoughts on what the next steps towards a more inclusive sector need to be.


 

Pardeep Cassells – Global Head of Buyside Client Experience at AccessFintech

As a Scottish woman and second-generation immigrant of Indian heritage, I am proud to be an example of intersectionality this International Women’s Day.

Forging a path in the unquestionably male-dominated fintech sector, I am very fortunate to be working for a company where the leadership team advocate for, and support, women in the sector. Knowing that I’m part of a team with higher-than-average female representation – and that the representation covers all role types – is something in which I take great pride.

Having followed a route from investment operations through to financial technology, I’ve had the privilege to be supported by many men and women who ensured my voice was heard and recognised my input whilst giving time and energy without question or condescension.

From the first – mostly male – senior leaders I worked with, who never overlooked the efforts of a vocal and determined young woman, to those who helped me evolve into someone a little more polished and encouraged me as I took what felt like a scary step into the world of fintech, I felt the support of a village around me.

When specifically considering female role models, my mind never hesitates to recall my first Head of Department in Dundee, who came through the ranks in a far less diverse world but carved her own inspiring path, both personally and professionally. However, I now more clearly see that while senior role models and their backing have been key to my progression, the input of my female peers and those less experienced has been just as crucial.

Receiving support not just from those who came ahead of me but from women of my own generation during my time working in fintech has motivated me in many ways. Experiencing this support and camaraderie, not just within my own organisation but from colleagues across other fintechs, banks and investment operations firms, has been transformative.

I am, through all of this, keenly aware that I have a platform; that my platform should be used to open the door for others and to put as much energy as I can muster into lifting up the women around me and the next generation to come, whilst encouraging them to do the same for each other.

This is how the world will change.

This ripple effect of reciprocal support, of creating networks where each voice – regardless of gender or ethnicity – is heard and every person encouraged to achieve their potential in their own way, is something that I see daily, and I am incredibly excited by this momentum.

 

Julia Salmond – Founder and CEO at CienDos

In the rapidly evolving landscape of fintech, the need for a more diverse workforce is becoming increasingly significant, and Emotional Intelligence (EQ) is now viewed as a critical asset. As a mid-career professional who has worked across a number of sectors, I have witnessed firsthand the unique skills females can bring to fast-paced, innovative, and scaling businesses, and I have also experienced a number of challenges female leaders face.

My journey into fintech has been an interesting one. Starting out in ‘big corporate’, initially as a consultant before moving into corporate banking, I gained an insight into the intricacies of regulatory compliance and the importance of leveraging technology. More importantly, I was fortunate to be influenced by a number of female role models, who were pivotal in shaping my early career trajectory. These women taught me about the importance of balancing logic and critical thinking with emotional awareness, how to develop my ‘personal brand’ and build a voice of authority in an historically male-dominated sector.

I took the leap from big corporate into the start-up world about a decade ago and, suddenly, I was the one in the position of influence. Although certainly not limited to women, high emotional intelligence is a trait I have seen in many of my female mentors, and it is something I have focussed on while developing my leadership style. I am not afraid of sharing my strengths, blind spots, and vulnerabilities – and I encourage my team to do the same. Creating a team culture where everybody is trusted to take ownership develops a strong shared vision – a critical component in the success of that first venture, which rapidly scaled and exited to a global media and data business.

As I continue to scale my new venture, CienDos, I am excited about playing a small part in developing the next generation of strong female leaders – a critical ingredient in the recipe for any successful fintech.

An interview with Rachel Curtis, CEO at Inicio.ai, on Morgan Stanley’s Inclusive Ventures Lab. 

We met with Rachel Curtis, CEO and Co-founder of Scottish fintech Inicio.ai, whose company was one of 23 from North America, Europe, the Middle East, and Africa selected by Morgan Stanley for their Inclusive Ventures Lab, an accelerator that helps tech and tech-enabled startups develop and scale their companies, and that aims to advance a more equitable investment landscape. To get there, Rachel had previously won the Scottish pitching event.

With the Inclusive Ventures Lab 2024 now open for applications we wanted to know more about Rachel’s experience of the programme and the impact it had on her business.


Rachel, could you introduce yourself and Inicio.ai?

Of course, I’m the CEO of Inicio.ai, a fintech for good focussed on helping vulnerable people get out of debt.

We have built a solution that removes a key barrier for those struggling with debt – it uses cutting-edge conversational AI to guide consumers through a self-serve affordability assessment. The solution also has huge benefits for organisations as we deliver a more consistent and efficient process, which captures deeper and better-quality data, whilst saving them up to 90% of their agent costs.

How did you first hear about the Morgan Stanley Inclusive Ventures Lab?

We are part of the FinTech Scotland community and Morgan Stanley are one of FinTech Scotland’s strategic partners. FinTech Scotland contacted us directly to make us aware of this opportunity as they thought we would be a good fit for the Lab.

What’s interesting is that if I’d just seen information on the Lab elsewhere, I would probably have disregarded it as my first thought would have been that we were too small for a giant like Morgan Stanley. However, thanks to this warm introduction from FinTech Scotland we decided to apply.

What other attributes of the programme did you find interesting?

It was a combination of a few things. Of course, the prospect of securing a £250,000 investment was a key driver, but the overall programme looked fantastic with an incredible level of support offered to participants.

At the time we had just gone through the University of Edinburgh’s AI accelerator and the experience had been really positive. Therefore, we thought that continuing with Morgan Stanley’s Inclusive Ventures Lab made a lot of sense.

 

Can you tell us more about the Inclusive Ventures Lab?

After winning the Scottish pitch event, we then successfully made it through the Investment Committee screening and a due diligence phase to secure our place.

On the first day of the lab we came together as an EMEA cohort in London at Canary Wharf and were welcomed with our pictures on the TV screens in the lobby and even in the elevators.  They made us feel like movie stars! We were taken to the 11th floor Boardroom where we joined via video with the North America cohort.

During the lab we were given office space on the 10th floor which really helped us punch above our weight, giving us credibility when inviting investors or potential clients to visit us.

When the Lab concluded at the start of February 2024, after 5 intensive months, I had the privilege to attend the Final Demo Day and pitch in front of hundreds of global investors from across the US and EMEA. After the pitching day I even got displayed on the Nasdaq Tower in Times Square, New York. When we shared this on LinkedIn, we got 10 times more responses than any other previous posts. This created a lot of brand awareness for us and gave us fuel for our business.

On a practical level, the support we received was unparalleled. We were given a dedicated team that helped us with information sourcing, presentations, pitching preparation and more. They became a part of the Inicio team and we were supercharged overnight! We also met with our Entrepreneur in Residence every week and they had such broad experience it meant we could cover all aspects of the business in detail.

On top of this I received a lot of coaching around sales, pitching, go to market strategies, investment readiness and more. I was also allocated a Morgan Stanley Managing Director as a mentor, which really demonstrates how committed the company is to the Lab programme.

We were even given free sessions with a top law firm which is not something we could have afforded ourselves.

How was the Inclusive Ventures Lab different from other accelerators you had come across?

The Inclusive Ventures Lab is a pure accelerator in that Morgan Stanley were not looking for a solution for their business but instead were focussed on our success as an investor.

They were very involved, to the point that it felt like we had more than doubled the number of Inicio colleagues overnight. The Lab team even helped us rethink our brand and identity, and whilst the programme is now over, the team is still helping us with our website redesign and other tasks – they continue to support us even after the five months.

Finally, Morgan Stanley was really committed to inclusion and diversity and it was fantastic to work alongside the other 22 entrepreneurs who all came from different backgrounds and different countries.

Could you describe your experience?

I felt hugely supported. Sometimes, being a CEO can be lonely. I have an amazing and very supportive board of directors but it’s obviously not possible for them to be involved in every detail of the daily running of the business, nor should they. The Morgan Stanley team was involved 24/7 to help move things at pace and offered the extra help we needed to accelerate our growth.

The programme was intense and soul searching as it made me rethink so much. They didn’t pull any punches and gave me raw feedback which is what entrepreneurs need to ensure success, but they did it in a super supportive way.

Overall, I feel a lot more confident and better at making fast decisions. Prior to joining the Lab, I felt much weaker in investor pitches. There’s been a real shift now and I have confidence that my business is valuable and I find myself asking more of the questions during pitches as I want the right investors for my company.

What has the impact of going through this programme been for Inicio?

This has helped us secure investment and sales, which has been a real boost for Inicio. This comes from being able to articulate our proposition much more clearly. I hadn’t realised how complex my business could sound before, but the coaching I received helped me to understand how to focus our story for the relevant audience.

Going through the Inclusive Ventures Lab has fast-tracked our business by years, and we’re now seen as much more credible, which has enabled us to open new doors.

Why do you think diversity is important when it comes to financial innovation?

I believe it allows for more diverse thinking and a fresh perspective. I think some of the biggest leaps in innovation are driven by the edge case experiences of those in the minority where there are more challenges. There is much less creativity in the average middle ground and the innovation that comes from solving struggles in the edge cases can then be applied to the whole market so all benefit.

To finish, now that you’ve completed the programme, what advice would you give to other budding diverse entrepreneurs who might be considering applying to the 2024 Morgan Stanley Inclusive Ventures Lab?

My advice to anyone thinking about applying is first, ensure you can give it your full commitment in terms of time and effort so that you are able to get the most out of what is an amazing opportunity, and second… DO IT, DO IT, DO IT!!!