APGDA hosts roundtable on financial services and AI
The All-Party Parliamentary Data Analytics Group (APGDA) was delighted to host a parliamentary roundtable on Tuesday 22nd October 2019 to discuss the latest advances and policy challenges around the use of AI and machine learning in the financial services sector.
The event was led by the APGDA Chair, Daniel Zeichner MP, and brought together a range of senior figures from the banking, regulatory and professional services sectors. Opening presentations were given by Jessica Lennard of Visa Europe, and John Power of Equifax.
Lennard began by noting the exponential rise in the amount of transaction information across the wider economy, with Visa alone now holding 162 billion data elements a year – ranging from single transactions by shoppers to merchant databases covering millions of individual sales. To respond to this shift in information holdings, efforts by organisations such as Visa currently focus on three main priorities:
- Developing new forms of macroeconomic forecasting by looking at long-term trends and short-term fluctuations in transaction data
- Identifying and protecting at-risk and excluded individuals
- Improving service provision for consumers, for example by helping to mitigate fraud
She added that any attempt to improve the regulation of AI and machine learning would require a genuine effort by regulatory authorities to establish a globally coordinated response.
Lennard was followed by John Power of Equifax, who discussed the industry's efforts to improve the provision of data sets that support lending and access to credit. Recent studies by HM Treasury and others have found that over three million people in the UK have a high-cost loan, with seven million being heavily indebted in some way. Power noted that AI and machine learning both offer the potential to improve responsible lending by credit providers. He concluded by saying that more needs to be done to develop regulatory principles for the use of such technologies, noting that of the nineteen countries with a national strategy for AI, only the United States had introduced a legal obligation for AI models in the credit sector to be accurate at the individual level.
Mr Zeichner then opened the discussion to the rest of the audience, noting that financial services would be key in the development of the new national AI strategy and that the APGDA would be looking into their impact as part of the Group’s work on data and technology ethics.
A number of participants discussed the ethical and consumer trade-offs associated with the increased use of machine learning to examine spending patterns and credit sharing. It was noted that the introduction of new regulations in this area by the Financial Conduct Authority has helped to identify more vulnerable customers and allowed banks to make more pro-active interventions when individuals are struggling with debt and access to credit, but that individuals may feel threatened by what they perceive as an invasive attitude to their personal lives. This perception is also related to similar challenges in regulating areas such as the gambling industry, and to the extent to which organisations can make value judgements about the lifestyles of individuals.
The roundtable also considered the implications of algorithmic bias for lending decisions. One individual noted that mortgage lending in the United States had often suffered from institutional prejudices, such as treating white and African American individuals with identical credit ratings differently. It was also noted that an entirely automated, machine-led approach to lending would not be trusted by consumers.
Daniel Zeichner added that one of the key issues raised in Parliament during the introduction of GDPR was how to share examples of best practice from around the world. Jessica Lennard noted that whilst she had personally identified forty different regulatory principles from a range of countries, there was a surprisingly high level of coherence and consistency in how they were applied.
It was also noted that – as with many other areas of data policy – the principles underpinning these technologies remain largely the same as in previous decades. The key difference, one person said, was that a loan that would previously have been approved following a meeting with a bank manager would now be offered by an algorithm, but the overall outcome would otherwise be identical. Stuart Holland of Equifax added that a key challenge was how financial services firms could make better use of consumer data to help improve outcomes. He gave the example that individuals with a thin credit file or limited accessible data would in many respects be helped if credit rating agencies could make use of information such as council tax and rent payments. He added that new regulatory changes were emerging, with a need for more coherence between bodies such as the Information Commissioner's Office, the Centre for Data Ethics and Innovation, and The Alan Turing Institute.
Participants agreed that such changes would be needed to help avoid a "race to the bottom" in financial services regulation, ensuring that vulnerable and excluded individuals are protected from predatory lenders and other bad actors. It was noted that the UK had a key advantage in this respect, with a high level of trust in the digital economy amongst the general public, as well as an extensive level of expertise and funding for regulators.
Despite this, Daniel Zeichner asked if policy makers were right to be anxious about the extent to which machine learning could be used to curtail freedoms and civil liberties. Whilst it was noted that the Chinese Government's controversial Social Credit Programme uses financial history and credit ratings to judge an individual's perceived reliability to the state, participants agreed that the vast majority of data providers in the UK and globally were simply concerned with providing their current services in a more data-driven way.
Jessica Lennard also noted that the Bank of England had expressed concern that the financial services sector was moving too slowly in some respects, and that consumers could feel that they were being left behind in contrast to other areas of the economy, such as healthcare. Adding to this, Tom Niven of Accenture said that investment in the current climate was largely focused on uplifting traditional models to be fit for the big-data era, rather than finding radically new models of consumer-facing infrastructure.
The end of the roundtable continued this line of discussion, noting that AI needed to play a much greater role in improving corporate efficiency and responding to the challenges to capitalism that had emerged since the start of the financial crisis. One participant added that there was a key challenge in turning rhetoric into reality, and that simply asking companies to "make machine learning fairer" would be meaningless unless government and regulators could give a strong indication of what more equitable outcomes would look like.
There was also consensus that individuals would be well served by an improved response from government with regard to the sharing and protection of data. Stuart Holland and John Power both noted the success of various Nordic countries in adopting reliable and trusted methods of online verification. Such schemes had proven far more successful than the British GOV.UK Verify system, as well as the recently scrapped proposals to introduce age verification for adult content online.
In his concluding remarks, Daniel Zeichner said that the key challenge facing legislators was how to best approach making value judgements about choices made by individuals.
Over the coming months, the APGDA will work to identify the major risks and opportunities that AI and machine learning offer the financial services sector and the ways in which it engages with individual citizens. Further events and research will be developed in early 2020.