
Bias, fairness, and other ethical dimensions in artificial intelligence


Kathleen Blake

Artificial intelligence (AI) is an increasingly important feature of the financial system, with firms expecting their use of AI and machine learning to increase by 3.5 times over the next three years. Bias, fairness, and other ethical considerations are principally associated with conduct and consumer protection. But as set out in DP5/22, AI may create or amplify financial stability and monetary stability risks. I argue that biased data or unethical algorithms could exacerbate financial stability risks, as well as conduct risks.

The term ‘algorithm’ means a set of mathematical instructions that help calculate an answer to a problem. The term ‘model’ means a quantitative method that applies statistical, economic, financial or mathematical theories, techniques and assumptions to process input data into output data. Traditional financial models are usually rules-based, with explicit, fixed parameterisation, whereas AI models are able to learn the rules and alter model parameterisation iteratively.
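
To make the distinction concrete, here is a minimal sketch in Python (my illustration, not drawn from the Bank's work): a rules-based credit decision with fixed, human-chosen thresholds alongside a statistical model whose parameters are learned from data. All of the data, thresholds and feature names are hypothetical.

```python
# Minimal, illustrative sketch: fixed-rule model vs learned-parameter model.
# All data, thresholds and feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical applicant features: [income (£k), existing debt (£k)]
X = np.array([[30, 5], [55, 20], [22, 15], [70, 10], [40, 30], [65, 5]])
y = np.array([1, 1, 0, 1, 0, 1])  # 1 = repaid historically, 0 = defaulted

def rules_based_decision(income, debt):
    # Traditional approach: explicit, fixed parameterisation chosen by humans.
    return int(income >= 35 and debt <= 25)

# AI/ML approach: the parameters (coefficients) are learned from the data
# and would change if the model were retrained on new data.
learned_model = LogisticRegression().fit(X, y)

applicant = np.array([[45, 12]])
print("Rules-based decision:", rules_based_decision(45, 12))
print("Learned-model decision:", int(learned_model.predict(applicant)[0]))
print("Learned parameters:", learned_model.coef_, learned_model.intercept_)
```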

AI models have many benefits in the financial sector and can be used to help consumers better understand their financial habits and the best options available to them, for example by automating actions that best serve customer interests, such as automatically transferring funds across accounts when a customer is facing overdraft fees.

How AI can produce or amplify bias

Pure machine-driven AI models, without human judgement or interventions, can produce biased outputs. This is often the result of biases embedded in training data but can also be a result of the structure of the underlying model. These biases can render model outputs and decisions discriminatory as algorithms can become skewed towards particular groups of people. One example comes from the insurance sector where a healthcare algorithm trained on cost data to predict patients’ health risk score was found to demonstrate algorithmic bias in underrating the severity of Black patients’ health conditions relative to their White counterparts, leading to under-provision of health care to Black patients.

There is significant media interest in the ways that AI models can amplify bias, especially given the rise of generative AI models (deep-learning models that take raw data and generate statistically probable outputs when prompted). Algorithms used by financial and insurance firms generally aim to differentiate between individuals based on an objective assessment of their risk profile. For example, they must be able to provide a reasonable assessment of someone’s risk exposure, such as their creditworthiness or their property’s geographical exposure to floods or other natural catastrophes. A key consideration is whether this is done in an unbiased way.

Bias in AI models can be thought of in two ways: data bias and societal bias. Data bias refers to bias embedded in the data used to train the AI models. Through biased data, AI models can embed societal biases and deploy them at scale. One example of data bias was highlighted by Joy Buolamwini, who found that several facial recognition systems had higher error rates for minority ethnic people, particularly minority women. The models correctly identified White men 99% of the time, but this dropped to 66% for women of colour. This happened because the photos in the training data set were over 75% male and more than 80% White. The research demonstrated that the skewed training data had led the software to perform best on White, male subjects.
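
The kind of disparity Buolamwini measured is typically surfaced by evaluating a model's error rate separately for each demographic group rather than only in aggregate. The sketch below, with entirely made-up labels, predictions and group tags, illustrates how a reasonable-looking overall accuracy can hide a large gap between groups.

```python
# Hypothetical sketch: disaggregated (per-group) error rates.
# Labels, predictions and group tags are made up for illustration.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

overall_accuracy = (y_true == y_pred).mean()
print(f"Overall accuracy: {overall_accuracy:.2f}")

# Aggregate accuracy can hide large gaps between groups.
for g in np.unique(group):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"Group {g} accuracy: {acc:.2f}")
```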

Data bias cannot be prevented by simply removing protected characteristic fields from the input data, because the model may learn underlying correlations that lead to biased decision-making based on non-protected features. In other words, the remaining, non-protected features could act as proxies for protected characteristics. One example comes from the unlawful practice of redlining in insurance and mortgage lending. Redlining is the historic unlawful practice of offering exploitative interest rates to minority ethnic people relative to their White counterparts; it did so by targeting geographic areas that are predominantly non-White and deeming them risky. If firms train their models on biased historical data which includes redlining, there is a risk of such algorithms learning to copy patterns of discriminatory decision-making. Overall, the use of historical data sets – with potentially discriminatory features – could shape decision-making processes and significantly impact the output of AI models in adverse ways.
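
One way to probe for proxies is to test whether the remaining, non-protected features can themselves predict the protected characteristic: if they can, the model still has indirect access to it after the protected field is dropped. The sketch below is a hypothetical illustration of that check, with an invented postcode-derived feature standing in for a proxy; it is not drawn from any real lending data.

```python
# Hypothetical sketch: detecting proxy features for a protected characteristic.
# The 'postcode_area' values and the protected attribute are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000

# Invented protected characteristic (encoded 0/1).
protected = rng.integers(0, 2, size=n)

# Invented non-protected feature strongly correlated with it, e.g. a
# postcode-derived area code shaped by historical segregation.
postcode_area = protected * 3 + rng.integers(0, 2, size=n)

# A genuinely unrelated feature for comparison.
income = rng.normal(50, 10, size=n)

X_proxy = postcode_area.reshape(-1, 1)
X_clean = income.reshape(-1, 1)

proxy_score = cross_val_score(LogisticRegression(), X_proxy, protected, cv=5).mean()
clean_score = cross_val_score(LogisticRegression(), X_clean, protected, cv=5).mean()

print(f"Protected attribute predictable from postcode area: {proxy_score:.2f}")
print(f"Protected attribute predictable from income alone:  {clean_score:.2f}")
# A score well above 0.5 indicates the feature acts as a proxy, so simply
# dropping the protected column does not remove its influence.
```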

Further, a typical AI model will try to maximise overall prediction accuracy for its training data. If a specific group of individuals appears more frequently than others in the training data, the model will optimise for those individuals, because this boosts overall accuracy. For example, statistically trained systems such as Google Translate default to masculine pronouns, as masculine pronouns appear more frequently in their training data. Each translation then becomes part of the training data for the next translation algorithm, so flawed algorithms can amplify biases through feedback loops.
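
The sketch below illustrates this mechanically with invented, imbalanced data: a classifier fitted to maximise overall accuracy performs noticeably better on the over-represented group, and upweighting the under-represented group during training (a standard mitigation shown here for illustration, not something the post prescribes) narrows, though may not close, the gap.

```python
# Hypothetical sketch: a model maximising overall accuracy favours the
# group that is over-represented in its training data. All data invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_major, n_minor = 900, 100

# Majority group: outcome flips around x = 0; minority group: around x = 2.
x_major = rng.normal(0, 1.5, n_major)
x_minor = rng.normal(2, 1.5, n_minor)
y_major = (x_major > 0).astype(int)
y_minor = (x_minor > 2).astype(int)

X = np.concatenate([x_major, x_minor]).reshape(-1, 1)
y = np.concatenate([y_major, y_minor])
group = np.array(["major"] * n_major + ["minor"] * n_minor)

def per_group_accuracy(model):
    preds = model.predict(X)
    return {g: (preds[group == g] == y[group == g]).mean() for g in ("major", "minor")}

# Plain fit: overall accuracy is optimised, which tilts towards the majority.
plain = LogisticRegression().fit(X, y)
print("Unweighted:", per_group_accuracy(plain))

# Upweighting the minority group narrows (but may not close) the gap.
weights = np.where(group == "minor", n_major / n_minor, 1.0)
weighted = LogisticRegression().fit(X, y, sample_weight=weights)
print("Reweighted:", per_group_accuracy(weighted))
```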

Societal bias is where the norms and negative legacies of a society create blind spots. This was seen in the case of a recruitment algorithm developed by Amazon, where female applicants were scored negatively because the algorithm was trained on resumes submitted to the company over a 10-year period, which reflected the male dominance of the industry. The algorithm learnt to recommend candidates who described themselves using verbs more commonly found on male engineers’ resumes, such as ‘executed’ and ‘captured’, and penalised resumes that included the word ‘women’s’, as in ‘women’s chess club captain’. The blind spot to gender bias meant that the initial reviewers and validators of the model outputs did not consider it a possible problem.

Bias and financial stability

It has been acknowledged that AI could impact financial stability in the future. For example, if multiple firms utilise opaque or black box models in their trading strategies it would be difficult for both firms and supervisors to predict how actions directed by models will affect markets. The Financial Stability Board has stated that financial services firms’ use of such models could lead to macro-level risk.

Issues of fairness are a cause for concern in their own right, but they may also exacerbate channels of financial stability risk, since trust is key to financial stability. In periods of low trust or heightened panic, the financial system becomes less stable, which can produce a spectrum of outcomes from market instability to bank runs. De Nederlandsche Bank explains that ‘although fairness is primarily a conduct risk issue, it is vital for society’s trust in the financial sector that financial firms’ AI applications – individually or collectively – do not inadvertently disadvantage certain groups of customers’. Bartlett et al (2019) found that while FinTech algorithms discriminate 40% less than face-to-face lenders, Latinx and African-American borrowers still paid 5.3 basis points more for purchase mortgages and 2.0 basis points more for refinance mortgages than their White counterparts. Disparities such as these demonstrate that while the algorithms may be making headway in addressing discriminatory face-to-face lending decisions, some element of discrimination remains within the AI system, which could erode trust among users, particularly for the groups affected.

Trust matters for the stability of the financial system in aggregate, but also for the stability of individual institutions. For individual financial institutions, the use of biased or unfair AI could lead to reputational and legal risk, risks that many prudential regulators consider in setting capital requirements. The potential impact of AI-related risks may not appear significant in isolation but, in combination with other risks, could impact capital and, ultimately, lead to material losses.

We haven’t seen such an event materialise yet, but the risks are starting to emerge. One example relates to the algorithm used by Apple and Goldman Sachs for decisions on credit card applications, which seemingly offered smaller lines of credit to women than to men. While the model did not use gender as an input, it nonetheless appeared to develop proxies for gender and to make biased lending decisions on the basis of sex. In this case, the New York State Department of Financial Services found no violation of fair lending requirements but noted that the incident ‘brought the issue of equal credit access to the broader public, sparking vigorous public conversation about the effects of sex-based bias on lending, the hazards of using algorithms and machine learning to set credit terms, as well as reliance on credit scores to evaluate the creditworthiness of applicants’. Future events with different outcomes – and possible adverse regulatory findings – could damage the reputation of firms employing such algorithms, as well as harming trust.

Conclusion

It is possible for AI to embed bias and be used in unethical ways in financial services, as well as other sectors. Beyond the inherent issues with bias, fairness, and ethics, this could potentially lead to stability issues for financial institutions or the financial system as a whole. Should the adoption of AI continue and accelerate as anticipated, central banks will have to consider the significance of risks around bias, fairness and other ethical issues in determining whether the use of AI poses a threat to financial stability, and how such risks should be managed.


Kathleen Blake works in the Bank’s Fintech Hub.

If you want to get in touch, please email us at bankunderground@bankofengland.co.uk or leave a comment below.

Comments will only appear once approved by a moderator, and are only published where a full name is supplied. Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.
