AI revolution in credit decisions must be kept accountable

Published in The Sunday Times on November 9, 2025

I have been grappling with AI adoption recently. Artificial intelligence has become such a pervasive feature of modern life that, in the future, businesses that haven't adopted it to improve operational efficiency, data-driven forecasting and risk management may be deemed uncreditworthy. These are also factors that may be given weight by regulators, who may themselves be using AI.

One use to which AI can be put is credit assessment. In the old days, when you applied for a loan, the decision was made by a human and was based on the three Cs: character, capacity and collateral. Judgmental lending, as it is known, has its limitations: it only works if the lender is experienced, and it is expensive, inconsistent and uneconomic for smaller loans.

Fair Isaac developed the first credit scoring system in the US back in the 1950s by analysing historical loan data to identify factors indicative of creditworthiness. Each loan attribute is allocated a score depending on its correlation with risk of default, and the total score decides the outcome of your loan application. The Fair Isaac Corporation (Fico) score is still in use in the US today. It was a significant improvement on judgmental lending and made widespread consumer credit possible, even for smaller amounts.
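The mechanics of a scorecard can be sketched in a few lines of code. Everything here is invented for illustration: the attributes, the point values and the approval threshold are hypothetical, not Fair Isaac's actual model.

```python
# Toy illustration of a scorecard-style credit decision.
# All attributes, point values and the cut-off are hypothetical.

# Each attribute value maps to points reflecting its historical
# correlation with default risk (higher points = lower risk).
SCORECARD = {
    "years_at_current_job": {"<2": 5, "2-5": 15, ">5": 25},
    "owns_home": {"yes": 30, "no": 10},
    "previous_defaults": {"0": 40, "1": 15, ">1": 0},
}

APPROVAL_THRESHOLD = 70  # hypothetical cut-off

def score_application(application: dict) -> int:
    """Sum the points for each attribute in the application."""
    return sum(SCORECARD[attr][value] for attr, value in application.items())

applicant = {"years_at_current_job": "2-5", "owns_home": "yes", "previous_defaults": "0"}
total = score_application(applicant)
decision = "approve" if total >= APPROVAL_THRESHOLD else "decline"
print(total, decision)  # 15 + 30 + 40 = 85 -> approve
```

The point of the design is consistency: two identical applications always receive the same total, which is what made scoring cheaper and more uniform than judgmental lending.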

Roll on the new millennium and AI has the capability to put both methods in the shade. When a human assesses an application they apply certain criteria dictated by their organisation’s credit policy, but after this the lender is making a judgment. If the application looks more like one they have approved previously, and which turned out to be a good loan, they will probably approve it. If it looks more like one that turned into a non-performing loan, they will probably decline it.

“AI doesn’t always tell us why it returns a result”

A human making such a decision might take six or seven pieces of data into account, and a scoring system might take 30 or 40 factors. An AI program, however, could easily assess each application based on data from every loan the bank has ever made. You can see how AI is going to be a force to be reckoned with in such a data-driven decision. But AI is a murky world, and it doesn’t always tell us why it returns a particular result. With great power comes great responsibility.
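The kind of model described above can be sketched as follows. This is a minimal, self-contained illustration using synthetic loan data and a simple logistic regression fitted by gradient descent; real credit models are far larger, and the features and data-generating rule here are invented for the example.

```python
import math
import random

random.seed(0)

# Synthetic "loan history": (debt_ratio, years_employed, defaulted?).
# The data-generating rule makes higher debt ratios raise default risk.
history = []
for _ in range(2000):
    debt_ratio = random.random()        # 0..1
    years = random.random() * 10        # 0..10 years in employment
    p_default = 1 / (1 + math.exp(-(4 * debt_ratio - 0.3 * years)))
    history.append((debt_ratio, years, 1 if random.random() < p_default else 0))

# Fit a logistic regression with plain batch gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(300):
    gw = [0.0, 0.0]
    gb = 0.0
    for x1, x2, y in history:
        pred = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        err = pred - y
        gw[0] += err * x1
        gw[1] += err * x2
        gb += err
    n = len(history)
    w[0] -= lr * gw[0] / n
    w[1] -= lr * gw[1] / n
    b -= lr * gb / n

def default_probability(debt_ratio: float, years_employed: float) -> float:
    """Predicted probability of default for a new applicant."""
    z = w[0] * debt_ratio + w[1] * years_employed + b
    return 1 / (1 + math.exp(-z))

# The model has 'seen' every loan in the history, not six or seven facts.
print(round(default_probability(0.9, 1.0), 3), round(default_probability(0.1, 8.0), 3))
```

Note what the model does not give you: the fitted weights say which way a prediction leans, but nothing in the output explains itself to the applicant, which is exactly the accountability gap the article describes.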

One of the negatives of credit scoring in the early days was that if the analysis of the bank’s loan book revealed that the applicant’s address was a good indicator of creditworthiness, then this was built into the scorecard. Areas like the Bronx in New York suffered economically due to the application of this rule. As a result the practice of using an address as an attribute, which was called “red-lining”, was restricted.

But it’s easy to see how the power of AI decision-making could be applied to so much more than lending. It’s also easy to see how it could take red-lining to a new level, to the detriment of individuals and society.

Other sectors ripe for automated decision-making include healthcare, recruitment and criminal justice. If, as is almost certain, many decision-making processes adopt AI, could this result in a form of digital red-lining, with important decisions being made by a black box with no accountability? If you donate blood, for example, would you be happy for an AI engine to decide whether your donation can be accepted? What inferences could it draw from your name, your address, or when you go on holiday? Legislation such as the GDPR and the EU AI Act aims to deal with this, but can it ever keep ahead of such fast-moving technology?

They say that the law lags society, meaning that changes in the law usually come about some time after the societal changes that make them necessary, creating a gap between what is on the statute books and what is considered acceptable. This won’t be good enough when it comes to regulating AI.

The effects of red-lining, which was practised originally by government agencies as early as the 1930s and perpetuated by credit scoring, are still evident today. The Bronx is the most underbanked borough in New York City. AI-driven decision making has the potential to become a Pandora’s box of unintended consequences if it is not tightly regulated.

Eoghan Gavigan is a certified financial planner and the owner of Highfield Financial Planning hfp.ie

The material and information contained on this website is for general information purposes only. Neither the writer nor Highfield Financial Planning Ltd makes any warranty as to the completeness, accuracy or reliability of the information, or the suitability or availability of products or services referred to on the website, for any purpose. You should not rely on any information contained on this website as a basis for making any financial, legal, taxation or other decision. The information presented does not include all the considerations which are relevant to the topic discussed, as to do so would render it unreadable. When considering any financial issue you should seek the advice of a suitably qualified adviser.

Warning: If you invest in this product you may lose some or all of the money you invest.

Warning: The value of your investment may go down as well as up.  You may get back less than you invest.

Warning: This product may be affected by changes in currency exchange rates.

Warning: The income you get from this investment may go down as well as up.
