Why Fintech’s Love Affair With Artificial Intelligence Needs a Reality Check

The future belongs to companies that understand that AI should augment human intelligence, not replace it

John Downie, CEO and founder, SteadyPay

January 2, 2025


When I started in fintech, our most sophisticated tool was linear regression. We'd spend hours fine-tuning models, proud of our "advanced" statistical methods. Now, everyone's racing to slap an AI label on their product, and I'm watching history repeat itself, but this time the stakes are higher. In Q2, 30% of all venture dollars invested went into artificial intelligence, dwarfing any other sector. So it's natural that everyone in fintech wants to push their chips in when the pot is this big.

Don't misunderstand me—AI isn't a villain in this story. At SteadyPay, we use it every day. Our risk engine leverages machine learning and LLMs to analyze open banking data. But we succeed because we're clear about what AI can and cannot do.

The future belongs to companies that understand this crucial distinction: AI should augment human intelligence, not replace it. When a customer's financial future hangs in the balance, they deserve better than a probability score from a neural network.

The Inconvenient Truth About AI in Lending

Most fintech companies are doing AI wrong. They're using sledgehammers to crack nuts, deploying complex neural networks where simple decision trees would suffice. 

Lending is based on affordability, and plenty of AI black boxes are great examples of overengineering (and overpaying for) a solution. We've tried many fancy ways to predict future affordability, yet it's the typical, very basic affordability corridors that still produce the most reliable answer. They don't need decoding, and it's easy to explain why a decision was made.
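The corridor idea really is this simple. Here's a minimal sketch in Python; the thresholds and the 15% cap are invented for illustration, not SteadyPay's actual underwriting rules:

```python
# A minimal affordability-corridor check: compute the customer's
# monthly surplus (income minus outgoings) and require the proposed
# repayment to stay inside a fixed corridor of that surplus.
# All thresholds here are illustrative, not real underwriting rules.

def affordable(monthly_income: float, monthly_outgoings: float,
               repayment: float, max_share: float = 0.15) -> bool:
    """Approve only if the repayment fits within the surplus corridor."""
    surplus = monthly_income - monthly_outgoings
    if surplus <= 0:
        return False  # no surplus, no lending
    # Corridor: repayment must not exceed max_share of the surplus.
    return repayment <= surplus * max_share
```

The decision needs no decoding: the repayment either fits inside the corridor or it doesn't, and you can tell the customer exactly which number tipped it.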


When it comes to measuring AI's value, we look at the metrics that matter. Are our customers getting happier or grumpier with AI? How complex is our code getting? How long does it take to get someone onboarded? We're in the business of reducing friction, not creating it. Yes, we use LLMs - the MIT kids tell us we're running the fastest one out there - but even that's too slow for large datasets in real time. Sometimes AI isn’t the answer.

There are also implementation costs to consider. They weigh heavily on every decision we make. As someone obsessed with building a sustainable business with positive unit economics, I've seen it all. Tech teams come to me wanting to spend months optimizing our cloud spend, burning through cash just to save pennies. It's madness. 

That said, AI does deliver in some areas. Take classification, for instance. It's cheaper to maintain now than our old specialist models ever were. But at the end of the day, everything comes down to the business case. AI isn't special - it's just another tool to help you get to an outcome.


Three Uncomfortable Realities About AI in Financial Services

The personalization paradox

Most AI personalization is bloat. Chatbots add verbose, chatty conversations instead of solving customer problems. Want examples? AI subscription-spend optimization sounds great but makes customers feel dumb.

Here's what actually works: someone with a Yahoo or Hotmail email address is statistically a lower fraud risk than someone with Gmail. Then you just need to listen to customer feedback: "I'm bad with money, can you do it for me?" No budget graphs needed, just solutions. Most fintechs still push generic personalization, fitting people into audiences and then recommending products. What customers want is empathy, not condescending coffee-spend advice.

The risk assessment myth

The myth is that more complex AI models mean better risk assessment. Here's what we learned reviewing our machine learning models: switching to neural networks gave us the same results as our existing models. To get any real value, we'd need trillions of transactions. Good news, though: tools exist to fill data-quality gaps without costing the earth.

Another myth is that single "golden" data points predict risk. We found that no single data item beats a large collection optimized through machine learning. Having tested both, I can say with certainty that machine learning does a significantly better job at predicting lending risk than rules-based models.

There’s also the myth that AI removes bias. Take postcode - it's statistically significant, which is exactly why lenders used it to discriminate. The scary part is that there are unknown biases that black box models can learn. That's why you need a team that fundamentally believes in explainable AI - even with 500 features, you need to explain every decision made.
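Explainability at the decision level doesn't have to mean exotic tooling. For a linear scoring model, you can report each feature's signed contribution to the final score. A sketch, with invented feature names and weights:

```python
# Decompose a linear credit score into per-feature contributions,
# so every individual decision can be explained feature by feature.
# The weights and feature names are invented for illustration; note
# the postcode signal is zero-weighted to keep a known proxy for
# discrimination out of the decision.

WEIGHTS = {
    "income_stability": 0.6,
    "postcode_signal": 0.0,    # deliberately excluded from the score
    "missed_payments": -1.2,
    "account_age_years": 0.3,
}

def explain_score(features: dict) -> dict:
    """Return each feature's signed contribution to the total score."""
    return {name: WEIGHTS[name] * value for name, value in features.items()}

def score(features: dict) -> float:
    return sum(explain_score(features).values())
```

With 500 features the table gets longer, but the principle is the same: every decision decomposes into contributions you can show a regulator or a customer.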

The speed vs. accuracy trap

Faster isn't always better. We used to evaluate customer finances as soon as we hit the minimum number of transactions. Now we intentionally wait longer to get a fuller picture. Why? So we can say, hand on heart, that a loan is affordable and right for the customer. The same goes for model updates: we run new models in parallel with the old for extended periods.
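Running models in parallel like this is essentially shadow deployment: the incumbent model's decision goes to the customer, while the challenger's decision is only logged for later comparison. A hedged sketch (the model functions are placeholders, not real models):

```python
# Shadow-run a challenger model alongside the live champion:
# only the champion's decision is acted on, and disagreements
# are logged so the challenger can be evaluated over time.
# Both model functions are placeholder rules for illustration.

disagreements = []

def champion(applicant: dict) -> bool:
    return applicant["surplus"] >= 100  # placeholder rule

def challenger(applicant: dict) -> bool:
    return applicant["surplus"] >= 120  # placeholder rule

def decide(applicant: dict) -> bool:
    live = champion(applicant)
    shadow = challenger(applicant)
    if live != shadow:
        disagreements.append(applicant)  # review before promoting challenger
    return live  # the customer only ever sees the champion's decision
```

Only after an extended period with an acceptably low (and well-understood) disagreement rate would the challenger be promoted.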

There’s also an assumption that humans make lending decisions better than machines. Actually, humans slow things down and add bias. We pride ourselves on fast, algorithmic decisions. While this doesn't suit everyone (think large amounts and complex structures), customers prefer a quick yes/no and knowing nobody's poking about in their finances.

That said, humans monitor everything else: loan defaults, affordability, vulnerable customers, the right term and missed data points. We keep lending decisions automated, usually made within seconds. When we do need more time, it's typically for identity checks. Simple really: provide the info, get the funds.

I’m not immune from AI hype, so it’s useful for me to step back and challenge fintech's love affair with AI. At SteadyPay, we've learned when to embrace AI and when to keep it simple. But knowing what AI can't, or shouldn't, do is only half the battle.

About the Author

John Downie

John Downie, CEO and founder, SteadyPay

John Downie is a serial entrepreneur on a mission to make finance more open and inclusive. He's currently the CEO and founder at SteadyPay.
