FairPlay AI’s Kareem Saleh Talks Fairness in AI Lending
Part of Our Money20/20 Interview Series
By John San Filippo
Money20/20 USA, the fintech mega-conference, was once again held in Las Vegas, this year from Oct. 23-26. And once again, Finopotamus was onsite talking with industry leaders about a wide range of topics. These interviews are captured in this series of articles.
Finopotamus spoke with FairPlay AI’s founder and CEO Kareem Saleh about how to ensure artificial intelligence (AI)-based underwriting tools are as unbiased as they claim to be.
Finopotamus: Briefly describe what your company does.
Saleh: We refer to ourselves as the world's first “fairness-as-a-service” company. We set up governance processes and systems that allow you to harness machine learning safely, because left to its own devices, it can pose a threat to the safety and soundness of community financial institutions and the communities they serve.
Finopotamus: Can you expand on that idea?
Saleh: Left to its own devices, AI will focus on populations that it understands very well. But the real need in America lies with thin-file, no-file people. That’s 50 to 60 million Americans who struggle to access the system. And, of course, there are people who perhaps have had some kind of credit event in their past, like a bankruptcy or a foreclosure, and those are all people for whom the data is more likely to be messy, missing or wrong.
When you have data that's messy, missing or wrong, you often hear the phrase “garbage in, garbage out.” These systems are really only as good as the data they're trained on. And the data from the last several years doesn't reflect reality, because there have been massive government interventions into the economy.
I think of AI as a really advanced race car. If somebody was driving a Camry before and now you give them a Formula One car, they need some training and some aptitude to drive a Formula One car. You don't drive it the same way you drive a Camry. Think of AI underwriting similarly. If you're going to have 21st century underwriting, you need 21st century governance and 21st century compliance. One without the other leads to a lot of trouble.
Finopotamus: How does FairPlay AI make that all happen?
Saleh: Most of our customers today are fintechs that sell their loans to credit unions and other FIs. For credit unions to feel comfortable buying those loans, the credit unions want to assure themselves that those loans have been fairly made. Our fairness-as-a-service solution is used by the fintechs to audit the models and produce model validation and fair lending reporting that's easy to understand, so that a credit union can evaluate for themselves whether these loans are fair and whether or not they want to buy them.
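A fair-lending audit of the kind Saleh describes typically starts with simple disparity metrics over a model's decisions. The sketch below is a generic illustration, not FairPlay's actual tooling; the group labels, sample data, and the four-fifths threshold are assumptions for the example. It computes an adverse impact ratio, a metric commonly used in fair-lending analysis:

```python
# Illustrative fairness metric: adverse impact ratio (AIR).
# This is a generic sketch, not FairPlay's methodology; the data
# and group labels are invented for the example.

def adverse_impact_ratio(decisions, groups, protected, control):
    """Approval rate of the protected group divided by that of the
    control group. A ratio below roughly 0.8 (the "four-fifths
    rule") is often treated as a red flag warranting review."""
    def approval_rate(group):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes)
    return approval_rate(protected) / approval_rate(control)

# 1 = loan approved, 0 = denied
decisions = [1, 0, 1, 0, 0, 1, 1, 1, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

air = adverse_impact_ratio(decisions, groups, protected="A", control="B")
print(round(air, 2))  # 0.4 / 0.8 = 0.5, well below the 0.8 rule of thumb
```

A ratio that low would prompt a deeper look at which features are driving the disparity; a real audit would also test multiple protected classes and control for legitimate credit factors.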
Finopotamus: Do you have any interest in providing your solution directly to credit unions?
Saleh: Of course, we do. We've worked with a few directly. In our experience, credit unions are just now starting to think about compliance modernization as part of their broader digital transformations.
Finopotamus: It sounds like compliance modernization is a big topic and you're just taking care of this little piece. How should credit unions address compliance modernization as a bigger issue?
Saleh: Ultimately, you need systems and processes in place across the waterfront of compliance wherever you're going to be using advanced AI systems to make high-stakes decisions. For example, in the fraud space, there are a lot of great fintechs who are trying to help credit unions do better. Marketing is another area where I think you have the potential for digital redlining if you're employing marketing models or marketing vendors that are using AI systems. There's a potential to either exclude certain communities or not penetrate as deeply into low- and moderate-income or minority neighborhoods. There's a range of decisions that get made in the customer journey that need to be made fairly, and there are different compliance providers at different stages who are trying to facilitate that transition.
Finopotamus: One might argue that these thin-file, no-file consumers are more likely to be members of smaller credit unions. What are you doing to make your technology accessible to smaller institutions?
Saleh: There are a couple of things that make our technology more accessible to even small fintechs who are just starting out. First, you don't have to be a data scientist to use our tool. Everything starts from the idea that you shouldn't have to be an underwriting or a compliance expert to understand what's going on with an AI system. People need to make technical tools for non-technical people as AI proliferates throughout the enterprise. This means focusing on user experience with a user interface that is more intuitive.
The second thing is offering a price point with a menu of options that allows people to dip their toe in and get started easily. At many credit unions, technology budgets are constrained. If you see some of the players that are now having some success in the AI underwriting space with credit unions, all of them have had to bring their price points down to be more in line with the technology budgets at a smaller institution. It’s all about having pricing that allows folks to get value quickly for not a lot of money and introduce efficiencies to their process.
Finally, we're trying to enable the technology providers that are increasingly partnering with the credit unions to do a better job of helping the credit unions meet their compliance obligations. We do that by automating some of the regulatory reporting that the NCUA and other examiners expect to see. And we do that by providing better analytics to the technology providers who themselves are serving the credit unions.
Finopotamus: Any closing thoughts?
Saleh: We're still very early in the adoption of these technologies, which means we haven't yet seen a bunch of AI systems run businesses into the wall, but that's going to happen. There's this thing called the AI incident tracker, which monitors the number of times AI systems do something terrible. The waters have been pretty choppy lately.
You've got a recession. You're coming off the heels of a pandemic. You've got a war in Europe, with energy shortages and unemployment, right? If you're underwriting in this environment and you're not governing these systems properly, you could pose a threat to the safety and soundness of your institution and to the consumers you want to serve. And that has terrible legal consequences in this country.