This piece originally appeared in the June 2022 edition of DS News magazine, online now.
Recent comments by Rohit Chopra, Director of the Consumer Financial Protection Bureau (CFPB), indicate that the Bureau will begin to focus increasingly on the potential misuse of artificial intelligence (AI) in lending decisions.
Specifically, Director Chopra has noted that AI may result in “digital redlining” and “robo-discrimination,” previously stating that he believed “black box underwriting algorithms are not creating a more equal playing field and only exacerbate the biases fed into them.”
The CFPB’s attention on AI arrives against the backdrop of the technology gaining significant momentum in mortgage lending. According to Forbes (Dec '21), “83% of [lender] executives report that AI is important to the future of the industry.”
While guardrails do need to be laid down for AI in mortgage lending, we must be careful that government regulators, the CFPB included, do not become a drag on the very thing most of our policymakers and government leaders want: mortgage financing available to families in need of affordable, quality housing, and with it more equitable homeownership.
Supporters of AI, myself included, believe the technology has the potential to deliver more accurate underwriting decisions and to increase the availability of credit to those in need. Critics of AI, as evidenced in Chopra’s comments, are skeptical of its data accuracy, worried that applicants will be unfairly excluded, and convinced that AI underwriting processes remain opaque. This last line of reasoning is curious to me, given the opaqueness inherent in manual underwriting decisions made by sentient humans.
The development and widespread use of AI in the lending process was born of the industry’s desire to become “color blind” to its applicants. Lenders saw an opportunity to expand credit in a less-biased way and to improve their ability to determine the creditworthiness of borrowers with limited credit histories.
With thousands of rules and regulations governing the mortgage market and watchdogs already in place, unclear guidelines crudely applied to those lenders trying to expand credit through innovations like AI will lead to reduced access to credit for many of the consumers we are trying to help.
Rusty Axe vs. Surgeon’s Knife
Regulators in Washington, D.C., must be careful not to wield a rusty axe when a surgeon’s knife is warranted. The CFPB’s recent comments are a case in point: little to no clarity has been given around the Bureau’s guidelines, an ambiguity that can result in overreach in its regulation and enforcement activities. The result could be increased fees, fewer services, and diminished access to affordable credit for aspiring homeowners, as lenders and others in the mortgage industry grow skittish about innovation and product development, weighing the risk that a mere accusation will force them to defend themselves in court or simply give up and pay the imposed fine. Without a thorough and prudent understanding of their appropriate role, the agencies could produce harmful unintended consequences.
To be sure, for at least the last 10 years, I have said repeatedly that lenders need to be careful in their use of AI (as in the panel “Machine Learning on the Ground—Problems and Challenges,” moderated by Dain Ehring, part of the Machine Learning in Lending Summit, September 2017). I have cautioned about the use of AI not because I worried that the technology might be misused or that the AI algorithms won’t improve decision-making, but because lending organizations need to be aware of the aggressive tactics used by regulators and consumer advocates.
When the inevitable lawsuit comes from an aggressive private lawyer, or potentially the CFPB itself, it is very difficult to go back and replay the algorithm that warranted a particular credit decision. The reason is that most AI environments are not “deterministic” like a rules engine; rather they are more appropriately classified as “stochastic,” with algorithms that learn through experience. This means that the decision made a decade ago is very difficult to recreate, however accurate it may have been at that time.
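A minimal Python sketch makes the contrast concrete (the thresholds and update rule below are invented purely for illustration and do not represent any real underwriting logic): a rules engine can be replayed years later with identical results, while a model that learns through experience drifts away from its past parameters, so an old decision cannot be recreated unless those parameters were versioned at the time.

```python
# Deterministic rules engine: replaying the same inputs always yields the same decision.
def rules_engine(credit_score, dti):
    return credit_score >= 640 and dti <= 0.43

# A model that "learns through experience": its parameters drift as new outcomes
# arrive, so recreating a past decision requires the parameters as they existed
# at the time, not as they exist today.
class AdaptiveModel:
    def __init__(self):
        self.threshold = 640.0

    def decide(self, credit_score):
        return credit_score >= self.threshold

    def learn(self, outcomes):
        # Toy update rule: raise the cutoff after defaults, relax it after repayments.
        for score, defaulted in outcomes:
            self.threshold += 3.0 if defaulted else -0.5

model = AdaptiveModel()
decision_then = model.decide(645)          # approved under the original parameters
model.learn([(645, True), (650, True)])    # new experience shifts the model
decision_now = model.decide(645)           # the very same applicant is now denied

assert rules_engine(700, 0.30) == rules_engine(700, 0.30)  # always replayable
assert decision_then and not decision_now  # the old decision cannot be replayed today
```

Versioning model parameters alongside each decision is one way lenders can preserve the ability to defend a past credit determination.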
There are tools being developed to address some of these critical challenges. Wells Fargo, for example, is exploring a technology called Explainable AI that allows its users to break down and understand the math inside AI algorithms.
Explainable AI applies to a form of computer model called an “artificial neural network,” or simply “neural network,” inspired by our system of biological neurons.
These networks use an activation function known as the Rectified Linear Unit, or “ReLU,” which maps each input (or set of inputs) to an output and can be broken down and examined to explain its results. “ReLU Neural Networks can be decomposed and represented exactly into linear sub-models, and you can see which factor is the most significant—you can see it very clearly, just like traditional statistical models,” said Agus Sudjianto, EVP and Head of Corporate Model Risk at Wells Fargo (“XAI Explained at GTC: Wells Fargo Examines Explainable AI for Modeling Lending Risk,” NVIDIA, 2021).
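The decomposition Sudjianto describes can be sketched in a toy example (the network, weights, and input features below are invented for illustration, not Wells Fargo’s model): at any given input, the set of ReLU units that “fire” determines an exact linear sub-model whose coefficients can be read off like those of a traditional statistical model.

```python
import numpy as np

# Tiny ReLU network: 3 inputs -> 4 hidden units -> 1 output.
# Weights are random stand-ins; a real lending model would be trained on loan data.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3)); b1 = rng.normal(size=4)
W2 = rng.normal(size=(1, 4)); b2 = rng.normal(size=1)

def forward(x):
    h = np.maximum(0, W1 @ x + b1)  # ReLU: only "firing" units pass through
    return W2 @ h + b2

def local_linear_model(x):
    """Return (coef, intercept) of the exact linear sub-model active at x."""
    active = (W1 @ x + b1 > 0).astype(float)  # which ReLU units fire at this input
    coef = (W2 * active) @ W1                 # the network collapses to a linear map
    intercept = (W2 * active) @ b1 + b2
    return coef, intercept

# Illustrative applicant features, e.g. scaled income, DTI, credit-history length.
x = np.array([0.5, -1.0, 2.0])
coef, intercept = local_linear_model(x)

# The sub-model reproduces the network's output exactly at x, and its
# coefficients expose each input's contribution to the decision.
assert np.allclose(coef @ x + intercept, forward(x))
```

Each region of input space has its own such sub-model, which is what lets an examiner ask, for a specific applicant, which factor mattered most.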
Nevertheless, while I am a true fan of AI, I am not a fan of expensive lawsuits that harm the industry and do little to advance homeownership. For this reason, I still recommend using AI in decisioning only alongside a very careful analysis of the organization’s business risk and full acceptance of that risk by senior leadership with fiduciary duty approval. In 2021 alone, the Department of Justice with the CFPB collected $5.6 billion in settlements from lenders. Even by government standards, that’s real money.
While I concede that, without guardrails, the CFPB’s AI concerns have some merit (digital redlining and “robo-discrimination,” “black box” underwriting, bias exacerbated by the poor data fed into the algorithms), I will point out that there are already best practices in place to ensure proper treatment of protected classes, many summarized in a May 2019 Brookings article (“Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms”), which I believe most lenders are already employing.
Where Director Chopra and I really differ is his belief that the algorithms can never "be free of bias" and may result in credit determinations that are unfair to consumers. The real question is: compared to what?
The Unbottled Genie
Technology is, by its design, agnostic and less prone to predisposition and profiling than the minds of human beings. Lenders are already forced to navigate the potential risk of bias—or even the perception of same—and to be able to prove that lack of bias at any given time. As a technologist, I disagree with the blanket bias premise.
I believe the use of algorithms, rather, is pro-competitive and expansive of credit on the whole, especially for qualified, under-banked groups that have neither a credit score nor the required documentation necessary to obtain credit to buy a new home.
Worldwide, AI in banking is currently a $4 billion market and is expected to grow to $64 billion by 2030. At this point, I believe AI in the industry is here to stay. Mohammed Rashid, Head of Fintech for Tavant, put it this way: “Tavant, as a solution provider for many of the nation’s top home lenders, sees AI as a very important technology that can aid the cost and time of loan origination, remove the inherent bias and errors of human interaction in loan qualification, and remove many of the obstacles facing the potential home buyer from getting their dream home. Today, most of our lenders are currently using, have implemented, or have plans to implement AI.”
There are many great uses of AI in our housing finance market outside of the underwriting and credit decisions. A few examples include automation of loan processes and best execution of operations that removes time and cost of a loan application; automation of customer services and loan servicing functions, including default management and loss mitigation; disparate impact analysis and reporting; detection of fraud; and enhancements to privacy and enterprise security. There are many more opportunities and applications of AI that, again, do not influence the credit decision, or the price of the home.
Ultimately, however, the use of AI has the potential to bring a huge positive impact to the mortgage industry, which would benefit both consumers and lenders alike. It also has the potential to create more equity in homeownership for those who are underbanked. While home lenders will need to be very discerning as they implement artificial intelligence (especially in decisioning), it is important that they seek opportunities to partner with consumer advocates.
I strongly believe that all stakeholders in lending embrace equity in homeownership and the pursuit of sustainable housing for everyone.
Ever since my college days, when I earned an advanced degree in astrophysics, I have been an ardent admirer of Stephen Hawking, who long ago opined on the topic of AI: “The genie is out of the bottle. We need to move forward on artificial intelligence development, but we also need to be mindful of its very real dangers. I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that replicates itself. This will be a new form of life that will outperform humans.”