DSNews delivers stories, ideas, links, companies, people, events, and videos impacting the mortgage default servicing industry.
Issue link: http://digital.dsnews.com/i/1470604
The development and widespread use of AI in the lending process was born of the industry's desire to become "color blind" to its applicants. Lenders saw the opportunity to expand credit in a less-biased way and to improve their ability to determine the creditworthiness of borrowers with limited credit histories. With thousands of rules and regulations governing the mortgage market, and watchdogs already in place, unclear guidelines crudely applied to lenders trying to expand credit through innovations like AI will lead to reduced access to credit for many of the consumers we are trying to help.

RUSTY AXE VS. SURGEON'S KNIFE

Regulators in Washington, D.C., must be careful not to wield a rusty axe when a surgeon's knife is warranted. The CFPB's recent comments are a case in point: little to no clarity is given around the Bureau's guidelines, an ambiguity that can result in overreach in its regulation and enforcement activities. The result could be increased fees, fewer services, and diminished access to affordable credit for aspiring homeowners, as lenders and others in the mortgage industry grow more skittish about innovation and product development, weighing the risk that they will have to defend themselves in court against an accusation or simply give up and pay the imposed fine. Without a thorough and prudent understanding of their appropriate role, the agencies could produce harmful unintended consequences.

To be sure, for at least the last 10 years I have said repeatedly that lenders need to be careful in their use of AI (as in the panel "Machine Learning on the Ground—Problems and Challenges," moderated by Dain Ehring, part of the Machine Learning in Lending Summit, September 2017).
I have cautioned about the use of AI not because I worried that the technology might be misused or that AI algorithms won't improve decision-making, but because lending organizations need to be aware of the aggressive tactics used by regulators and consumer advocates. When the inevitable lawsuit comes from an aggressive private lawyer, or potentially the CFPB itself, it is very difficult to go back and replay the algorithm that warranted a particular credit decision. The reason is that most AI environments are not "deterministic" like a rules engine; rather, they are more appropriately classified as "stochastic," with algorithms that learn through experience. This means that a decision made a decade ago is very difficult to recreate, however accurate it may have been at the time.

There are tools being developed to address some of these critical challenges. Wells Fargo is exploring a technology called Explainable AI that allows its users to break down and understand the math in AI algorithms. Explainable AI here applies to a form of computer "artificial neural network," or simply "neural network," inspired by our system of biological neurons. These artificial neural networks use an activation function known as the Rectified Linear Unit, or "ReLU," which defines an output for a given input (or set of inputs) and can be broken down and examined to explain its results. "ReLU Neural Networks can be decomposed and represented exactly into linear sub-models, and you can see which factor is the most significant—you can see it very clearly, just like traditional statistical models," said Agus Sudjianto, EVP and Head of Corporate Model Risk at Wells Fargo ("XAI Explained at GTC: Wells Fargo Examines Explainable AI for Modeling Lending Risk," NVIDIA, 2021). Nevertheless, while I am a true fan of AI, I am not a fan of expensive lawsuits that harm the industry and do little to advance homeownership.
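The "linear sub-models" Sudjianto describes follow from a mathematical property of ReLU networks: they are piecewise linear, so at any particular input only some hidden units are active, and the network collapses to an exact local linear model whose coefficients can be read off like those of a traditional statistical model. A minimal sketch of the idea follows; it is illustrative only, not Wells Fargo's tooling, and all weights and feature names are invented for the example.

```python
# Illustrative sketch: extract the exact local linear sub-model of a
# tiny one-hidden-layer ReLU network at a given input. All numbers and
# feature names are hypothetical, not any lender's actual model.
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 3 inputs, 4 hidden ReLU units, 1 output score.
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

def network(x):
    """Forward pass: score = W2 @ relu(W1 @ x + b1) + b2."""
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def local_linear_model(x):
    """Exact linear sub-model active at input x."""
    active = (W1 @ x + b1 > 0).astype(float)   # which ReLU units fire
    w = W2 @ (active[:, None] * W1)            # effective weights
    b = W2 @ (active * b1) + b2                # effective bias
    return w, b

x = np.array([0.35, 0.80, 0.62])               # one applicant's features
w, b = local_linear_model(x)

# The sub-model reproduces the network's output exactly at x ...
assert np.allclose(network(x), w @ x + b)

# ... and its per-feature contributions can be ranked, "just like
# traditional statistical models."
print(dict(zip(["feature1", "feature2", "feature3"],
               (w.flatten() * x).round(3))))
```

The point of the sketch is the assertion in the middle: within the region where the same ReLU units are active, the network *is* a linear model, which is what makes decompositions of this kind exact rather than approximate.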
For this reason, I still recommend using AI in decisioning only alongside a very careful analysis of the organization's business risk and full acceptance of that risk by senior leadership with fiduciary-duty approval. In 2021 alone, the Department of Justice, together with the CFPB, collected $5.6 billion in settlements from lenders. Even by government standards, that's real money (https://tinyurl.com/mr3npmu4). While I could concede the potential, without guardrails, for the CFPB's AI concerns (digital redlining/robo-determination; "black box" underwriting; exacerbated bias of the

Feature By: Dain Ehring