The Consumer Financial Protection Bureau (CFPB) understands that tools such as artificial intelligence and machine learning are making their way into underwriting departments at auto finance companies and other providers of financial services.
It’s why CFPB director Kathleen Kraninger recently reiterated that the regulator is keeping close tabs on the potential impact artificial intelligence (AI) and machine learning could have on deciding who is approved for credit.
Kraninger made the remarks at the TCH + BPI Annual Conference, hosted by The Clearing House and the Bank Policy Institute, a gathering where financial services executives, regulators, policymakers, and academics discuss the changing regulatory landscape and the future of payments. Kraninger emphasized that the bureau is strongly committed to helping spur innovation while remaining mindful of possible risks.
“Alternative modeling techniques, such as the use of machine learning algorithms, have the potential to expand access to credit for some of the approximately 45 million Americans with no or thin credit files,” Kraninger said. “The technologies also can make models more efficient, leading to faster decision times and potentially reducing the cost of credit. Given these potential benefits, we see these technologies as important to our mission.
“Despite AI’s potential to expand access to credit, uncertainty about how AI fits into the existing regulatory framework may be hindering adoption of the technology, especially for credit underwriting,” she continued.
“One issue we have heard a lot about is whether complex AI models are compatible with the adverse action notice requirements in the Equal Credit Opportunity Act (ECOA) and the Fair Credit Reporting Act (FCRA),” Kraninger went on to say. “For example, ECOA requires creditors to explain to consumers the main reasons for a denial of credit or other adverse action. FCRA includes additional requirements for credit report and similar information used in taking adverse action.”
Kraninger noted that the CFPB is aware that tools and technologies for accurately explaining complex AI decisions continue to develop.
“These developments hold great promise as ways to comply with the adverse action notice requirements,” she said.
Furthermore, Kraninger mentioned the regulator is interested in exploring three specific areas of regulatory uncertainty around adverse action notices. They include:
— Methods for determining the main reasons for a denial of credit or other adverse action
— The accuracy of explainability methods
— Experimentation on how to convey the reasons in a manner that accurately reflects the factors used in a model and is understandable to the consumer
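To make the first item above concrete: one long-standing industry practice for generating adverse action reason codes is the "points below max" method, which ranks each scoring factor by how far the applicant's points fall short of the best possible points for that factor. The sketch below is purely illustrative, not CFPB guidance or any lender's actual model; the factor names, weights, and values are all hypothetical.

```python
# Illustrative "points below max" reason-code sketch (hypothetical model,
# not CFPB guidance). For each factor in a simple linear scoring model,
# compute the points lost versus the best achievable value, then report
# the largest shortfalls as the main reasons for the adverse action.

# Hypothetical factor weights and the best-scoring value for each factor.
WEIGHTS = {"payment_history": 0.50, "utilization": -0.30, "file_age_years": 0.02}
BEST_VALUES = {"payment_history": 1.0, "utilization": 0.0, "file_age_years": 20}

def reason_codes(applicant, top_n=2):
    """Rank factors by points lost relative to the best possible value."""
    shortfalls = {}
    for factor, weight in WEIGHTS.items():
        best_points = weight * BEST_VALUES[factor]
        actual_points = weight * applicant[factor]
        shortfalls[factor] = best_points - actual_points
    ranked = sorted(shortfalls, key=shortfalls.get, reverse=True)
    return ranked[:top_n]

applicant = {"payment_history": 0.6, "utilization": 0.9, "file_age_years": 2}
print(reason_codes(applicant))  # the two factors costing the most points
```

The regulatory question Kraninger raises is whether methods like this, or newer explainability techniques, identify the "main reasons" accurately enough when the underlying model is a complex machine learning system rather than a simple weighted scorecard.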
“The bureau intends to leverage experiences gained through the innovation policies,” Kraninger said. “For example, applications granted under the innovation policies, as well as other stakeholder engagement with the bureau, may ultimately be used to help support an amendment to a regulation or its commentary.”