COMMENTARY: Amber light for AI and how finance companies should proceed with caution
Artificial intelligence (AI). In recent years, it has been touted as the future of nearly every industry. It is certainly the future of auto finance. In fact, it has already begun to transform several of the sector's key processes, such as credit decisioning. But there is some distance to go before AI can reliably underpin the vehicle credit process from end to end.
At this point, auto lenders should be using AI to supplement the existing underwriting process. Why? Because the technology can draw on data, such as customer spending habits, to enrich standard credit ratings and help underwriters determine whether to extend a loan. The goal is to integrate AI to support, not replace, the human in the loop.
Today, auto lenders use AI most heavily in the pre- and post-underwriting stages of the credit process. For example, before the credit decision they use AI to segment customers into risk groups in order to prevent fraud. After the decision, they use it in end-of-lease inspections, to assess damage and excessive wear and tear more quickly and consistently, or to run analyses on lost deals, when the customer chooses to seek credit elsewhere. And this is its best use, for now.
In the middle portion of the underwriting process, the yes/no decision on whether to grant credit, auto lenders should use AI in addition to current, robust, and replicable regression models, which use statistics to determine a customer's creditworthiness. AI can help feed more data into those models, or challenge their outputs. For this reason, we like to think of AI as augmented, rather than artificial, intelligence.
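As a rough illustration of this champion-challenger idea, the sketch below pairs a transparent, regression-style scorecard with a stand-in "AI" challenger that also weighs alternative data, and routes disagreements to a human underwriter. Every feature name, weight, and threshold here is invented for illustration and is not taken from any real lender's model.

```python
# Hypothetical champion-challenger sketch. All names, weights, and
# thresholds are invented for illustration only.

def scorecard_score(applicant):
    """Traditional regression-style scorecard: a transparent weighted sum."""
    return (
        0.5 * applicant["payment_history"]      # 0..1, higher is better
        + 0.3 * applicant["income_stability"]
        + 0.2 * applicant["debt_to_income_ok"]
    )

def challenger_score(applicant):
    """Stand-in for an AI challenger that also weighs alternative data,
    such as the spending-habit signals mentioned above."""
    return 0.8 * scorecard_score(applicant) + 0.2 * applicant["spending_habit_signal"]

def route(applicant, approve_at=0.6, disagreement=0.15):
    """Decide automatically only when champion and challenger agree;
    otherwise escalate to a human underwriter (the human in the loop)."""
    champion = scorecard_score(applicant)
    challenger = challenger_score(applicant)
    if abs(champion - challenger) > disagreement:
        return "human_review"
    return "approve" if champion >= approve_at else "decline"

applicant = {
    "payment_history": 0.9,
    "income_stability": 0.8,
    "debt_to_income_ok": 1.0,
    "spending_habit_signal": 0.2,
}
print(route(applicant))  # prints "approve"
```

The point of the design is that the AI never decides alone: it can only confirm the scorecard or trigger a human review, which is the "augment, don't replace" posture the article recommends.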
But why place a go-slow on the implementation of AI in credit, when the potential gains in underwriting are so great? Because the opportunity for getting it wrong is significant. And because any industry that deals with regulatory compliance will benefit hugely from co-creating deep structural changes with regulators.
For regulators, the outcome is what matters
Fortunately, regulators are very supportive of the use of AI in credit processes. But that doesn't mean they are blind to the downside. Machine learning models are known to discriminate if the training data set is not carefully curated. So, regulators are understandably cautious.
As a result, regulators require that the same validation processes that apply to traditional regression models also apply to AI. Specifically, auto lenders should be able to easily explain and interpret AI models. And AI models should yield consistent and reproducible results. It is not good enough to implement 'black box' models that consume scores of data points and deliver a decision that cannot be unpacked.
With current, traditional models, the provider can explain to the customer which parameters fed the model (payment history, for example), what happened during the decision process, and why they were given a particular answer. And this will remain the standard. Generally speaking, regulators will focus on outcomes. If those outcomes appear to be discriminatory, the regulatory risk, financial penalties, and reputational damage for lenders could be devastating.
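To make that explainability standard concrete, here is a hypothetical sketch of how a transparent scorecard can be unpacked parameter by parameter, so the weakest factor can be cited when explaining a decision to the customer. The weights and inputs are invented for illustration.

```python
# Hypothetical, transparent scorecard whose decision can be unpacked.
# Weights and inputs are invented for illustration only.

WEIGHTS = {"payment_history": 0.5, "income_stability": 0.3, "debt_to_income_ok": 0.2}

def explain(applicant):
    """Return the overall score plus each parameter's contribution,
    so the customer can be told why they got a particular answer."""
    contributions = {k: w * applicant[k] for k, w in WEIGHTS.items()}
    return sum(contributions.values()), contributions

score, parts = explain(
    {"payment_history": 0.4, "income_stability": 0.9, "debt_to_income_ok": 0.5}
)
print(round(score, 2))            # overall score
print(min(parts, key=parts.get))  # weakest factor to cite in the explanation
```

A black-box model offers no equivalent of the `contributions` breakdown above, which is exactly why regulators hold AI models to the same explainability bar.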
So yes, engaging with regulators may mean moving a little slower. But it also means that the industry will ultimately adopt standards that are best in class. Regulators don't demand uniformity in the way auto lenders use AI, as long as its use is fair. What they want is a vibrant industry that isn't brought low by bias.
AI in auto: Four key considerations for lenders
If AI is the future of our industry, what should auto lenders consider on their AI journey?
1. Make sure your use of AI is explainable. Be prepared to explain how AI is used at every stage of the credit process, end to end. Genpact's recent AI360 research shows that the majority (67%) of consumers worry about AI discriminating against them, and most also fear that AI will make decisions that affect them without their knowledge. To allay these concerns, be transparent with your customers. And know that regulators' attention won't be limited to the decisioning portion of the credit process.
2. Engage with regulators, early and sincerely. Describe your goals for the use of AI to regulators and explain the likely challenges. Regulators will want to be helpful.
3. Don't be made to feel like a laggard by fintech providers. The best use of AI in auto finance, for now, is before and after the credit decision, and as a challenger to powerful, stable regression models. Over-implementing could go very wrong, very quickly. And buying software that builds black-box models you can't explain won't go down well with customers or regulators.
4. Keep a human in the loop. AI is here to augment human decisions, not to replace decision-makers. Supporting your underwriters with enhanced information provided by AI and good governance processes will allow your firm to spend more time on strategic, relationship-based activities that provide the best experience and outcome for the customer who, after all, is the focus of these efforts.
When it comes to using AI in credit decisioning, then, auto lenders should stamp on neither the gas nor the brake. Don't hit the accelerator and haphazardly apply AI to every aspect of the credit decision process. But don't slam on the anchors either, forgoing the enormous speed and accuracy benefits of including AI in the process.
Instead, apply AI prudently: to automate rote tasks in the pre- and post-decision phases of the credit process, and to supplement traditional regression models and augment the decision-making of human underwriters along the way. And don't forget to engage with regulators on your AI journey.
Do this, and your auto finance company is sure to have a smooth ride. All the way to the bank.
Enrico Dallavecchia is the head of risk practice for North America and Niraj Juneja is the analytics consulting leader for banking and capital markets at Genpact, a global professional services firm.