Artificial intelligence and banking: Why representation matters

When we think of artificial intelligence, the first things that usually come to mind are depictions from popular culture. Star Wars’ R2-D2. Knight Rider’s KITT. 2001: A Space Odyssey’s HAL.

Artificial intelligence (AI), however, doesn’t just exist in futuristic movies. It’s already a part of the way we live, shop, work and bank. Rather than a robot gone rogue, AI is simply technology programmed by humans with the ability to memorize information, learn from experience, communicate facts, and/or make decisions.

And because humans are the ones creating AI, we must ask the question: what are we teaching our machines, and what are they learning from us?

Unconscious Bias and Artificial Intelligence

Recently, the AI Now Institute at NYU found that only 18% of authors at leading AI conferences were women, 80% of professors teaching AI were men, and Black workers made up just 2.5% of Google’s workforce and 4% of Microsoft’s. Looking at these statistics, we can begin to imagine some of the unconscious biases that might be passed on to AI technology.

The New York Times reported an example of this last year. After a husband and wife applied for the new Apple Card, they discovered the credit limit given to the wife was 20 times lower than her husband’s, despite the fact that the wife had a higher credit score. The disparity prompted New York State regulators “to investigate the algorithm used by Apple Card to determine the creditworthiness of applicants.” As you can see, giving AI too much control can have fair lending implications, lead to policy decisions that inadvertently harm a protected group of individuals, and open an institution up to possible litigation.

Questions to Ask Your Fintech Partners

If your financial institution is planning to work with a fintech partner to develop and implement AI technology, it’s important to ask the following questions:

What are your company’s demographics?

When the members of a team represent a wide range of races, genders, abilities, and backgrounds, they bring a 360-degree view of how an individual might be affected by the technology. This makes it much easier to identify and avoid issues that a more homogeneous group might not recognize.

What bias assumptions does your team make, and what are your mitigation controls?

Unconscious bias cannot be completely eliminated—it’s part of the human experience. That’s why it’s so important to have a plan to help mitigate bias. Asking this question is a great way to discern if a company understands the presence of bias, and has a plan for addressing it when it appears.
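One concrete mitigation control a partner might describe is routinely measuring outcome disparities across applicant groups before a model goes live. As a hypothetical illustration (the function names, data, and threshold here are assumptions, not any specific vendor’s implementation), the “four-fifths rule” commonly used in fair lending analysis flags a concern when one group’s approval rate falls below 80% of another’s:

```python
# Illustrative sketch of a simple bias-monitoring control.
# All data below is made up for demonstration purposes.

def approval_rate(decisions):
    """Fraction of applications approved (decisions are True/False)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group approval rate to the higher one."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model decisions for two applicant groups
group_a = [True, True, True, False, True]    # 80% approved
group_b = [True, False, False, False, True]  # 40% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below the four-fifths threshold; review the model for bias.")
```

A check like this doesn’t eliminate bias, but it gives a team an early, measurable signal that a model’s outcomes deserve human review.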

Can I speak with the lead developers on the project?

At some point, it’s important to talk with the team building the technology, as well as the customers who are live on it. This will give you a better picture of which groups are represented and how the technology works in the real world.

By taking the time to ask questions, investigate assumptions, and prioritize representation, you can ensure that your financial institution’s efforts to develop, implement and deploy AI reflect a true step forward for your customers and your community.

– Zedrick Applin, Principal Product Manager, nCino