A.I. has a discrimination problem. In banking, the consequences can be severe

by | Jun 23, 2023 | Financial

Artificial intelligence algorithms are increasingly being used in financial services — but they come with some serious risks around discrimination.

AMSTERDAM — Artificial intelligence has a racial bias problem.

From biometric identification systems that disproportionately misidentify the faces of Black people and minorities, to applications of voice recognition software that fail to distinguish voices with distinct regional accents, AI has a lot to work on when it comes to discrimination.

And the problem of amplifying existing biases can be even more severe when it comes to banking and financial services.

Deloitte notes that AI systems are ultimately only as good as the data they're given: incomplete or unrepresentative datasets could limit AI's objectivity, while biases in the development teams that train such systems could perpetuate that cycle of bias.

A.I. can be dumb

Nabil Manji, head of crypto and Web3 at Worldpay by FIS, said a key thing to understand about AI products is that the strength of the technology depends a lot on the source material used to train it.

"The thing about how good an AI product is, there's kind of two variables," Manji told CNBC in an interview. "One is the data it has access to, and second is how good the large language model is. That's why the data side, you see companies like Reddit and others, they've come out publicly and said we're not going to allow companies to scrape our data, you're going to have to pay us for that."

As for financial services, Manji said a lot of the backend data systems are fragmented in different languages and formats.

"None of it is consolidated or harmonized," he added. "That is going to cause AI-driven products to be a lot less effective in financial services than it …
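Deloitte's point about unrepresentative datasets can be made concrete with a toy sketch. Everything below is our own illustration, not anything from the article: the applicant data, the mean-score "model," and the four-fifths disparate-impact heuristic are all assumptions chosen to show how a decision rule calibrated on a sample dominated by one group can end up rejecting an equally qualified group at a far higher rate.

```python
# Hypothetical illustration of sampling bias skewing a model's decisions.
# Data and decision rule are invented for this sketch.

def approval_rate(decisions):
    """Fraction of True (approved) decisions in a list."""
    return sum(decisions) / len(decisions)

# Toy training applicants: (group, score on a single proxy feature).
# Group "A" dominates the sample; the lone "B" applicant scores lower
# on this proxy even though the groups are equally creditworthy.
train = [("A", s) for s in (70, 75, 80, 85, 90)] + [("B", 55)]

# "Model": approve anyone at or above the mean score of the training sample.
threshold = sum(score for _, score in train) / len(train)

# A balanced evaluation set, scored by the same rule.
test = [("A", 72), ("A", 88), ("B", 58), ("B", 66)]
decisions = {"A": [], "B": []}
for group, score in test:
    decisions[group].append(score >= threshold)

rate_a = approval_rate(decisions["A"])
rate_b = approval_rate(decisions["B"])
print(f"threshold={threshold:.1f}  approval A={rate_a:.0%}  B={rate_b:.0%}")

# The "four-fifths rule" heuristic flags disparate impact when the
# disadvantaged group's approval rate is under 80% of the other group's.
print("disparate impact flagged:", rate_b < 0.8 * rate_a)
```

Because the threshold was fit almost entirely to group A's score distribution, every group-B applicant in the balanced test set is rejected, and the four-fifths check fires. Real fairness audits use the same idea at scale: compare outcome rates across groups rather than trusting the training data to be representative.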

