We’ve been seeing the headlines for years: “Researchers find flaws in the algorithms used…” for nearly every use case for AI. Most conclude that if the algorithm had only used the right data, been well vetted, or been trained to minimize drift over time, then the bias never would have happened. But the question isn’t whether a machine learning model will systematically discriminate against people; it’s who, when, and how.

There are several practical strategies you can adopt to instrument, monitor, and mitigate bias, starting with a disparate impact analysis.
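To make that concrete, here is a minimal sketch of one such check: computing a disparate impact ratio over a model’s decisions with pandas. The column names (`group`, `approved`), the example data, and the four-fifths threshold used as a flag are illustrative assumptions, not prescriptions from this article.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str,
                           protected_value, reference_value) -> float:
    """Ratio of favorable-outcome rates: protected group vs. reference group.

    A value below roughly 0.8 (the "four-fifths rule") is a common flag
    for potential disparate impact and a prompt for closer review.
    """
    protected_rate = df.loc[df[group_col] == protected_value, outcome_col].mean()
    reference_rate = df.loc[df[group_col] == reference_value, outcome_col].mean()
    return protected_rate / reference_rate

# Hypothetical model decisions: 1 = favorable outcome (e.g., application approved)
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   1,   0,   0,   0],
})

ratio = disparate_impact_ratio(decisions, "group", "approved",
                               protected_value="B", reference_value="A")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.25 / 0.75 ≈ 0.33 -> flag for review
```

Run on a regular cadence against production decisions, a simple monitor like this turns bias from a post-mortem finding into a metric you can track and alert on.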