“Black box” artificial intelligence (AI) systems are designed to automate decision-making, mapping a user’s features to a class that predicts individual traits such as credit risk or health status, without revealing why. This is problematic not only because of the lack of transparency, but also because algorithms can inherit biases from human prejudices or hidden elements in the training data, leading to unfair or incorrect decisions.
As AI continues to proliferate, there is an increasing need for technology companies to demonstrate that they can trace back through the decision-making process, a capability called explainable AI. This essentially helps them understand why a certain prediction or decision was made, which factors were most important in making it, and how confident the model is in the result.
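To make this concrete, here is a minimal, hypothetical sketch of the idea: a toy linear credit-risk scorer whose decision can be traced back to a per-feature contribution, along with a confidence value. The feature names, weights, and applicant record are all invented for illustration and do not reflect Diveplane's actual technology.

```python
import math

# Hypothetical weights for a tiny linear credit-risk model
# (all values are made up for illustration).
WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "late_payments": -0.9}
BIAS = 0.2

def explain(applicant):
    # Each feature's contribution to the raw score is weight * value,
    # which is what lets us answer "why was this decision made?"
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    # Logistic squashing gives a rough "confidence" in [0, 1]
    confidence = 1 / (1 + math.exp(-score))
    decision = "approve" if confidence >= 0.5 else "deny"
    return decision, confidence, contributions

decision, confidence, contributions = explain(
    {"income": 1.2, "debt_ratio": 0.4, "late_payments": 0.0}
)
print(decision, round(confidence, 2))
# Factors ranked by magnitude show what drove the decision
for feat, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feat}: {c:+.2f}")
```

Real explainable AI systems use far richer attribution methods, but the output shape is the same: a decision, a confidence, and a ranked list of the factors behind it.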
To help instill user confidence that operational decisions are built on a foundation of fairness and transparency, Diveplane claims its products are designed around three principles: predict, explain and show.
Explosive growth in the AI software market
Raleigh, North Carolina-based Diveplane today announced that it has raised $25 million in series A funding to bolster its position in the AI software market and invest further in its explainable AI solutions that provide fair and transparent decision-making and data privacy.
Gartner estimates that the AI software market will reach $62 billion in 2022, and continue to grow at a rate of more than 30% through 2027. Diveplane claims it’s positioned to …