Like cereal, AI needs 'nutrition labels,' AI CEO Q&A


People should demand transparency in artificial intelligence just as they do with their breakfast food, says Mike Capps, whose Raleigh company Howso allows users to see how AI arrives at its conclusions.

"You'll want the same thing for what's deciding whether you get or if your kid gets into college," he said. "You want to have the same nutrition label on the side."

The former president of Epic Games, Capps co-founded Howso (originally named Diveplane) in 2018. Since then, the company has taken off. Today, organizations use it for major decisions surrounding health care, credit approvals and even paroles.

Capps contends too many current AI engines offer variations of "black box AI" that obscure how final judgments are made.

"We want people to stop using that crap and use ours instead," he said. "It's a big uphill battle, but we will achieve that."

In September, Howso open-sourced its AI engine, which allows users to design explainable artificial intelligence-driven platforms. And late last month, Capps traveled to Washington, D.C., to address the seventh U.S. Senate AI Insight Forum.

The next day, he spoke to The News & Observer about what big tech companies get wrong about AI, what he believes Howso gets right, and why everyone should care.

This conversation has been edited for clarity.

Q: Can you give an example of 'black box' artificial intelligence in use today?

A: Spotify is putting AI into its DJ feature, and they're growing their business with great AI that's black box. Is the DJ choosing music for you based on your history, or because Spotify would like you to see more artists like this? We don't know.

Q: How is Howso's engine different?

A: We do attributable AI, which means you can say, "If Brian gets the surgery, here are the 17 data points that were most important in that decision." I can attribute this decision back to these data points right here.

We have a client, Scanbuy, that works with all the big retailers to do customer intelligence. They have our tool built in so they can make predictions about what customers will buy, but do it in a way that's explainable.

N.C. State and UNC are both using it for some projects. There are a few other universities that aren't public yet that are working with it. Part of going open source is that anyone can use it. So it's pretty recent.

Other Howso clients include Mastercard, the Spanish insurance company Mutua Madrileña, and the Virginia Department of Behavioral Health and Developmental Services.
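To make the idea of attribution concrete, here is a minimal, hypothetical sketch. It is not Howso's engine or API; it uses scikit-learn's k-nearest-neighbors classifier to show the general pattern of instance-based attribution, and all features, data and the "surgery approval" framing are invented for illustration.

    # Illustrative sketch only -- not Howso's engine or API. It shows the broad
    # idea of instance-based, attributable AI: tracing a prediction back to the
    # specific stored cases that drove it. All data here is invented.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Hypothetical past cases: [age, prior_condition, blood_pressure]
    X_train = np.array([[65, 1, 120], [70, 0, 145], [50, 1, 110], [80, 1, 160]])
    y_train = np.array([1, 0, 1, 0])  # 1 = surgery approved, 0 = denied

    model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

    new_case = np.array([[68, 1, 130]])
    decision = model.predict(new_case)[0]
    print("Decision:", "approved" if decision == 1 else "denied")

    # The attribution step: name the stored cases this decision rests on.
    distances, indices = model.kneighbors(new_case)
    for dist, idx in zip(distances[0], indices[0]):
        print(f"  based on case {idx}: features={X_train[idx]}, "
              f"outcome={y_train[idx]}, distance={dist:.1f}")

The point is the kneighbors call: rather than an opaque tangle of weights, the model can name the exact stored cases a decision rests on, which is what makes an audit of that decision possible.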

Q: How is black box AI a problem overall?

A: The way I explain it is: Why do we care what's in cereal?

We put the ingredients on the side because if it's not there, do we really trust every food manufacturer to be doing it the right way? Now legally you have to do it. And we know not everybody even reads it, but the fact that it's there is part of how we know.

(Doing black box) is also bad software development.

We're talking about decisions that have serious human impact. If there is a bug in software, you can go fix it. But if you use a big black box AI system, you don't know if it has a bug or not. It's too complicated to figure it out. And there's no way to fix it.

There's no way to go back and say, "Why is it that Brian didn't get the promotion when the AI system said no?" If you can't audit it and you can't fix it, then you have this massive replacement cost.

Q: Parole decisions are obviously immensely important. How do you see existing AI falling short in this area?

A: Parole decisions are being made in a way that takes prior parole decisions, which may or may not be racially biased, and then applies them at scale.

We all know the court systems are behind. We all know we'd like a way to make them faster and more effective. But the trade-off you're making is we might be scaling racism for speed.

Q: What obstacles stand between the current AI landscape and the explainable AI future you'd like to see?

A: Truly transparent AI systems are just not what's popular these days. We've been beating the drum, along with Dr. (Cynthia) Rudin at Duke, that it's possible, that it works. We shouldn't keep going down this black box road.

Generally, people want this transparency, but there's concern that sharing all the data you use to train a big AI model means giving away part of the secret sauce.

But there was a great speaker (at the Senate hearing) saying if we can't work together and see how these models are trained, we can't get better together. So, it's basically protecting their financial space as opposed to helping the overall field improve.

© 2023 The News & Observer. Distributed by Tribune Content Agency, LLC.

