Should self-driving cars come with black box recorders?

Aug 1, 2022 | Technology


Every commercial airplane carries a “black box” that preserves a second-by-second history of everything that happens in the aircraft’s systems as well as of the pilots’ actions, and those records have been priceless in figuring out the causes of crashes.

Why shouldn’t self-driving cars and robots have the same thing? It’s not a hypothetical question.

Federal transportation authorities are investigating a dozen crashes involving Tesla cars equipped with its “AutoPilot” system, which allows nearly hands-free driving. Eleven people died in those crashes, one of whom was hit by a Tesla while he was changing a tire on the side of a road.

Meanwhile, every major car company is ramping up its automated driving technology. Even Walmart is partnering with Ford and Argo AI to test self-driving cars for home deliveries, and Lyft is teaming up with the same companies to test a fleet of robo-taxis.


But autonomous systems go well beyond cars, trucks, and robot welders on factory floors. Japanese nursing homes use “care-bots” to deliver meals, monitor patients, and even provide companionship. Walmart and other stores use robots to mop floors. At least a half-dozen companies now sell robot lawnmowers. (What could go wrong?)

And more daily interactions with autonomous systems may bring more risks. With those risks in mind, an international team of experts — academic researchers in robotics and artificial intelligence as well as industry developers, insurers, and government officials — has published a set of governance proposals to better anticipate problems and increase accountability. One of its core ideas is a black box for any autonomous system.

“When things go wrong right now, you get a lot of shoulder shrugs,” says Gregory Falco, a co-author who is an assistant professor of civil and systems engineering at Johns Hopkins University and a researcher at the Stanford Freeman Spogli Institute for International Studies. “This approach would help assess the risks in advance and create an audit trail to understand failures. The main goal is to create more accountability.”

The new proposals, published in Nature Machine Intelligence, focus on three principles: preparing prospective risk asses …

