Cruise, the driverless car startup acquired by GM for $581 million in 2016, today detailed in a blog post how its fleet of over 180 self-driving Chevrolet Bolts is learning to anticipate human drivers’ behaviors. It’s part of a new series the company is publishing on Medium called How Self-Driving Cars Think, each installment of which will spotlight a different component of Cruise’s autonomous stack.

“Every day, San Franciscans drive through six-way intersections, narrow streets, steep hills, and more. While driving in the city, we check mirrors, follow the speed limit, anticipate other drivers, look for pedestrians, navigate crowded streets, and more,” wrote Cruise software engineer Rachel Zucker and staff software engineer Shiva Ghose. “In SF, each car encounters construction, cyclists, pedestrians, and emergency vehicles up to 46 times more frequently than in suburban environments, and each car learns how to maneuver around these aspects of the city every day.”

One of these obstacles is double-parked cars, and there are lots of them. A car is 24 times more likely to encounter one in downtown San Francisco than in the suburbs, according to Cruise, making learning to maneuver around them safely a necessity.

In order to do this, Cruise’s cars must first identify them, which they accomplish by “looking” for a number of cues such as vehicles’ distance from road edges, the appearance of brake and hazard lights, and distance from the furthest intersection. They additionally use contextual cues like vehicle type (delivery trucks double-park frequently), construction activity, and the relative scarcity of nearby parking.
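The cue-combination idea above can be sketched as a simple weighted score. This is a hypothetical illustration, not Cruise's actual model: the feature names, thresholds, and weights are all assumptions chosen for readability.

```python
# Hypothetical sketch: combining double-parking cues into one score.
# All features, thresholds, and weights here are illustrative only.
from dataclasses import dataclass

@dataclass
class VehicleObservation:
    dist_from_road_edge_m: float   # how far the vehicle sits from the curb
    hazard_lights_on: bool         # brake/hazard light state seen by cameras
    dist_to_intersection_m: float  # distance from the intersection
    is_delivery_truck: bool        # vehicle-type prior (trucks double-park often)
    near_construction: bool        # contextual cue
    parking_scarcity: float        # 0.0 (plentiful parking) .. 1.0 (scarce)

def double_park_score(obs: VehicleObservation) -> float:
    """Return a heuristic score in [0, 1]; higher means more likely double-parked."""
    score = 0.0
    score += 0.25 if obs.dist_from_road_edge_m > 1.5 else 0.0
    score += 0.25 if obs.hazard_lights_on else 0.0
    score += 0.15 if obs.dist_to_intersection_m > 20.0 else 0.0
    score += 0.10 if obs.is_delivery_truck else 0.0
    score += 0.10 if obs.near_construction else 0.0
    score += 0.15 * obs.parking_scarcity
    return min(score, 1.0)
```

A stopped delivery truck sitting well off the curb with hazards on would score high; a car parked snugly at the curb with lights off would score near zero.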

Cruise’s Bolts perceive these things through an array of sensors: lidar sensors from Velodyne, short- and long-range radar sensors, articulating radars, and video cameras. Cameras recognize vehicle indicator light state and road features (such as safety cones or signage), while lidars and radars measure distance and speed, respectively. Machine learning models running on onboard computers then extract objects such as bikes, pedestrians, and other vehicles from the raw sensor streams.
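One way to picture the fusion step is to pair each camera detection (which supplies a label) with the lidar/radar return (which supplies distance and speed) pointing in roughly the same direction. The nearest-bearing matching below is an assumed, simplified stand-in for a real perception pipeline; all names are illustrative.

```python
# Hypothetical sketch of camera/lidar/radar fusion by bearing angle.
# The matching scheme and field names are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class CameraDetection:
    label: str          # classifier output, e.g. "bike" or "pedestrian"
    bearing_deg: float  # direction of the detection relative to the car

@dataclass
class RangeReturn:
    bearing_deg: float  # direction of the lidar/radar return
    range_m: float      # distance measured by lidar
    speed_mps: float    # speed measured by radar

def fuse_detections(cams, returns, max_sep_deg=5.0):
    """Pair each camera detection with the nearest range return by bearing."""
    fused = []
    for det in cams:
        best = min(returns,
                   key=lambda r: abs(r.bearing_deg - det.bearing_deg),
                   default=None)
        if best and abs(best.bearing_deg - det.bearing_deg) <= max_sep_deg:
            fused.append((det.label, best.range_m, best.speed_mps))
    return fused
```

The output is a list of labeled objects with distance and speed attached, which downstream planning logic can reason about.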


A type of AI architecture called a recurrent neural network (RNN) determines whether a vehicle is double-parked, given all available sensory and map information (including parking availability, road type, and lane boundaries). Zucker and Ghose note that RNNs are unique in their ability to remember long-term dependencies, which effectively enables Cruise’s cars to accumulate decision-making confidence over time.
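The key property here is that a recurrent model carries state from one frame to the next, so evidence accumulates rather than being judged frame by frame. The toy recurrence below (a simple exponential smoothing, standing in for a trained RNN cell) illustrates the idea; the smoothing factor and per-frame scores are assumptions.

```python
# Toy sketch of recurrent confidence accumulation. A real RNN learns its
# update; this exponential-smoothing recurrence just illustrates how a
# carried-over hidden state builds confidence across frames.
def accumulate_confidence(frame_scores, alpha=0.3):
    """Fold per-frame evidence into a persistent state, step by step.

    Each step blends the new observation with the carried-over state,
    so confidence rises (or decays) gradually instead of flickering
    with single-frame noise.
    """
    state = 0.0
    history = []
    for score in frame_scores:
        state = (1 - alpha) * state + alpha * score  # recurrent update
        history.append(round(state, 3))
    return history
```

Feeding in a run of high per-frame scores (a vehicle that keeps looking double-parked) makes the state climb steadily toward 1.0, while a single noisy frame barely moves it.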

