In yet another twist on the road to autonomous vehicles, self-driving cars are being taught to recognize and predict pedestrian behavior.
By collecting vehicle data through cameras, LiDAR (Light Detection and Ranging) and GPS, researchers at the University of Michigan have been capturing video snippets of humans in motion and recreating them in 3D computer simulations.
The result is a recurrent neural network that catalogs human movements, allowing it to predict the poses and future locations of pedestrians up to 150 feet from the vehicle.
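The published reports don't include the model itself, but the idea of a recurrent network rolling pedestrian positions forward can be sketched minimally. Everything below is illustrative: the hidden size, the random stand-in weights, and the toy track are assumptions, not the U-M system.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 16  # hidden-state size (illustrative choice)

# Randomly initialized weights stand in for trained parameters.
W_in = rng.normal(scale=0.1, size=(HIDDEN, 2))      # position -> hidden
W_h = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))  # hidden -> hidden
W_out = rng.normal(scale=0.1, size=(2, HIDDEN))     # hidden -> displacement

def predict(past, n_future):
    """Encode a sequence of observed (x, y) positions, then roll the
    recurrence forward autoregressively for n_future steps."""
    h = np.zeros(HIDDEN)
    for p in past:                        # encode the observed history
        h = np.tanh(W_in @ p + W_h @ h)
    pos, preds = past[-1], []
    for _ in range(n_future):             # rollout: feed predictions back in
        pos = pos + W_out @ h             # predicted per-step displacement
        preds.append(pos)
        h = np.tanh(W_in @ pos + W_h @ h)
    return np.array(preds)

observed = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.2]])  # toy track
future = predict(observed, n_future=5)
print(future.shape)  # (5, 2): five predicted (x, y) positions
```

With untrained weights the outputs are meaningless; the point is the structure — an encoder pass over observed frames, then a feedback loop where each prediction becomes the next input.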
“Equipping vehicles with the necessary predictive power requires the network to dive into the minutiae of human movement: the pace of a human’s gait (periodicity), the mirror symmetry of limbs and the way in which foot placement affects stability during walking,” states the university announcement of the research.
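One of the gait cues the researchers mention, periodicity, can be estimated from something as simple as the autocorrelation of a tracked joint's vertical motion. The signal, frame rate, and stride period below are all invented for illustration; this is not the U-M pipeline.

```python
import numpy as np

FPS = 30                       # assumed camera frame rate
t = np.arange(0, 4, 1 / FPS)   # four seconds of frames
true_period = 1.1              # seconds per stride (made up)
# Synthetic ankle-height signal: a sine plus a little noise.
ankle_y = (0.1 * np.sin(2 * np.pi * t / true_period)
           + 0.02 * np.random.default_rng(1).normal(size=t.size))

def estimate_period(signal, fps):
    """Return the dominant period (seconds) via the first strong
    autocorrelation peak past very short lags."""
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]  # non-negative lags
    min_lag = fps // 2            # assumes a stride takes longer than 0.5 s
    lag = min_lag + int(np.argmax(ac[min_lag:]))
    return lag / fps

period = estimate_period(ankle_y, FPS)
print(f"estimated stride period: {period:.2f} s")
```

On this synthetic signal the estimate lands close to the 1.1-second period that generated it; a real system would extract such features from tracked 3D poses rather than a clean sine.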
By running the video clips for several seconds, the university system can study the first half of the video to make predictions and then verify its accuracy with the second half, as detailed in a university video.
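That evaluation scheme — predict from the first half of a clip, score against the second half — is easy to sketch. The "predictor" here is a constant-velocity baseline standing in for the learned network, and the straight-line track is synthetic; both are assumptions for illustration.

```python
import numpy as np

# Toy 60-frame pedestrian track moving at a constant per-frame step.
track = np.cumsum(np.full((60, 2), [0.4, 0.05]), axis=0)
first, second = track[:30], track[30:]          # observed half, held-out half

velocity = first[-1] - first[-2]                # last per-frame displacement
steps = np.arange(1, second.shape[0] + 1)[:, None]
pred = first[-1] + steps * velocity             # extrapolate forward

# Average displacement error: mean distance between prediction and truth.
ade = np.mean(np.linalg.norm(pred - second, axis=1))
print(round(ade, 6))  # ~0.0 for this perfectly linear toy track
```

For a real pedestrian, who turns, pauses, or changes pace, the constant-velocity baseline degrades quickly with horizon length — which is exactly the gap a learned pose-aware predictor is meant to close.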
The research takes into account various things pedestrians may be doing, such as looking at their phone or walking while carrying a cup of coffee.
For the dataset needed to train the project’s neural network, researchers parked a vehicle with Level 4 autonomous features at several Ann Arbor intersections, recording several days of data at a time with the car’s cameras and LiDAR facing the intersection.
“Now, we’re training the system to recognize motion and making predictions of not just one single thing—whether it’s a stop sign or not—but where that pedestrian’s body will be at the next step and the next and the next,” says Matthew Johnson-Roberson, an associate professor in U-M’s Department of Naval Architecture and Marine Engineering.
Self-driving cars may not be clogging the roads any time soon, but at least they’re in the process of learning how to identify and predict the motions of pedestrians around them.