Self-Driving Cars Could See Like Humans
Researchers are testing a computer program for robotically driven or assisted vehicles that combs visual information for lane markings and roadway edges.
CREDIT: Ben Riggan, North Carolina State University
Researchers have taken a significant step in computer vision that could someday allow cars to drive by themselves or react in crisis situations where a human driver becomes incapacitated.
Rather than relying on expensive and error-prone technologies like radar or the Global Positioning System, the new method for robotic car steering is modeled on how real people actually see the road right in front of them.
When it comes to driving, "Humans do everything with vision . . . [we] are really, really good and we're a lot smarter and more efficient than computers," said Wesley Snyder, a professor of electrical and computer engineering at North Carolina State University (NCSU) and co-author of a new paper describing the ongoing research.
Nevertheless, human drivers make mistakes all the time, as grimly indicated by the traffic accidents that claimed 34,000 lives in the United States last year, according to the National Highway Traffic Safety Administration.
To cut down on bad accidents and everyday fender-benders while making driving less of a hassle, researchers have sought for decades to electronically assist or even replace the human driver. Current cutting-edge technologies include collision- and lane-departure warning systems found in some luxury cars.
But these features fall far short of the robotic self-driving that could handle the high-speed yet tedious trips on highways and rural roads where many traffic deaths occur.
Taking the good of human driving
To figure out how people reliably assemble all the visual information on a roadway – lane markings, traffic signs, other vehicles, obstacles, and so on – Snyder and his colleagues are testing a computer program that visually forms a "consensus" about street conditions. The program maps out the straight lines and objects detected by a camera mounted on a vehicle.
"The basic idea is if you look at a scene, what you're really interested in is the size of the road," said Snyder. "It might be curving, there might be intersections . . . but the problem is that there's lots of other stuff out there, such as trees and buildings, all of which contribute straight lines to the image."
The NCSU computer program weeds through all this potentially confounding information by using a technique called Parametric Transform for Multi-lane Detection. Essentially, the program accumulates data points and "looks for evidence that suggests where the sides of the road and the lane markings are," said Snyder.
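The article doesn't spell out the NCSU group's exact parametric transform, but the accumulate-and-vote idea it describes is the same one behind the classic Hough transform: every edge pixel votes for all the straight lines that could pass through it, and strong peaks in the accumulator mark likely road edges and lane markings. A minimal sketch, with made-up synthetic points standing in for detected edge pixels:

```python
import math
from collections import Counter

def hough_lines(edge_points, n_theta=180, rho_step=2):
    """Accumulate votes for lines in (rho, theta) parameter space.

    Each edge pixel votes for every line that could pass through it;
    peaks in the accumulator correspond to dominant straight edges,
    such as lane markings or the sides of the road.
    """
    acc = Counter()
    for x, y in edge_points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            # Distance from the origin to the line with normal angle theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(round(rho / rho_step), t)] += 1
    return acc

def strongest_lines(acc, k=2):
    """Return the k highest-voted (rho, theta) cells -- the 'consensus'."""
    return [cell for cell, _ in acc.most_common(k)]

# Synthetic edge pixels along two 'lane boundaries': vertical lines x=10 and x=50
edges = [(10, y) for y in range(0, 100, 5)] + [(50, y) for y in range(0, 100, 5)]
acc = hough_lines(edges)
peaks = strongest_lines(acc)  # the two vertical lines win the vote
```

Trees and buildings also contribute straight lines to such an image, which is why the real system must weigh the evidence rather than simply take every peak as a lane.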
From this basic, but critical gauging of the shape of the roadway, a robotic vehicle can then adopt the correct steering angle and speed to stay safely within a lane.
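The article does not describe the steering law itself. As a purely illustrative sketch, once the lane boundaries have been located at a reference row near the bottom of the image, a simple proportional controller can turn the lateral offset from the lane center into a steering correction (the function, parameters, and gain below are hypothetical, not the paper's method):

```python
def steering_angle(left_x, right_x, image_width, gain=0.5):
    """Proportional steering correction from detected lane boundaries.

    left_x / right_x: pixel columns where the lane edges cross a
    reference row near the bottom of the image. Positive output means
    steer right, negative means steer left.
    """
    lane_center = (left_x + right_x) / 2.0
    camera_center = image_width / 2.0
    # Normalized lateral error in [-1, 1]
    error = (lane_center - camera_center) / (image_width / 2.0)
    return gain * error

# Centered in the lane: no correction needed
centered = steering_angle(100, 300, image_width=400)
# Lane center sits right of the camera center: steer right
offset = steering_angle(150, 350, image_width=400)
```

A production system would also account for speed and road curvature, but the sketch shows how a basic gauging of lane geometry feeds directly into a steering decision.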
The computer program detects multiple lanes simultaneously, and picks out objects on a roadway better than previous attempts, Snyder said. On test runs, the program has guided the actions of human drivers both in full-size automobiles and in electrically powered children's toy cars driven through the hallways at NCSU.
Promisingly, the computer program is not thrown off by shadows, colors, and textures, and it can handle both straight and curving roadways, Snyder said.
Yue Wang, a senior research fellow in computer vision and image understanding at Singapore's Institute for Infocomm Research who is not affiliated with the NCSU work, agreed that the program should work fine on a well-constructed road with "strong lane boundary or lane mark[ings]."
Getting the basics down
For now, the approach is decidedly low-tech: it involves using a laptop to analyze one or two snapshots of the roadway per second – far too slow to drive at any appreciable speed, said Snyder.
The bottom-up technique of deciding where road lanes are located also has failure modes, such as mistaking a wide pedestrian sidewalk for another lane on the road.
Eventually, Snyder hopes to vastly increase the computing power and give the accumulator algorithm a go in real-time traffic.
"Though our goal is autonomous driving, [our research] is in the context of reasonable applications," Snyder told TechNewsDaily.
He said a more practical, nearer-term advantage of a computer system like that under development at NCSU would be to guide a vehicle with a stricken driver safely to the side of a road.
"If you’ve had a heart attack or hypoglycemia [low blood sugar from diabetes] or a stroke, what you'd like the car to do is something reasonable," said Snyder. "You don’t want to slam on the brakes and you don’t want to run into anyone else, you want to pull off the road and come to a graceful stop and call 9-1-1 . . . So there is a brief period where a car must perform autonomously, and that’s the shorter term goal that this work is moving towards," Snyder said.
A paper describing the research will be presented in Anchorage, Alaska in early May at the IEEE International Conference on Robotics and Automation, chaired by Snyder.