Is there a better way for autonomous cars to learn to drive?
Self-driving cars are already being tested on public roads, but even their biggest cheerleaders would admit that they are not yet ready to handle all road situations.
To help improve the training of driverless cars, artificial intelligence specialist Wayve has taken an approach known as deep reinforcement learning, doing away with hand-written rules and 3D maps and instead teaching a car through simple trial and error — much like how you learnt to ride a bicycle. Rather than fitting the car with a complex system of cameras and sensors, the company used a single monocular camera image as input.
As this video shows, Wayve successfully taught the car to follow a lane from scratch in just 15-20 minutes, using only the moments when the safety driver took over as training feedback.
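To make the idea concrete, below is a minimal, purely illustrative sketch of that kind of trial-and-error loop: a policy steers from a single camera frame, an episode ends when the safety driver takes over, and the reward is simply how far the car drove before that happened. The toy simulator, the linear Gaussian policy and all names (camera_frame, run_episode, reinforce_update) are hypothetical stand-ins, not Wayve's actual code or network.

```python
import numpy as np

# Illustrative sketch only: a toy "lane" simulator stands in for the real car,
# and a linear Gaussian policy over a flattened camera frame stands in for a
# deep network. The reward is 1 per step driven before the safety driver
# (simulated here as a lane-offset threshold) takes over.

rng = np.random.default_rng(0)

IMG_DIM = 64          # flattened size of the (toy) monocular camera frame
SIGMA = 0.3           # exploration noise on the steering command
LR = 1e-3             # policy-gradient learning rate

w = np.zeros(IMG_DIM)  # weights of the linear steering policy


def camera_frame(lane_offset: float) -> np.ndarray:
    """Toy stand-in for the monocular camera: encodes the lane offset
    (which a real system would have to infer from pixels) plus noise."""
    frame = rng.normal(0.0, 0.1, IMG_DIM)
    frame[0] = lane_offset
    return frame


def run_episode():
    """Drive until the simulated safety driver takes over, i.e. the car
    drifts too far from the lane centre, or a step limit is reached."""
    lane_offset, trajectory, steps = 0.0, [], 0
    while abs(lane_offset) < 1.0 and steps < 200:   # takeover threshold
        x = camera_frame(lane_offset)
        mu = float(w @ x)                           # mean steering command
        a = rng.normal(mu, SIGMA)                   # sampled exploratory action
        # Toy dynamics: offset drifts and is pushed by the steering action.
        lane_offset = 1.1 * lane_offset + 0.5 * a + rng.normal(0.0, 0.02)
        trajectory.append((x, a, mu, 1.0))          # reward 1 per step driven
        steps += 1
    return trajectory


def reinforce_update(trajectory):
    """REINFORCE: nudge the policy towards actions taken in long episodes."""
    global w
    rewards = [r for (_, _, _, r) in trajectory]
    returns = np.cumsum(rewards[::-1])[::-1]        # reward-to-go per step
    for (x, a, mu, _), G in zip(trajectory, returns):
        w += LR * ((a - mu) / SIGMA**2) * x * G


for episode in range(50):
    traj = run_episode()
    reinforce_update(traj)
    print(f"episode {episode:02d}: drove {len(traj)} steps before takeover")
```

The key point the sketch captures is that no maps or hand-written rules appear anywhere: the only learning signal is how long the car drives before a takeover, which is exactly the feedback the article says Wayve used.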
The potential implications of this approach are huge, the company believes.
“Imagine deploying a fleet of autonomous cars, with a driving algorithm which initially is 95% the quality of a human driver,” Wayve said. Such a system would be almost capable of dealing with junctions, roundabouts and traffic lights. After a full day of driving and learning from each takeover by the human safety driver, perhaps the system would improve to 96%. After a week, 98%. After a month, 99%.
“After a few months, the system may be super-human, having benefited from the feedback of many different safety drivers,” the company suggested.
With today’s self-driving cars “stuck at good but not good enough performance levels”, reinforcement learning presents a way of quickly improving driving algorithms to make the vehicles roadworthy, Wayve concluded.
Wayve is now working on scaling the technology to more complex driving tasks.