Just last week, I wrote about creating infrastructure to make sure that Autonomous Vehicles (AVs) don’t deviate from a set path, thereby avoiding pedestrians and accidents. As we introduce such critical disruptive technologies into the world, we must also be aware that they come with a learning cost. AVs may be linked to road fatalities for some time to come (though possibly at a much lower rate than human drivers).
Earlier this week, a self-driving car from Uber was involved in an accident in Arizona in which a pedestrian was, unfortunately, killed. While we don’t yet have concrete details of the accident, the mere fact that it occurred despite the presence of an operator in the self-driving car underscores the need to build supporting infrastructure for AVs. After all, safety is of paramount importance! From initial reports, it appeared that the victim suddenly emerged out of the blue on her bike to cross the road. It is still not clear whether the AV or the operator had enough reaction time to apply the brakes and avoid the collision. Irrespective of the cause, we are now trading human life at the altar of the AV’s learning. While there are no studies yet on how many accidents AVs will cause in the future, I hope the technology will prove superior at avoiding accidents and will eventually save more human lives on the road (especially since 94% of road accidents are caused by human error).
Photo credit: SecureEnergy.org — http://secureenergy.org/wp-content/uploads/2016/09/Autonomous-Vehicle.jpg
We are now at the cusp of blending AI, machine learning, and high-tech engineering into one single machine: the self-driving car. However, there is one thing I am sure an AV cannot learn, and that is human remorse after making a mistake.
Can AVs build enough learning into their algorithms to understand that they were involved in taking a human life? When humans are involved in bad accidents, these are most often life-changing experiences. They emerge (hopefully) as more empathetic human beings, and many even change their driving behavior and become safer drivers. Will an AV similarly change its behavior after killing a human? I assume an AV makes no distinction between hitting a human and hitting a rock. How far are we willing to go to bring AV technology into the world for the greater good? We may have invested in far more dangerous technologies, such as nuclear power, yet to the public eye, people dying on the road appears to make a far more profound impact.
Before we start testing AVs on real roads, we need to focus on building the requisite supporting infrastructure to avoid such accidents in the future. For our part, self-driving car technology calls for a change in how we behave on roads: we must not only accept AVs on our roads but also be cognizant that AVs can cause accidents, just like us humans! Just as civil engineering infrastructure for other transportation technologies, such as airplanes and railroads, ensures that humans don’t come into their paths, we must build similarly safe infrastructure, such as overbridges or no-human zones along AV routes, when we introduce AVs on the roads.
The self-driving car still doesn’t know that what it did was wrong, but the future of autonomous vehicles must not be paved with the gravestones of human lives.
Disclaimer: The views of the author are personal.