Google’s Self-Driving Car Causes First Accident, As Programmers Try To Balance Human Simulacrum And Perfection
1 March 2016
Accidents happen. But when a collision between a Google self-driving car and another vehicle was blamed, for the first time, on the software doing the driving rather than on a human, it prompted another round of questions and answers. Google’s response highlights an aspect of this shift to autonomous vehicles that is relevant for those developing data mining and enhancement algorithms. Yes, accidents happen, but software (perhaps unlike human drivers) can be designed to learn and adapt.
Thom Hickey’s recent blog post on the evolution of OCLC Research’s algorithms for matching names for people and organizations in WorldCat with authority data in VIAF (the Virtual International Authority File) is a case in point. Over time the algorithm has been improved to take into account more data elements and more context for the names being matched, and its computations of confidence in the matches found have been refined. Evaluation of the matching results is ongoing, and when more data or altered computations produce better results, the algorithm changes and those improvements stick.
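To make the idea concrete, here is a minimal sketch of confidence-scored name matching in the spirit described above. The field names, weights, and threshold are invented for illustration; they are not taken from OCLC’s actual implementation.

```python
# Hypothetical sketch of multi-element name matching with a confidence
# score. Weights and fields (birth_year, coauthors) are assumptions for
# illustration, not OCLC's real data model.
from difflib import SequenceMatcher

def match_confidence(record, authority):
    """Score how likely two name records refer to the same entity (0.0-1.0)."""
    score = 0.0
    # Element 1: string similarity of the names themselves.
    score += 0.5 * SequenceMatcher(None, record["name"].lower(),
                                   authority["name"].lower()).ratio()
    # Element 2: an agreeing birth year is strong supporting evidence.
    if record.get("birth_year") and record.get("birth_year") == authority.get("birth_year"):
        score += 0.3
    # Element 3: contextual overlap, e.g. shared co-author names.
    if set(record.get("coauthors", [])) & set(authority.get("coauthors", [])):
        score += 0.2
    return round(score, 3)

def best_match(record, authorities, threshold=0.7):
    """Return the highest-confidence authority above the threshold, or None."""
    confidence, best = max((match_confidence(record, a), a) for a in authorities)
    return best if confidence >= threshold else None
```

Improving the algorithm then means adding elements to the scoring function or re-weighting the existing ones, and re-running the evaluation to confirm the change helps, which is the feedback loop the post describes.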
Reports of matching inconsistencies provide useful data and test cases for evaluating algorithm improvements. I was surprised and pleased to see that the California Department of Motor Vehicles is on top of this for self-driving car “mismatches”, with its “Report of Traffic Accident Involving an Autonomous Vehicle” form. Will a software upgrade help the car fill out the form and submit it automatically, during or after the next accident?