An Update From MIT
As an MIT alumnus and a former mathematics instructor at that institution, this Ombudsman is most pleased to share, discuss, and analyze the latest information on the controversial subject of self-driving cars. This information may be found in my December 2016 copy of the “MIT Technology Review.” The editor, Jason Pontin, has granted our column permission to quote directly from this MIT publication.
While this Ombudsman certainly recognizes the important technological advances that have been made over the last several decades through the use of computers, the theme of three articles in this column (including the present one) is the possible pitfalls that may result from strict reliance upon “Computer Control” when it may contradict “Human Judgment.”
In this Ombudsman’s very first article, in April of this year, I discussed a serious automobile accident that might very well have been caused by faulty software in the computer controls of a 2011 Mercedes-Benz SUV that crashed into a Metro-North train in Valhalla, NY, in 2015. This accident was termed “the deadliest in this commuter rail line’s 33-year history.”
On a happier note, in a subsequent article in this column, this writer discussed the courageous action of the pilot (portrayed in the film “Sully”) who purposefully ditched his airliner in the Hudson River in 2009, DEFYING the computer-controlled orders for him to land his plane at a nearby airport (when his own experience told him this would have been suicidal). The result of this courageous action, based on his judgment, was the saving of the lives of all 155 individuals on his plane! This fact was confirmed in a subsequent investigation and hearing of the NTSB.
In our current article, this Ombudsman will focus on those aspects of self-driving cars that present similar questions relating to issues of “Computer Control” versus “Human Judgment.”
We first quote directly from the December 2016 “MIT Technology Review” as follows: “Most carmakers, notably Tesla Motors, Audi, Mercedes-Benz, Volvo, and General Motors, and even a few big tech companies, including Google and (reportedly) Apple, are testing self-driving vehicles. Tesla cars drive themselves under many circumstances (although the company warns drivers to use the system only on highways and asks them to pay attention and keep their hands on the steering wheel).”
On the question of the reliability of driverless cars, MIT cites an expert named Rajkumar as follows: “Besides the reliability of a car’s software, Rajkumar worries that a driverless vehicle could be hacked. ‘We know about the terror attack in Nice, where the terrorist driver was mowing down hundreds of people. Imagine there is no driver in the vehicle,’ he says.”
Rajkumar also warns that fundamental progress is needed to get computers to interpret the real world more intelligently: “We are cognitive, sentient beings. We comprehend. We reason. And we take action. When you have automated vehicles, they are just programmed to do certain things for certain scenarios.”
We end this article with a most compelling example of the type of risk society takes at this time when using driverless cars.
MIT states: “An obvious example is how people react when they see a toy sitting in the road and conclude that a child might not be far away.”
Would a computer-driven car necessarily have this contingency built into its software?