I know I promised…

…but here’s one more really fascinating article about self-driving cars.  Written by Matt Windsor for UAB News, it focuses on a truly interesting question raised by this technology:

Will Your Self-Driving Car Be Programmed to Kill You?

OK, the headline may seem…cheesy, but the ideas explored are actually very serious: What if we reach a point where self-driving cars are the norm and a situation arises in which the car has to choose between saving you and saving others?

The gist of the article is contained in this paragraph (I have put in bold what I consider the most important question raised):

Google’s cars can already handle real-world hazards, such as cars’ suddenly swerving in front of them. But in some situations, a crash is unavoidable. (In fact, Google’s cars have been in dozens of minor accidents, all of which the company blames on human drivers.) How will a Google car, or an ultra-safe Volvo, be programmed to handle a no-win situation — a blown tire, perhaps — where it must choose between swerving into oncoming traffic or steering directly into a retaining wall? The computers will certainly be fast enough to make a reasoned judgment within milliseconds. They would have time to scan the cars ahead and identify the one most likely to survive a collision, for example, or the one with the most other humans inside. **But should they be programmed to make the decision that is best for their owners?** Or the choice that does the least harm — even if that means choosing to slam into a retaining wall to avoid hitting an oncoming school bus? Who will make that call, and how will they decide?
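To see how starkly the two programming choices in that paragraph can diverge, here is a purely hypothetical sketch. Everything in it (the classes, the numbers, the survival and casualty estimates) is invented for illustration; no real self-driving system exposes anything like this. It simply shows that an “owner-first” policy and a “least-harm” policy can pick opposite actions in the very scenario the article describes:

```python
# Purely hypothetical sketch of the dilemma from the quoted paragraph:
# two possible "crash policies" applied to the same unavoidable crash.
# All names and numbers here are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Option:
    name: str                   # a candidate maneuver
    owner_survival: float       # estimated probability the car's occupants survive
    expected_casualties: float  # estimated total harm to everyone involved

def best_for_owner(options):
    """Policy A: protect the car's own occupants above all else."""
    return max(options, key=lambda o: o.owner_survival)

def least_total_harm(options):
    """Policy B: minimize expected harm to everyone, owner included."""
    return min(options, key=lambda o: o.expected_casualties)

# The no-win scenario from the article: blown tire, two bad choices.
options = [
    Option("steer into the retaining wall",
           owner_survival=0.4, expected_casualties=0.6),
    Option("swerve toward the oncoming school bus",
           owner_survival=0.9, expected_casualties=5.0),
]

print("Best for owner: ", best_for_owner(options).name)
print("Least total harm:", least_total_harm(options).name)
# The two policies choose opposite maneuvers here, which is exactly
# the question the article raises: who decides which policy ships?
```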

Certainly this is no small consideration, and it will no doubt be a source of great debate in the years to come.

Having read about how self-driving cars operate, my understanding is that they are very slow and careful, essentially “granny” drivers, and therefore, at least in city settings, would face very few situations that risk a serious collision.  On the other hand, some speculate that highway driving will allow self-driving cars to operate at speeds far in excess of current speed limits, and that self-driving vehicles might form a “chain” of cars, not unlike a train, to move along highways.  That obviously raises the possibility of something going very wrong at high speed.

Then again, the point may become moot: Some theorize that once self-driving cars become a reality, “human” drivers and their cars will be limited to driving only in certain areas.  Some have even speculated that human driving might be outlawed entirely on public roads.  Before you conclude this is the rise of some kind of automotive fascism, consider that if self-driving cars are successful, people will likely welcome the technology.  All the time you now spend driving could instead be devoted to watching TV, reading the newspaper or a book, or talking with friends on your phone.

Further, if it becomes the norm that no one actually drives, and assuming any software glitches are found and fixed, self-driving cars may end up in collision situations only on the rarest of occasions.

At least one hopes that becomes the case!

Regardless, the article is a fascinating look into yet another facet of what I’m increasingly certain will be the future of personal travel.