If your Ford had a Matthew McConaughey, it would be a Lincoln

A pseudo-philosophical look at robot car justice

Finally, an opinion on autonomous cars we can all get behind! Let’s thank Forbes for the soapbox they so kindly lent this automotive expert economist to stand on.

Ethics Won’t Be a Big Problem for Driverless Cars

Sorry, but ethics is a huge problem for AVs. In fact, most would say it is the BIGGEST obstacle to AVs being adopted at any kind of critical mass. Me included. Let’s dig in.


The central tenet our hero moralist arrives at is that AVs don’t have to be the best drivers ever; they just have to suck less than your average human. How hard can that be? After all, a human did this.

The real meat and potatoes of this logical fallacy arrives when the author declares that good judgment is not really a necessity for operating motor vehicles.

“The idea that humans will act ethically and wisely while driving is an absurd and false assumption.”


A very good point! However, a critical distinction is whether the humans are making good or bad decisions behind the wheel. Good drivers generally don’t crash, and thereby avoid needlessly squashing fellow motorists and their cars. Bad drivers, when they do these things, go to jail. I believe “vehicular manslaughter” is the trendy vernacular.

Justice, in some way, is served. The topic of what constitutes fair punishment is not ripe for discussion in this article; all that matters is that BAD drivers are punished and, after rehabilitation, hopefully learn to become GOOD drivers. Good drivers get to keep their privilege of having a happy hour punch card. (That’s a driver’s license.)


You can’t serve justice to a machine. Ever tried kicking the crap out of your printer? It just works even worse than before. And don’t you think we are already giving the driving robots a little too much credit? For muck’s sake, it’s like all the Consumer Electronics people think the robots are going to be better drivers than Ayrton Senna. Except they’ll never speed. They’ll never take chances. They’ll talk politely, in binary of course, to all their robot friends and say “No, you sir, you were here first! By golly, I think you have the right of way. After you, and good day!” They’ll never get their 1’s and 0’s mixed up, and they’ll never crash.

Just wait, I say. These aren’t just computers; they are a mix of mechanical and digital machines. Mechanical things need maintenance, and mechanical things break. And it doesn’t really matter if said breakage tosses a car safely into a sunny meadow on the side of the road or sends it careening into a school bus full of underprivileged puppies. Autonomous accidents may happen only once for every hundred stupid-people accidents. But once is enough.


Once that happens, I can assure you, there will be calls for justice. Someone WILL be held responsible. Will it be the owner, who was FaceTiming in the backseat? Or the mechanic who didn’t notice a damaged control arm the last time the car was in for service? Or will it be the manufacturer, who built this car and said “Hey everyone, our programmers say it’s safe, so let’s start the queue for orders at the Wendy’s down the street from our new Google Auto franchise”?

Right now, responsibility behind the wheel is mostly cut and dried. You crash, you lose. I don’t think society is ready to absolve everyone of guilt, and hold blameless 100% of the actors, in the event of a robot killing a human. It certainly won’t feel like justice to go all Office Space on a Nissan Leaf when it runs over your mom, even though statistically we are all safer because of it.


Ethics is certainly a big problem for driverless cars, and apparently it is for some economists too.
