Sparky, you dumb, dumb fuck.
Image: ridesparky.com (https://www.ridesparky.com/wp-content/uploads/2016/11/Trolley-left.jpg)

The trolley problem is one of the most flawed ethical thought experiments imaginable, and I’m sick and tired of seeing it trotted out as some kind of gold standard for understanding morality and moral decision-making, human or otherwise.

(I originally posted this as a comment to Torch’s FP article yesterday about the big autonomous car study, but it got really long, so I decided to make it an Oppo post.)

It’s mind-boggling to me that a study of this magnitude, with a data set this impressive (funded at the cost of, well, a lot, I assume), went forward with an underlying experiment that is fundamentally bullshit.

Daniel Engber over on Slate has a more complete discussion of why the TP isn’t even worthy of your ass (see what I did there?). But, in short, it forces its subjects into a purely utilitarian ethical universe, one where the full range of human decision-making is arbitrarily delimited in Manichean ways that make no sense given the complexity of the real world (as so many other commenters on the post pointed out) and how actual humans behave in it.


A thought experiment about morality that has minimal overlap with the real moral universe isn’t much of an experiment, especially when the researchers suggest that its conclusions should help direct how we program our AI. And yet, here we are! The either/or answer set makes the Trolley Problem fantastic for large-scale, statistically driven quantitative social science like this, especially science that flattens cultural differences, but the results and conclusions derived from any bad experiment are, by their nature, going to be bad.

We prefer to run over fat people and cats instead of doctors and babies? Robot cars in China should favor saving old people over kids? These are the kinds of in-depth insights we’re supposed to glean from all this? This is the kind of survey data that’s supposed to inform AI about how to behave in the world on our behalf?

Give me a fucking break.

The whole point of AI-controlled autonomous vehicles is that they should be good enough to avoid the very situations posed in the Trolley Problem in the first place. And I think that AI should be sophisticated enough to perceive the complexities of the world it operates in beyond arbitrary, utilitarian-universe ethical structures, just like humans do. Hard-coding a rigid decision-making ethical hierarchy with cats at the bottom and babies at the top is the dumbest kind of AI I can imagine. I’m not getting in that car, no thank you.
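Just to be clear about what I mean by “dumbest kind of AI,” here’s a deliberately silly Python sketch of what a hard-coded survey-derived hierarchy would look like if you took the study’s results literally. Every name and ranking in it is hypothetical, invented purely for illustration; no actual autonomous-vehicle stack works this way, which is exactly the point.

```python
# A deliberately dumb caricature: a hard-coded "who do we save" ranking
# taken literally from trolley-problem-style survey answers.
# All names and rankings are hypothetical, for illustration only;
# no real autonomous-vehicle system is (or should be) built like this.

# Lower number = save first, higher number = run over first.
MORAL_HIERARCHY = {
    "baby": 0,
    "doctor": 1,
    "pedestrian": 2,
    "elderly_person": 3,  # swap with "baby" for the supposed regional preference, apparently
    "cat": 4,
}

def choose_victim(in_the_path):
    """Pick whoever ranks lowest on the hard-coded list. This is the
    rigid ethical hierarchy I'm complaining about."""
    return max(in_the_path, key=lambda p: MORAL_HIERARCHY.get(p, 99))

# The real problem: the car is already in a situation it should never be in.
print(choose_victim(["cat", "doctor", "baby"]))  # -> 'cat'
```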


Of course, engineers working on this kind of AI wouldn’t dream of doing it that way, and I think even now its development is well ahead of that curve. So it all raises the question: what’s the point of a massive study like this, other than reporting on how captive subjects responded to questions about situations in a moral universe that doesn’t even really exist?

Don’t even get me started on whether or not we should consider robots ethical actors at all.