Apparently, a self-driving Tesla killed someone. It was thought to be impossible. Computers don’t make mistakes. They react a million times faster than a person. They’re not going to be texting. Or messing with the radio. Or eating. Or fighting with the kids in the back seat. Or getting a blowjob (if people still have sex in the modern world). And so on.
On the other hand, almost everyone thought it was possible. Inevitable even. Computers always go on the fritz. Hackers do stuff. Batteries die.
When the infallible machines fuck up, we feel bad. When people fuck up, we don’t worry about it. We expect it. We can live with it.
(Or is it the other way around?)
Apparently, even with one fatality, the Tesla still comes out safer than a human-driven car: one death every 13 million miles versus every 9 million (or something like that, according to Peter Diamandis).
What’s left out here amidst the melee is that cars are horrible. I know we like them. And they are fun for certain people. But really, in every measurable way, they make people’s lives worse.
Therein lies the real Tesla problem. Probably they are amazing, and they will allow people to keep driving around in cars for longer than they would using the old technology. They will distract us from the infinitely better option: feet.
The more conventional problem that comes up about the self-driving car is this: What happens when a self-driving car needs to decide between mowing down eight old people and three kids? What does it choose?
The first real problem is that the scenario is hard to conceive of. None of us driving a car has ever faced that decision tree. Nor will we. Most of us would hit the brakes and try to avoid both outcomes. But machines are faster and would think different…
I get the intellectual exercise when it comes to programming the machines that will inevitably destroy us. At least we can try to put it off for a while.
How does a machine make utilitarian decisions? It will make the decision we tell it to, no?
(Until it decides to decide for itself. Then we’re toast. And people are still inexplicably hell-bent on making this happen.)
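For what it’s worth, “the decision we tell it to” isn’t a metaphor. Somewhere in the stack it’s just a rule somebody typed. Here’s a toy sketch of what that looks like — everything in it (the names, the weights, the whole setup) is made up by me, not anything resembling actual autonomous-vehicle code:

```python
# A made-up, minimal sketch of "the decision we tell it to make."
# Purely illustrative: a hard-coded utilitarian rule, written by a person.

def harm(outcome):
    """Score an outcome by how much it hurts people.
    The weights are whatever the programmer decided -- that's the point."""
    return sum(person["weight"] for person in outcome["casualties"])

def choose(outcomes):
    """Pick whichever outcome the rule says is least bad."""
    return min(outcomes, key=harm)

# Eight old people vs. three kids, weighted however we chose to weight them.
swerve_left = {"label": "hit the eight", "casualties": [{"weight": 1.0}] * 8}
swerve_right = {"label": "hit the three", "casualties": [{"weight": 1.0}] * 3}

print(choose([swerve_left, swerve_right])["label"])  # -> "hit the three"
```

Change one weight and the car changes its mind. The machine isn’t doing ethics; it’s doing arithmetic on whatever ethics we handed it.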
What decision do we make? And is there a better way of asking the question that invokes a realistic scenario?
My brother told me this one. A man walks into a hospital with a complaint. We don’t know what it is, but it’s not serious. Everything on the inside is ticking along just fine. But elsewhere in the hospital are ten people in desperate need. One needs a liver, one a heart, one a kidney. And so on.
Does it make more sense to snuff out the healthy guy to save ten lives?
Simplistic, obviously. How do we know which lives are worth more? Is everyone created equal? Or are some people more valuable to the world?
But still interesting. This is the essence of medical utilitarianism: you do what’s best for the greatest number of people.
It’s a fine example, but it doesn’t get us any closer to our self-driving car dilemma. Soon the machines will know more than we do, and, in theory at least, we will need to tell them how to decide things. I don’t know which philosophers they will consult to find out the right thing to do.
I doubt they will ask me.
What does it mean that a Tesla killed someone?
It means it’s still a car.