Wednesday, November 29, 2023

Hypotheticals

This is Joseph.

I wanted to return to a very old topic. Hypotheticals used to elicit moral intuitions are usually stated in a very stylized way that does not really convey how real decisions are made. Let us consider the famous trolley problem:
There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two (and only two) options:

Do nothing, in which case the trolley will kill the five people on the main track.
Pull the lever, diverting the trolley onto the side track where it will kill one person. 
Which is the more ethical option? Or, more simply: What is the right thing to do?

The problem is that these hypotheticals strip out the complexity and uncertainty of real decision making. It is like the famous "use torture to find the location of a nuclear bomb" hypothetical. In a real situation nobody knows for sure that torture will work. The existence of the actual (working) bomb may not be clear. If everybody knew all of this with high confidence, I wonder how you would ever get a jury of people (who were just freshly saved by the defendant) to convict; a parade seems more likely than a conviction. And I wonder how much the law against torture would change the decisions of the person involved if they knew, for certain, that millions of lives could be saved. I doubt it would much, based on what we see in actual dynamic combat situations. This example provides roughly zero insight into the confusion and terror of real situations. I remember 9/11 and just how many false and confusing narratives were circulating that day. Confusion is what a crisis looks like, not precise clarity.

We see the same simplifications in video games. If I play a Total War game, all of the important stuff is missing (how do I feed my army?) and the hardest part of battle is simplified away (how do I give orders to a screaming mass of engaged soldiers hundreds of feet away?). Bret Devereaux has a great series on this point.

But what I think is most missing here is the hard part of these problems: how to manage uncertainty. Recall the famous story about Sam Bankman-Fried:

“She recalled that Bankman-Fried once posed a coin-flip scenario where if the coin landed on tails, the world would end. But if the coin landed on heads, ‘the world would be twice as good.’ Bankman-Fried said he would take the bet.”

But the hard problem is not calculating rational expectations when you know the outcome sets and probabilities exactly. It is dealing with the uncertainty that makes those calculations hard in the first place.
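To make that concrete, here is a minimal Python sketch (all numbers are illustrative assumptions, not anything Bankman-Fried actually computed) contrasting the easy case, where the probability is known exactly, with a case where the probability itself is only known to lie in some range:

```python
import random

# Known-probability case: the coin is fair, tails ends the world (value 0),
# heads makes the world twice as good (value 2). The arithmetic is trivial.
p_heads = 0.5
ev = p_heads * 2.0 + (1 - p_heads) * 0.0
print(f"expected value with known p: {ev:.2f}")  # 1.00 -- a 'fair' bet

# Realistic case: suppose all we can say is that p_heads lies somewhere
# between 0.3 and 0.7 (an assumed interval). Now the expected value itself
# is uncertain, and whether it beats the status-quo value of 1 is unclear.
random.seed(0)
evs = sorted(random.uniform(0.3, 0.7) * 2.0 for _ in range(100_000))
print(f"95% interval for the expected value: [{evs[2_500]:.2f}, {evs[97_500]:.2f}]")
```

The interval straddles the status quo, so under even mild uncertainty about the coin, the "obvious" calculation no longer tells you whether to take the bet.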

Climate change would be much easier if we knew precisely the difference in consequences between a 2.1C and a 2.3C temperature change, knew precisely how much carbon needed to be eliminated to be sure of it, and could project the pace of green technology well enough to plan exactly what should be phased in or out. Instead we have complex discussions about electric cars, which have a higher upfront carbon cost in exchange for lower lifetime carbon output. Given feedback effects, could the switch trigger a crisis before the benefits emerge? Unlikely, but who can be sure? If electric cars have a shorter lifespan, that would be bad, but we don't have a 25-year-old Tesla to look at (the company was founded in 2003), whereas there are 14 million ICE cars on the road.
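As a toy illustration of how those unknowns compound, here is a sketch of the electric-car comparison. Every parameter below (the upfront battery penalty, per-mile emissions, vehicle lifespans) is an invented placeholder, since the real values are exactly what is in dispute:

```python
import random

random.seed(1)

def per_mile_gap():
    """Return (EV - ICE) lifetime emissions per mile, in kg CO2.
    Every distribution below is an illustrative assumption, not real data."""
    ev_upfront_kg = random.uniform(8_000, 16_000)     # extra battery manufacturing
    ev_life_miles = random.uniform(100_000, 200_000)  # unknown: no 25-year-old Teslas
    ev_kg_per_mile = random.uniform(0.10, 0.25)       # depends heavily on the grid mix
    ice_kg_per_mile = random.uniform(0.30, 0.45)
    ev = ev_upfront_kg / ev_life_miles + ev_kg_per_mile  # amortize the upfront cost
    return ev - ice_kg_per_mile

gaps = sorted(per_mile_gap() for _ in range(100_000))
print(f"EV worse in {sum(g > 0 for g in gaps) / len(gaps):.0%} of draws")
print(f"95% interval for the per-mile gap: [{gaps[2_500]:.2f}, {gaps[97_500]:.2f}] kg")
```

Even with the EV winning in most draws, the interval on the gap is wide, and every one of those input ranges could itself be argued over.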

So I suspect we have very large confidence intervals on a lot of these calculations. That, I think, suggests some humility in making these decisions. The trolley problem looks different if you can dash to try to free the one person after flipping the switch (even if it is hard) but could never cut five people free.

Given that, some intuition on probability theory would seem to be a good thing to cultivate. Let's check in on that. Oh. Oh, no. Philosophy Bear provides some useful context about timing that rather dramatically shifts the Bayesian priors, but the idea of using priors alone after you have data seems to make no sense at all.
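On that last point, here is a tiny sketch of why the prior stops mattering once data arrives; the priors and observations are invented purely for illustration:

```python
# Beta-Bernoulli updating: a conjugate-prior example with made-up numbers.
# Prior Beta(a, b); after seeing k successes in n trials the posterior is
# Beta(a + k, b + n - k), so the data terms quickly dominate the prior ones.

def posterior_mean(a, b, k, n):
    return (a + k) / (a + b + n)

k, n = 730, 1_000  # observed successes / trials (invented)
for a, b in [(1, 1), (9, 1), (1, 9)]:  # three very different priors
    print(f"prior Beta({a},{b}): posterior mean = {posterior_mean(a, b, k, n):.3f}")
# All three land near 0.73: reasoning from the prior alone, while ignoring
# the thousand observations, would throw away almost all of the information.
```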

So, what am I proposing? We should start thinking a lot more about uncertainty, and a lot less about cases where all of the unknowns are sharply understood.

1 comment:

  1. I think we need to have "confidence values" behind every cell on every spreadsheet. (I don't think I'm expressing myself very well here, apologies.) The point is that at every spreadsheet's conclusion, the Big Insight, there'd be an auto-generated confidence estimate. A z-axis to every graph, a reveal of where well-intentioned guesstimates cause the overall conclusion to turn to smoke.
