I believe that Sam Harris has already mastered this thought experiment. Paraphrased from his debate with William Lane Craig: “There exists a hypothetical universe in which there is the greatest possible amount of suffering. Actions that move us away from that universe are considered good; actions that move us toward that universe are considered bad.”
This is why I find Harris frustrating. He’s stating something pretty much everyone agrees with, but they all make different substitutions for the variable “suffering.” And then Harris is vague about what he personally plugs in.
At least as paraphrased here, the definition of “move towards” is very unclear. Is it a universe with more suffering? A universe with more suffering right now? A universe with more net present suffering, according to some discount rate? What if I move to a universe with more suffering both right now and under every possible discount rate, assuming no further action, but in which future actions that greatly reduce suffering are made easier? (In other words, does this system get stuck in local optima?)
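To make the discounting worry concrete, here is a minimal sketch (the notation is mine, not Harris's): write the net present suffering of a trajectory as

$$S_{\text{NPV}} = \sum_{t=0}^{\infty} \frac{s_t}{(1+r)^t},$$

where $s_t$ is the suffering at time $t$ and $r$ is a discount rate. Two futures can swap rankings depending on the choice of $r$ (including $r = 0$, i.e. no discounting at all), and a rule that only ever takes the step that lowers $S_{\text{NPV}}$ right now is exactly the kind of greedy procedure that can get stuck in a local optimum.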
I think there is much that this approach fails to solve, even if we all agree on how to measure suffering.
(Included in “how to measure suffering” are complications like average vs. total utilitarianism, how to handle existential risks, and how to do probability math over outcomes that merely carry some likelihood of suffering.)
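As a sketch of why that measurement question matters (again, my notation, not anything Harris commits to): total utilitarianism scores a population as $S_{\text{total}} = \sum_i s_i$, average utilitarianism as $S_{\text{avg}} = \frac{1}{N} \sum_i s_i$, and under uncertainty we presumably want something like the expectation $\mathbb{E}[S] = \sum_k p_k S_k$ over possible outcomes $k$ with probabilities $p_k$. These can disagree: adding a person who suffers less than the current average lowers $S_{\text{avg}}$ but raises $S_{\text{total}}$, so “move away from the worst universe” gives different instructions depending on which measure you plug in.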