Pathological utilitometer thought experiment
I’ve been doing thought experiments involving a utilitometer: a device capable of measuring the utility of the universe, including sums-over-time and counterfactuals (what-if extrapolations), for any given utility function, even one given as a generic statement such as “what I value.” Things this model ignores: nonutilitarianism, complexity, contradictions, the unknowability of true utility functions, the inability to simulate and measure counterfactual universes, and so on.
Unfortunately, I believe I’ve fallen into a pathological mindset from thinking about this utilitometer. Given the abilities of the device, you’d want to input your utility function, take a sum-over-time from the beginning to the end of the universe, and then start checking counterfactuals (“I buy a new car,” “I donate all my money to nonprofits,” “I move to California,” etc.) to see whether the total goes up or down.
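To make the procedure concrete, here is a minimal sketch of what the device’s interface might look like. Everything in it is assumed: the names (Utilitometer, sum_over_time, counterfactual, compare) are my own inventions, and the method bodies are stubs, since the device is impossible by construction.

```python
from dataclasses import dataclass
from typing import Callable

# "Universe" stands in for a complete state-history of the universe;
# no such object is available in practice.
Universe = object

@dataclass
class Utilitometer:
    # The utility function under measurement: utility of a universe at time t.
    utility_fn: Callable[[Universe, float], float]

    def sum_over_time(self, u: Universe, t_start: float, t_end: float) -> float:
        """Magic: integrate utility_fn over [t_start, t_end]."""
        raise NotImplementedError("requires omniscience")

    def counterfactual(self, u: Universe, intervention: str) -> Universe:
        """Magic: return the universe-history in which the intervention occurs."""
        raise NotImplementedError("requires simulating counterfactual universes")

    def compare(self, u: Universe, intervention: str, t_end: float) -> float:
        """Positive result means the intervention raises total utility."""
        baseline = self.sum_over_time(u, 0.0, t_end)
        alternative = self.sum_over_time(self.counterfactual(u, intervention), 0.0, t_end)
        return alternative - baseline
```

With such an object in hand, decision-making would reduce to calls like meter.compare(world, "I move to California", END_OF_UNIVERSE) and picking whichever intervention scores highest.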
It seems quite obvious that the sum at the end of the universe is the measure that makes the most sense, and I can’t see any reason to take the measure at the end of an action, as typical discussions of utility do. Here’s an example: “The expected utility from moving to California is negative due to the high cost of living and the fact that I would not have a job.” But a sum over all time might show that the move had positive utility, because I meet someone, or do something, or learn something that improves the rest of my life; without the utilitometer, I would have missed all of those add-on effects. The device allows me to fill in all of the unknown details and unintended consequences.
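A toy calculation shows how the two measures can disagree. The numbers below are made up purely for illustration: yearly utility deltas from moving, relative to staying put, with early costs followed by later gains.

```python
# Hypothetical yearly utility deltas from moving to California,
# relative to staying put (made-up numbers, for illustration only).
yearly_delta = [-10, -8, +3, +7, +7, +7, +7]

end_of_action = sum(yearly_delta[:2])  # measured when the "action" ends: -18
sum_over_time = sum(yearly_delta)      # measured over the whole future: +13

print(end_of_action, sum_over_time)    # -18 13: the short measure gets the sign wrong
```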
Where this thinking becomes a problem is when I realize I have no such device but desperately want one, so that I can incorporate the unknown and the unintended and know what path I should take to maximize my life, rather than having the short, narrow view of the future I have now. In essence, it places higher utility on ‘being good at calculating expected utility’ than on almost any other action I could take. If I could just build a true utilitometer that measures everything, the expected utility would be enormous! (“Push button to improve universe.”) And even incremental steps along the way could have amazing payoffs.
Even though a utilitometer as described is impossible, thinking about it has still altered my values: steps toward creating it now rank above other, seemingly more realistic options (buying a new car, moving to California, etc.). I previously asked, “How much time and effort should we put into improving our models and predictions, given that we will have to model and predict the answer to this question?” and acknowledged that it was circular and unanswerable. The pathology comes from entering the circle and starting a feedback loop; anything less than perfect prediction means wasting the entire future.
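The circularity can be stated as a recursion with no base case. The function below is just an illustration of the regress, not a claim about any real decision procedure:

```python
def prediction_effort(depth: int = 0) -> float:
    # "How much effort should we put into improving our predictions?"
    # Answering requires predicting the value of better predictions,
    # which is itself a question about prediction effort, and so on.
    # There is no base case, so in practice this raises RecursionError:
    # the loop consumes all resources before any action gets taken.
    return prediction_effort(depth + 1)
```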