My utility function says SPECKS. I thought it was because it was rounding the badness of a dust speck down to zero.
But if I modify the problem to be 3^^^3 specks split amongst a million people and delivered to their eyes at a rate of one per second for the rest of their lives, it says TORTURE.
If the badness of specks adds up when applied to a single person, then a single dust speck must have non-zero badness. Obviously, there’s a bug in my utility function.
If I drink 10 liters of water in an hour, I will die from water intoxication, which is bad. But this doesn’t mean that drinking water is always bad—on the contrary, I think we’ll agree that drinking some water every once in a while is good.
Utility functions don’t have to be linear—or even monotonic—over repeated actions.
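The water example can be made concrete with a toy utility function. The functional form and coefficients below are made up purely for illustration:

```python
def water_utility(liters_per_hour: float) -> float:
    """Toy utility of drinking water: hydration benefit minus
    intoxication harm. The numbers are invented for illustration."""
    benefit = 5.0 * liters_per_hour    # benefit roughly linear in amount
    harm = liters_per_hour ** 3        # harm grows superlinearly
    return benefit - harm

# Some water is good, ten liters in an hour is very bad:
# the function rises and then falls, so it is not monotonic.
```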
With that said, I agree with your conclusion that a single dust speck has non-zero (in particular, positive) badness.
You know what? You are absolutely right.
If the background rate at which dust specks enter eyes is, say, once per day, then an additional dust speck is barely even noticeable. The 3^^^3 people probably wouldn’t even be able to tell that they got an “extra” dust speck, even if they were keeping an Excel spreadsheet, making an entry every time they got a dust speck in their eye, and running the relevant statistics on it. I think I just switched back to SPECKS. If a person can’t be sure that something even happened to them, my utility function is rounding it off to zero.
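A back-of-the-envelope calculation supports the undetectability claim. Assuming (my numbers, not the original problem’s) a one-speck-per-day Poisson background over a 70-year life:

```python
import math

days = 70 * 365                  # lifetime speck count at one per day
expected = days                  # Poisson mean
std_dev = math.sqrt(expected)    # Poisson standard deviation, about 160
extra = 1                        # the single "extra" speck

# The extra speck is roughly two orders of magnitude below the
# noise floor, so no spreadsheet statistics would flag it.
signal_to_noise = extra / std_dev
```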
This may be already obvious to you, but such a utility function is incoherent (as made vivid by examples like the self-torturer).
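A minimal sketch of that incoherence, with made-up numbers: if any per-step cost below some perceptual threshold is rounded to zero, then a path made of individually imperceptible steps looks free, no matter how bad its true total is.

```python
THRESHOLD = 5  # disutility (in arbitrary micro-units) below this rounds to zero

def perceived(cost: int) -> int:
    """The buggy rounding rule: sub-threshold costs count as nothing."""
    return 0 if cost < THRESHOLD else cost

step_cost = 1      # each increment is imperceptible on its own
steps = 10_000     # but there are many increments

perceived_total = sum(perceived(step_cost) for _ in range(steps))
true_total = step_cost * steps

# The rounding rule judges the whole path costless, while the
# true accumulated cost is large: the two judgements conflict.
```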
I expect that several of my brain modules are reaching incompatible conclusions, each selectively attending to different inputs of the problem.
My thinking was similar to yours—it feels less like I’m applying scope insensitivity and more like I’m rounding the disutility of specks down due to their ubiquity, or their severity relative to torture, or the fact that the effects are so dispersed. If one situation goes unnoticed, lost in the background noise, while another irreparably damages someone’s mind, then that should have some impact on the utility function. My intuition tells me that this justifies rounding the impact of a speck down to zero, that the difference is a difference of kind, not of degree, that I should treat these as fundamentally different. At the same time, like Vincent, I’m inclined to assign non-zero disutility value to a speck.
One brain, two modules, two incompatible judgements. I’m willing to entertain the possibility that this is a bug. But I’m not ready yet to declare one module the victor.