So in this case my question is: why does Kaj suggest his proposal instead of using bounded utility?
Two reasons.
First, as was mentioned elsewhere in the thread, bounded utility seems to produce unwanted effects: for instance, we might want utility to be linear in human lives, and bounded utility seems to fail that.
Second, the way I arrived at this proposal was that RyanCarey asked me what my approach is for dealing with Pascal's Mugging. I replied that I just ignore probabilities that are small enough, which seems to be what most people do in practice. He objected that that seemed rather ad hoc and wanted a more principled approach, so I started thinking about why exactly it would make sense to ignore sufficiently small probabilities, and came up with this as a somewhat principled answer.
Admittedly, as a principled answer to which probabilities are actually small enough to ignore, this isn't all that satisfying, since it still depends on a rather arbitrary parameter. But it still seemed to point to some hidden assumptions behind utility maximization, as well as to raise some very interesting questions about what it is that we actually care about.
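As a minimal sketch of the kind of rule being described here (not Kaj's actual proposal, and with an illustrative threshold standing in for the arbitrary parameter mentioned above), "ignore sufficiently small probabilities" can be expressed as a truncated expected-utility calculation:

```python
# Minimal sketch of "ignore sufficiently small probabilities" in expected-utility
# terms. The threshold value is a stand-in for the arbitrary parameter discussed
# above; nothing here comes from the original proposal.

def truncated_expected_utility(outcomes, threshold=1e-9):
    """Expected utility over (probability, utility) pairs, discarding any
    outcome whose probability falls below the threshold."""
    return sum(p * u for p, u in outcomes if p >= threshold)

# A Pascal's-Mugging-style gamble: a tiny chance of an astronomically large payoff.
mugging = [(1e-20, 1e30), (1 - 1e-20, 0.0)]
ordinary = [(0.5, 10.0), (0.5, 0.0)]

print(truncated_expected_utility(mugging))   # 0.0 -- the mugger's offer is ignored
print(truncated_expected_utility(ordinary))  # 5.0 -- ordinary gambles are unaffected
```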
First, as was mentioned elsewhere in the thread, bounded utility seems to produce unwanted effects: for instance, we might want utility to be linear in human lives, and bounded utility seems to fail that.
This is not quite what happens. When you do UDT properly, the result is that the Tegmark level IV multiverse has finite capacity for human lives (when human lives are counted with 2^-{Kolmogorov complexity} weights, as they should be). Therefore the "bare" utility function has some kind of diminishing returns, but the "effective" utility function is roughly linear in human lives once you take their "measure of existence" into account.
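For what it's worth, here is a rough sketch of why the "finite capacity" claim goes through, under the standard assumption that K is the prefix-free Kolmogorov complexity (this framing is mine, not necessarily the commenter's):

```latex
% Sketch, assuming K is the prefix-free Kolmogorov complexity and u(x) is a
% bounded per-world utility contribution. By the Kraft inequality the 2^{-K}
% weights have a finite sum:
\[
  \sum_{x} 2^{-K(x)} \;\le\; 1 .
\]
% So a total ("effective") utility that is linear in measure-weighted lives,
\[
  U \;=\; \sum_{x} 2^{-K(x)}\, u(x) ,
\]
% stays finite even though the raw, unweighted number of lives across the
% multiverse can be unbounded.
```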
I consider it highly likely that bounded utility is the correct solution.
I agree that bounded utility implies that utility is not linear in human lives or in other similar matters.
But I have two problems with saying that we should try to get this property. First of all, no one in real life actually acts as though utility were linear in lives; that is exactly why we talk about scope insensitivity. This suggests that people's real utility functions, insofar as there are such things, are bounded.
Second, I think it won't be possible to have a logically coherent set of preferences if you do that (at least combined with your proposal), because you will lose the independence property.
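For reference, the independence property in question is (I take it) the von Neumann–Morgenstern independence axiom, which says that mixing both options with the same third lottery should never flip a preference:

```latex
% The von Neumann-Morgenstern independence axiom: mixing both options with the
% same third lottery C, in the same proportion p, must not reverse the preference.
\[
  A \succ B
  \;\Longrightarrow\;
  pA + (1-p)\,C \;\succ\; pB + (1-p)\,C
  \qquad \text{for all lotteries } C \text{ and all } p \in (0,1] .
\]
```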
I agree that, insofar as people have something like utility functions, those are probably bounded. But I don’t think that an AI’s utility function should have the same properties as my utility function, or for that matter the same properties as the utility function of any human. I wouldn’t want the AI to discount the well-being of me or my close ones simply because a billion other people are already doing pretty well.
Though ironically, given my answer to your first point, I'm somewhat unconcerned by your second point, because humans probably don't have coherent preferences either and still seem to do fine. My hunch is that rather than trying to make one's preferences perfectly coherent, one is better off building a system that detects sets of circular trades and similar exploits as they happen, and then makes local adjustments to fix that particular inconsistency.
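As a toy illustration of that kind of mechanism (the names and setup here are illustrative, not from the comment itself), detecting a set of circular trades can be as simple as cycle detection over a directed graph of observed pairwise preferences:

```python
# Toy sketch: record observed pairwise preferences as a directed graph and flag
# any cycle (e.g. A > B, B > C, C > A), which is the kind of inconsistency that
# circular trades exploit.

from collections import defaultdict

def find_preference_cycle(preferences):
    """preferences: iterable of (preferred, dispreferred) pairs.
    Returns one cycle as a list of items, or None if preferences are acyclic."""
    graph = defaultdict(list)
    for better, worse in preferences:
        graph[better].append(worse)

    def dfs(node, stack, visited):
        visited.add(node)
        stack.append(node)
        for nxt in graph[node]:
            if nxt in stack:                      # found a path back into the stack
                return stack[stack.index(nxt):] + [nxt]
            if nxt not in visited:
                cycle = dfs(nxt, stack, visited)
                if cycle:
                    return cycle
        stack.pop()
        return None

    visited = set()
    for node in list(graph):
        if node not in visited:
            cycle = dfs(node, [], visited)
            if cycle:
                return cycle
    return None

# Example: a preference set that a money pump could exploit.
trades = [("apple", "banana"), ("banana", "cherry"), ("cherry", "apple")]
print(find_preference_cycle(trades))  # ['apple', 'banana', 'cherry', 'apple']
```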