Okay, let’s concentrate on this for a second: why do you disagree with the coinflip example?
Do you feel that the two sets of coinflips DON’T have the same average utility? Do you feel that the average utility of the coinflips isn’t zero?
Do you feel that utility can’t be measured? (in which case, whence the utiliometer?)
I feel that the word ‘utilons’ needs to be disambiguated or tabooed; once I see the actual winnings (money? prestige? sweet, sweet heroin?), I could see how they might be ‘utilons’ at the point they’re won, but negative utility later on.
Okay, let’s make it money, and assume you’re a money-optimiser.
Or make it utilons, and you’ve been told it’s utilons by your friend, who has a utiliometer.
EDIT: (I am forced to give such arbitrary, but certain, examples by the nature of the issue you’re having; you seem to be treating anything with an uncertain part as completely indistinguishable, to the extent that torture becomes indistinguishable from chocolate.)
Hmmm, perhaps there is one example that could work: replace utilons with “hours’ worth of progress on making the utiliometer”, but make all the negative amounts 0 instead.
In each of these cases: do the random bits cancel?
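(What “the random bits cancel” is claiming can be checked with a toy simulation — the fair coin and the ±1-utilon payoff are assumptions of the thought experiment, not anything measured:)

```python
import random

def mean_winnings(n_flips, payoff=1, seed=None):
    """Average utilons over n_flips fair coin flips paying +payoff or -payoff."""
    rng = random.Random(seed)
    total = sum(payoff if rng.random() < 0.5 else -payoff
                for _ in range(n_flips))
    return total / n_flips

# Two independent series of flips: each average sits within sampling
# noise of zero, so in expectation the random winnings cancel.
print(mean_winnings(1_000_000, seed=1))
print(mean_winnings(1_000_000, seed=2))
```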
During the flipping of the coin, and the winning of the utilons, yes. If they’re taking the measurement with the utiliometer at the point in time of winning, then it will show ‘utilons’, but I think that’s the wrong place to take the measurement. There’s the possibility that more now means less later, or less overall. If they take the measurement at the end of time, then I would expect either massive differences between the coin flips, as measured by the utiliometer, or no effect whatsoever.
I still think the problem is inherent in the definition, though, so asking me questions based strictly on that definition is, uh, problematic, even as a thought experiment.
Value is complex. Humans are contradictory. I doubt there is such a thing as a true utilon, or a simplistic optimizer of any kind. I asked Clippy what it valued, and didn’t get satisfactory results when talking about prediction and value problems.
The friend with the utiliometer set it up so that there are no differences between the flips. One might alter wind flow over the Arctic, the other might kill a fish in the Pacific; the total utility is the same.
Then why bother trying to make a utiliometer?
Remember all those unintended consequences? Your making an imperfect utiliometer is as likely to have huge negative effects on the far future as any other action you take.
And your making a perfect utiliometer is impossible; the total future is unbounded.
Again: if you have two possibilities that are, on average, the same apart from a small, known difference (e.g. torturing someone to death or giving them chocolate: both are almost equally likely to prevent the end of the world [there is good reason to think the chocolate is a better choice in that regard, but the effect is minor], and both are equally likely to decrease the death toll in the year 5583224308, but one gives someone chocolate and the other tortures the person to death), why can’t you cancel the bit that’s the same, and look at the difference?
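(The cancellation being argued for, as a toy sketch — the payoff numbers and the shared noise distribution are invented for illustration; the point is only that identically distributed unknowns drop out of the comparison:)

```python
import random

def total_utility(immediate, rng):
    """Known immediate effect plus an unpredictable far-future ripple.
    The ripple has the same distribution for both actions (the thought
    experiment's assumption), so it cancels in expectation."""
    return immediate + rng.gauss(0, 100)

rng = random.Random(0)
n = 200_000
chocolate = sum(total_utility(+1, rng) for _ in range(n)) / n
torture = sum(total_utility(-50, rng) for _ in range(n)) / n
# The huge shared noise washes out; the small known difference (~51) survives.
print(chocolate - torture)
```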
Because as described, it would do the impossible :-P Obviously I’m not ever intending to build one, just thinking about it, which led me to the rest of this discussion, and my problems with utility and value.
Exactly why I feel like the entire future is wasted or random static, regardless of the actions I take.
Because I think ‘the bit’ is different. Time moves in a linear fashion, and effects propagate outward from their point of origin, flipping all sorts of coins all over the place that would have otherwise landed on the other side.
Of course “the bit” is, in actuality, different. If “the bit” weren’t different, d(utility)/d(work-on-utiliometer) would be zero. But unless you know the difference, the effective difference to you is zero.
If I present you with two locked boxes, one with a diamond in, the other without, picking one will get you the diamond, the other won’t.
But unless you have some way of telling which box contains the diamond, you might as well pick the one that looks nicer.
Likewise with the coins: which way up you place the coin will affect the series of tosses unpredictably. But on average, it evens out. That’s the needed realisation.
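(The box example in expected-value terms — a trivial sketch; the 50/50 prior and the diamond’s value in utilons are made-up numbers:)

```python
# With no way to tell which box holds the diamond, your credence is 50/50,
# so both boxes have the same expected value and the choice might as well
# be made on looks.
diamond_value = 1000   # assumed utilons
p_left = 0.5           # no evidence favouring either box
ev_left = p_left * diamond_value
ev_right = (1 - p_left) * diamond_value
print(ev_left, ev_right)  # 500.0 500.0
```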
I think that’s leaving the future out of the calculation since it’s otherwise hard to predict, and gets back to the original point that increases in predictive power seem to be more powerful than any other kind of utility, to the point where a loop forms.
As long as you recognise that there must be a point at which that is no longer true (e.g. when your expected remaining rational lifespan is <1 year, will it still be true?), then it’s not necessarily a problem.
Honing your skills before beginning work is often good. Honing your skills until the day you die is always bad.
But you need to actually pay attention to how effective the increases in your predictive power are. If 2 years’ worth of work makes you 5% better at generating utility, then you need to stop that work once you’ve got 40 or fewer years left.
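(The break-even arithmetic, under a crude assumed model — flat output of 1 utilon/year, the training’s cost treated as forgone output, and the boost applied to everything remaining — which reproduces the 40-year figure:)

```python
def training_pays_off(cost_years, gain, years_left):
    """Crude break-even check: training costs `cost_years` of output
    and boosts the remaining years' output by fraction `gain`."""
    return gain * years_left > cost_years

# 2 years of honing for a 5% boost: break-even at 40 years remaining.
print(training_pays_off(2, 0.05, 50))  # True: enough runway to recoup the cost
print(training_pays_off(2, 0.05, 30))  # False: the boost never pays itself back
```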
Not if “do nothing, then die” is the optimal path… otherwise agreed.