“So if you haven’t sent Tim your money you either have to not be capable of mathematical proofs, you have to have a bounded utility function, or you have to have no well-defined utility function at all.”
You buy into this nonsense?!? What “mathematical proof” says to send Tim all your money?
You don’t have to do the sum explicitly. As a Turing-complete being (well, probably), you can do all sorts of cool things that fall under the category of mathematical proof. So if you haven’t sent Tim your money you either have to not be capable of mathematical proofs, you have to have a bounded utility function, or you have to have no well-defined utility function at all.
Okay, so it’s probably the third one for all humans. But what if you were designing an AI that you knew could do mathematical proofs and had a well-defined utility function? Should it send Tim its money or not?
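For concreteness, here’s a minimal, single-hypothesis version of the sum, with symbols that are mine rather than anything Tim specified: say the agent assigns probability $p > 0$ to Tim delivering a payoff worth $U(X)$, and sending the money costs $c$ in utility. Then

\[
\mathbb{E}[U(\text{send})] - \mathbb{E}[U(\text{keep})] \;=\; p\,U(X) - c ,
\]

and if $U$ is unbounded, Tim can always promise an $X$ with $p\,U(X) > c$, however small $p$ is, so the proof-capable agent sends the money. A bounded $U$ caps $U(X)$ so the inequality can fail, and with no well-defined $U$ the comparison isn’t defined at all, which is where the three options above come from.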
Or we accept that the premise is flawed: I can have a defined, unbounded utility function, and I can certainly do mathematical proofs, without sending god all my money :)
But you don’t. Why should I believe you can?
blinks. I’m honestly not sure why you’d assume I don’t, but you seem pretty certain. Let’s start there?
Let’s see the definition, then.
To be less aggravating, I’ll pre-explain: nothing personal, of course. I don’t believe any person has a defined utility function. As for unbounded: there’s a largest number your brain can effectively code. I can buy an unbounded (except by mortality) sequence of equally subjectively strong preferences for a sequence of new states, each one better than the last by the same subjective margin, with enough time elapsing between them for the last improved state to become the present baseline. But I don’t see how you’d want to call that an “unbounded utility function”. I’d appreciate a precise demonstration of how it is one. Maybe you could say that the magnitude of each preference is the same as would be predicted by a particular utility function.
If I’m charitable, I can believe a similar claim to your original: you don’t know of or accept any reason why it shouldn’t be possible that you actually have (an approximation to?) an unbounded utility function. Okay, but that’s not the same as knowing it’s possible.
(Speculation aired to ward off possible tedious game-playing. Let me know if I missed the mark.)
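Here’s the closest I can get to such a demonstration, in notation that’s mine rather than yours: write the states as $s_0, s_1, s_2, \dots$ and model the equal subjective strength of each preference as a fixed utility increment $\Delta > 0$, so that

\[
U(s_n) = U(s_0) + n\,\Delta .
\]

That grows without bound only if $n$ does, and mortality plus the largest number the brain can code both cap $n$. So the sequence matches the predictions of a particular utility function, as you say, without the person thereby having an unbounded one.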
If your argument is that I can’t have a defined utility function, and you concede that therefore I can’t be gamed by this, then I don’t think we actually disagree on anticipations, just linguistics and possibly some philosophy. Certainly nothing I’d be inclined to argue there, yeah :)
Close enough (I didn’t have any “therefore” in mind, just disagreement with what I thought you claimed), though I wouldn’t call the confusion linguistics or philosophy.
It does seem like I attempted to understand you too literally. I’m not entirely sure exactly what you meant (if you’d offered a reason for your belief, it might have been clearer what that belief was).
Thanks for helping us succeed in not arguing over nothing—probably a bigger coup than whatever it was we were intending to contribute.