You’re saying that present-me’s utility function counts and no one else’s does (apart from their position in present-me’s function) because present-me is the one making the decision? That my choices must necessarily depend on my present function, and only depend on other/future functions through how much I care about their happiness? That seems reasonable. But my current utility function tells me that there is an N large enough that N utilon-seconds for other people’s functions count more in my function than any possible thing in the expected lifespan of present-me’s utility function.
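To sketch what I mean a little more precisely (the bound $B$ and weight $c$ here are illustrative assumptions, not anything we’ve pinned down): suppose present-me’s utility splits into a self-regarding part that is bounded over my expected lifespan, plus an other-regarding part that grows with the utilon-seconds delivered to others,

$$
U_{\text{present}} \;=\; u_{\text{self}} \;+\; c \cdot N,
\qquad u_{\text{self}} \le B < \infty,\quad c > 0.
$$

Then for any $N > B/c$, the other-regarding term exceeds anything the bounded self-regarding term could possibly contribute, which is all the existence claim requires.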
Sure. That might well be so. I’m not saying you have to be selfish!
However, you’re talking about utilons for other people—but I doubt that that’s the only thing you care about. I would kind of like for Clippy to get his utilons, but in the process, the world will get turned into paperclips, and I care much more about that not happening! So if everyone were to be turned into paperclip maximizers, I wouldn’t necessarily roll over and say, “Alright, turn the world into paperclips”. Maybe if there were enough of them, I’d be OK with it, as there’s only one world to lose, but it would have to be an awful lot!
So you, like me, might consider turning the universe into minds that most value a universe filled with themselves?
I’d consider it. On reflection, I think that for me personally what I care about isn’t just minds of any kind having their preferences satisfied, even if those are harmless ones. I think I probably would like them to have more adventurous preferences! The point is, what I’m looking at here are my preferences for how the world should be; whether I would prefer a world full of wire-headers or one full of people doing awesome actual stuff. I think I’d prefer the latter, even if overall the adventurous people didn’t get as many of their preferences satisfied. A typical wire-header would probably disagree, though!
Fair.