In real life, I’ve had some trouble recently admitting I hadn’t thought of something when it was plausible to claim I had. I think that admitting it would cost me status points, since the situation does not involve rationalists, “rationalists”, aspiring rationalists, or “aspiring rationalists”.
Are you sure you chose the phrase “simply that your utility function does not speak about things that require them” to describe the state of affairs in which no human utility function would include such things, and hence they would be unimportant to Eliezer?
If you see the thought expressed in my comment as trivially obvious, then:
1) we disagree about what people would find obvious,
2) regardless of the truth of what people find obvious, you probably made that assumption because you are smarter than I am, rather than because you are simply less good at modeling other humans’ understanding,
3) I’m glad to be told by someone smarter than I that my thoughts are trivial, rather than wrong.
The comment wasn’t really intended for anyone other than Eliezer, and I forgot to correct for the halo effect making him out to be basically omniscient and capable of reading my mind.
He could still accept superhappies or paperclippers caring about it, say.
I think he actually might intrinsically value their desires too. One can theoretically make the transition from “human” to “paperclip maximizer” one atom at a time; differences in kind are the best way for corrupted/insufficiently powerful software to think about it, but here we’re talking about logical impurity, which would contaminate even in sub-homeopathic doses.
Well, in that case it’s new information, and we can conclude either that his utility function DOES include things in those universes that he claims cannot exist, or that it’s not physically possible to construct an agent that would care about them.
I would say “care dependent upon them”. An agent could care dependent upon them without caring about them; the converse is not true.
That’s even wider, though probably by a very small amount. Thanks!