people with no cosmic ambitions, brainwashed by the self-help industry, don’t even have any goals in life that require direct brain editing, and aren’t much willing to imagine such goals because it implies that their own brains are (gasp!) inadequate.
Is this your causal theory? Literally, that pjeby considered a goal that would have required direct brain editing, noticed that the goal would have implied that his brain was inadequate, felt negative self-image associations, and only then dropped the goal from consideration, and for no other reason? And further, that this is why he asked: “If you have a system that’s perfectly capable of making changes on its own, debugged by millions of years of evolution, why on earth would you want to bypass those safeties?”
I think that, where you are imagining direct brain editing done only with a formal, philosophically cross-validated theory of brain-editing safety, and only after a long enough delay to develop that theory, and where you imagine pjeby to be imagining the same thing, pjeby may actually be imagining someone who already has a brain-editing device and no safety theory, and who faces a short-range practical decision about whether to use the device when introspective self-modification is also an option. pjeby probably has a lot of experience with people who have simple technical tools and are not reflective, as you are, about whether those tools are safe to use. That is the kind of person he is likely thinking of when he decides whether it would be better advice to tell someone to introspect or to use the brain editor.
(Also, someone other than me should have diagnosed this potential communication failure already! Do you guys prefer strife and ad hominems and ill will or something?)
The x you get from
argmax_(x) U(x, y)
for fixed y is, in general, different from the x you get from
argmax_(x, y) U(x, y).
But this doesn’t mean you can conclude that the first argmax calculated U() wrong.
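Concretely, here is a minimal Python sketch of that point; the utility function U and the small search grids are hypothetical, chosen only for illustration. It shows that the best x for a fixed y can differ from the x-component of the joint optimum, even though both optimizations evaluate U correctly:

```python
# Minimal sketch (hypothetical U and grids): the x from argmax_(x) U(x, y)
# at a fixed y need not equal the x from argmax_(x, y) U(x, y).
from itertools import product

def U(x, y):
    # Hypothetical utility: x wants to match y, and larger y is better.
    return -(x - y) ** 2 + y

X = [0, 1, 2]
Y = [0, 1, 2]

fixed_y = 0
best_x_given_y = max(X, key=lambda x: U(x, fixed_y))  # argmax_(x) U(x, y) with y fixed

best_x_joint, best_y_joint = max(product(X, Y), key=lambda xy: U(*xy))  # argmax_(x, y) U(x, y)

print(best_x_given_y)  # 0: the best x when y is held at 0
print(best_x_joint)    # 2: the x-component of the joint optimum (reached with y = 2)
```

Both calls compute U correctly; they simply optimize over different domains, which is the point of the argmax comparison above.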