The problem was never about "knowing what you mean", but about "caring about what you mean".
Well that certainly wasn’t the impression I got—some texts explicitly made that clear.
"The genie knows, but doesn't care", for example.
OK, do you disagree with Nora's assessment of how Superintelligence has aged?
https://forum.effectivealtruism.org/posts/JYEAL8g7ArqGoTaX6/ai-pause-will-likely-backfire#fntcbltyk9tdq
The genie you have there seems to require a very fast takeoff to become real and to end up overwhelmingly powerful compared to other systems.
I honestly think that many such opinions come from over-updating/over-generalizing on ChatGPT.
Yeah, but that argument was wrong, too.
Making one system change another is easy; making one system change another into an aligned superintelligence is hard.
What’s that relevant to?