In discussions, Ngo argues that less general, less capable AI could still be useful and less risky. He suggests “solving scientific problems” or “theoretically proving mathematical theorems” as alternative pivotal acts an AI could perform.
Maybe that’s a tad too literal? I’d laugh out loud if Claude changed the examples to “summarizing debate among rationalists” as an alternative parodical act an AI might arguably perform in some distant post-2021 future.
Hmm, I can see the humor but I cannot feel it. I’ve explained why this wouldn’t be especially surprising if you take conversation seriously, and a part of me aches that people still don’t.
Nope. It’s a failure if I need to explain the irony, but the actual summary proves one of the points Claude summarizes, while my own parody would support your side of the argument (as it would suggest Claude could choose to have some fun rather than execute the instructions as ordered). In other words, part of the joke was meta-laughing at myself for finding this scenario funny.
“if you take conversation seriously and a part of me aches that people still don’t.”
You’re right, I still don’t understand why many interesting minds (including clear geniuses like Scott Alexander) would take many of the LW classics on x-risk seriously. Sorry my opinion made you angry.
Typo, should be “alternative pivotal acts”.