It would be useful if it could solve alignment...
Why are people disagreeing with this statement?
In isolation, it’s technically correct.
In the context of being a direct reply to the post, it’s suggesting that “solve alignment” is something that GPT-4 could plausibly do. I certainly disagree with that and voted disagreement accordingly.
It actually wouldn’t surprise me, though, if solving alignment could be done by a human alignment theorist working with an existing GPT, where the GPT serves mostly as a source of ideas.