Tho I would think that Nate's too subtle a thinker to believe that AI assistance is literally useless; rather, most of the 'hardest work' is not easily automatable, which seems pretty valid.
i.e. in my reading, most of the hard work of alignment is finding good formalizations of informal intuitions. I'm pretty bullish on future AI assistants helping here, especially proof assistants, but this doesn't seem to be a case where you can simply prompt GPT-3 scaled 1000x, or something similarly simple. My understanding is that Nate thinks that if a system could do that, it would secretly be doing dangerous things.
Yes this seems clearly true.