This seems to have generated lots of internal discussions, and that’s cool on its own.
However, I also get the impression this article is intended as external communication, or at least a prototype of something that might become external communication; I'm pretty sure it would be terrible at that. It uses lots of jargon, overly precise language, references to other alignment articles, etc. I've tried to read it three times over the past week and gave up each time.
I’m mainly trying to communicate with people familiar with AI alignment discourse. If other people can still understand it, that’s useful, but not really the main intention.