Curated! This is a good description of a self-contained problem for a general class of algorithms that aim to train aligned and useful ML systems, and you’ve put a bunch of work into explaining reasons why it may be hard, with a clear and well-defined example for conveying the problem (i.e. that Carmichael numbers fool Fermat’s primality test).
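(As an aside for readers who haven’t seen the example before: below is a minimal sketch in Python of the failure mode, using 561, the smallest Carmichael number. This is my own illustration, not code from the post.)

```python
from math import gcd

# The Fermat test declares n "probably prime" if a^(n-1) ≡ 1 (mod n)
# for a randomly chosen base a with gcd(a, n) = 1.
def fermat_test(n: int, a: int) -> bool:
    return pow(a, n - 1, n) == 1

n = 561  # = 3 * 11 * 17: composite, and the smallest Carmichael number

# Every base coprime to n fools the test, so no amount of re-sampling
# bases will expose 561 as composite.
coprime_bases = [a for a in range(2, n) if gcd(a, n) == 1]
assert all(fermat_test(n, a) for a in coprime_bases)
print(f"561 passes the Fermat test for all {len(coprime_bases)} coprime bases")
```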
The fun bit for me is the discussion of how, if this problem goes one way (we cannot efficiently distinguish different mechanisms), many prior ideas are invalidated, while if it goes the other way we can be more optimistic that we’re close to a good alignment algorithm; but you’re honestly not sure which it is! (You give it a 20% chance of success.) And you also go through a list of next steps if it doesn’t work out. Great contribution.
I am tempted to say the writing seems much clearer to me than your writing in previous years, but I think this is partly due to me (a) better understanding what you are trying to do and (b) having stronger basic intuitions for thinking about machine learning models. Still, I think the writing is notably clearer, which is another reason to curate.
I also found the writing way clearer than usual, which I appreciate—it made the post much easier for me to engage with.