For my part, I have been wondering this week what a constructive reply to this would be.
I think your proposed imperatives and experiments are quite good. I hope that they are noticed and thought about. I don’t think they are sufficient for correctly aligning a superintelligence, but they can be part of the process that gets us there.
That’s probably the most important thing for me to say. Anything else is just a disagreement about the nature of the world as it is now, and isn’t as important.
It started promisingly, but, like everyone else, I don’t believe in the ten-year gap from AGI to ASI. If anything, we got a kind of AGI in 2022 (with ChatGPT), and we’ll get ASI by 2027, from something like your “cohort of Shannon instances”.