For what it’s worth, I was just learning about the basics of MIRI’s research when this came out, and reading it made me less convinced of the value of MIRI’s research agenda. That’s not necessarily a major problem, since the expected change in belief after encountering a given post should be 0, and I already had a lot of trust in MIRI. However, I found this post by Jessica Taylor vastly clearer and more persuasive (it was written before “Rocket Alignment”, but I read “Rocket Alignment” first). In particular, I would expect AI researchers to be much more competent than the portrayal of spaceplane engineers in the post, and it wasn’t clear to me why the analogy should be strong Bayesian evidence for MIRI being correct.