Curated. This post cleanly gets at some core disagreements in the AI Alignment field, and I think does so from a more accessible frame/perspective than other posts on LessWrong and the Alignment Forum. I’m hopeful that this post and others in the sequence will enable better and more productive conversations between researchers, and for that matter, just better thoughts!
Thanks Ruby! Now that the other posts are out, would it be easy to forward-link them (by adding links to the italicized titles in the list at the end)?
We can also make a Sequence. I assume “More Is Different for AI” should be the title of the overall Sequence too?
Yup! That sounds great :)
Here it is! https://www.lesswrong.com/s/4aARF2ZoBpFZAhbbe
You might want to edit the description and header image.
Done!