I asked GPT-4 what the differences between Eliezer Yudkowsky's and Paul Christiano's approaches to AI alignment are, using only words with fewer than 5 letters.
(One-shot, in the same session in which I had earlier talked with it using prompts unrelated to alignment.)
When I first shared this on social media, some commenters pointed out that (1) is wrong for current Yudkowsky, as he now pushes for a minimally viable alignment plan that is merely good enough to not kill us all. Nonetheless, I think this summary is closer to an accurate account of both Yudkowsky and Christiano than most of the "glorified autocomplete" talking heads could produce, and probably better than what a decent fraction of LessWrong readers could manage as well.