Like, if I believe that AI Alignment won't matter much and use that to write off the field, it feels like I'm either pre-emptively ignoring potentially relevant information, or claiming that I have some deeper, grounded insight into how the field is confused.
I think the key here is that if AGI is only something like, say, the internet, or perhaps the industrial revolution, then AGI alignment doesn't matter much. A lot of the field of AGI alignment only really makes sense if the impact of AGI is very, very large.