That’s pretty surprising to me; for a while I assumed that a scenario where 10% of the population knew about superintelligence as the final engineering problem would be a nightmare, e.g. because it would cause acceleration.
“Don’t talk too much about how powerful AI could get because it will just make other people get excited and go faster” was a prevailing view at MIRI for a long time, I’m told. (That attitude pre-dates me.) At this point many folks at MIRI believe that the calculus has changed, that AI development has captured so much energy and attention that it is too late for keeping silent to be helpful, and now it’s better to speak openly about the risks.
Awesome, that’s great to hear and these recommendations/guidance are helpful and more than I expected, thank you.
I can’t wait to see what you’ve cooked up for the upcoming posts. MIRI’s outputs (including its decisions) are generally pretty impressive, and I have a feeling I’ll appreciate and benefit from them even more now that people are starting to pull out all the stops.