The way I see it, the future of superhuman AI is emerging from the interactions of three forces: Big Tech companies that are pushing AI towards superintelligence, researchers who are trying to figure out how to make superintelligent AI safe, and activists who are trying to slow down or stop the march towards superintelligence. MIRI used to be focused on safety research, but now it’s mostly trying to stop the march towards superintelligence by presenting the case for the extreme danger of the current trajectory. Does that sound right?
I think that’s pretty close, though when I hear the word “activist” I tend to think of people marching in protests and waving signs, and that is not the only way to contribute to the effort to slow AI development. I think more broadly about communications and policy efforts, of which activism is a subset.
It’s also probably a mistake to put capabilities researchers and alignment researchers in two entirely separate buckets. Their motivations may distinguish them, but my understanding is that the actual work they do unfortunately overlaps quite a bit.
Yeah, given the current state of the game board, work in the comms/policy space seems more impactful to us on the margin, so we’ll be focusing on that as our top priority and seeing how things develop. That won’t be our only focus, though; we’ll definitely continue to host and fund research.