My impression was that imminence is a point of contention, and orthogonality much less so. Who specifically do you have in mind?
This article is a good place to start in clarifying the MIRI position. Since their estimate for imminence seems to boil down to “we asked the community what they thought and made a distribution,” I don’t see that as a point of contention.
There is broad uncertainty about timelines, but the MIRI position is “uncertainty means we should not be confident we have all the time we need,” not “we’re confident it will happen soon,” which is the position someone would need to hold for me to say they’re “for imminence.”
Interesting. I considered imminence more of a point of contention because the most outspoken “AI risk is overhyped” people mostly use it as an argument (and I consider that group, Yann LeCun, Yoshua Bengio, and Andrew Ng, far more serious than Searle and Brooks).