I think the right way of thinking about that aspect is more: there are a bunch of methodologies for analyzing the AI x-risk situation, and only one of them seems to give tremendously high credence to FOOM & DOOM.
Not so much a ‘you could be wrong’ argument, because I do think that in the Eliezer framework it brings little comfort if you’re wrong about your picture of how intelligence works, since it’s highly improbable that being wrong would make the problem easier rather than harder.
This leads to the natural next question: what are those alternative methodologies, and why do you have the faith you do in them when contrasted with the set of methodologies claiming FOOM & DOOM? And possibly a discussion of whether or not those alternative methodologies actually support the anti-FOOM-&-DOOM position. For example, you may claim that extrapolating lines on graphs says humans will continue to flourish, but the actual graphs we have are about GDP and other crude metrics of human welfare. Those could very well continue without the human-flourishing part, and indeed, if they do continue indefinitely, we should expect human welfare to be sacrificed to the gods of straight-lines-on-graphs to achieve this outcome.