OK, but it isn’t hard (or wouldn’t be, in the context we are discussing) to come to the understanding that creating an intelligence explosion renders the interests of any particular state irrelevant.
If you’re correct, then the best way to stop the optimists from trying is to make an indisputable case for pessimism and disseminate it widely. Otherwise, someone will eventually turn optimistic, and won’t see why they shouldn’t give it a go.
I expect that once recognition of the intelligence explosion as a plausible scenario becomes mainstream, pessimism about the prospects of (unmodified) human programmers safely and successfully implementing CEV or some such thing will be the default, regardless of what certain AI researchers claim.
In that case, optimists are likely to have their activities forcefully curtailed. If that did not turn out to be the case, then I would consider “pro-pessimism” activism to change that state of affairs (assuming nothing happens to change my mind between now and then). At the moment, however, I support the activities of the Singularity Institute, because they are raising awareness of the problem (a prerequisite for state involvement) and they are highly responsible people. The worst state of affairs would be one in which no one recognised the prospect of an intelligence explosion until it was too late.
ETA: I would be somewhat more supportive of a CEV in which only a select (and widely admired and recognised) group of humans was included. This seems to create an opportunity for the implementation of the CEV initial dynamic to be a compromise between intelligence enhancement and ordinary CEV: a small group of humans could be “prepared” and studied very carefully before the initial dynamic is switched on.
So really it’s a complex situation, and my post above probably failed to express the degree of ambivalence that I feel regarding this subject.