I happen to value technological progress as an intrinsic good, so classifying a Singularity as “positive” or “negative” is not easy for me.
A uniform category of “good” or “positive” fails to distinguish among its elements. Just how good are different AIs compared to each other? Can one be much better than another? Settling for a comparatively worse AI carries an opportunity cost, and given the astronomical scale of the consequences, even a small difference may matter enormously. That would make it an important problem to ensure the creation of one of the better possible AIs, rather than whatever AI technology would stumble on by default.