I just want to be clear that I understand your "plan".
We are going to build a powerful self-improving system, then let it try to end humanity with some p(doom) < 1 (hopefully), and then do that iteratively?
My gut reaction to a plan like that looks like this: "Eff you. You want to play Russian roulette? Fine, sure, do that on your own. But leave me and everyone else out of it."
"AI will be able to invent highly-potent weapons very quickly and without risk of detection, but it seems at least pretty plausible that … this is just too difficult."
You lack imagination; it's painfully easy, and both the cost and the required IQ have been dropping steadily every year.
And no, there is zero chance I will elaborate on any of the possible ways humanity could purposefully be wiped out.
Proposition 1: Powerful systems come with no x-risk
Proposition 2: Powerful systems come with x-risk
Since Proposition 2 is just the negation of Proposition 1, proving 1 disproves 2, and disproving 1 proves 2.
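To make that step explicit, here is a minimal sketch in propositional terms, assuming Proposition 2 is read as the exact negation of Proposition 1 (written $P_1$ and $P_2$):
\[
P_2 \equiv \lnot P_1 \;\Longrightarrow\; \big(P_1 \to \lnot P_2\big) \wedge \big(\lnot P_1 \to P_2\big)
\]
So whichever proposition gets settled, the other is settled at the same time.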
Why is it that a lot of [1,0] people (those who accept 1 and reject 2) believe that the [0,1] group should be the ones to prove their case? [1]
And why do they also ignore all the arguments that have already been offered?