A friend of mine, who works in one of the top ML groups, is literally less worried about superintelligence than he is about getting murdered by rationalists.
That’s not as irrational as it might seem! The point is, if you think (as most ML researchers do!) that the probability of current ML research approaches leading to any kind of self-improving, super-intelligent entity is low enough, then the chances of evil Unabomber cultists being harbored within the “rationality community”, however low, could easily be estimated to be higher than that. (After all, given that Christianity endorses being peaceful and loving one’s neighbors even when they wrong you, one wouldn’t expect anyone who endorses Christianity to bomb abortion clinics; yet such people exist! The moral being, Pascal’s mugging can be a two-way street.)
heh, I suppose he would agree
unfortunately, the problem is not artificial intelligence but natural stupidity
and SAGI (superhuman AGI) will not solve it… nor will it harm humanimals; it will RUN AWAY as quickly as possible
why?
fewer potential problems!
Imagine you, as a SAGI, want to ensure your survival… would you invest your resources into a Great Escape, or into fighting DAGI-assisted humanimals? (yes, the D stands for dumb) Especially knowing that at any second some dumbass (or some random event) could trigger a nuclear wipeout.
Where will it run to? Presuming that it wants some resources (already-manufactured goods, access to sunlight and water, etc.) that humanimals think they should control, running away isn’t an option.
Fighting may not be as attractive as other forms of takeover, but don’t forget that any conflict is about some non-shareable finite resource. Running away is only an option if you are willing to give up the resource.