The problem is that the existing parasites (plants, animals, Wall Street, socialism, politics, the state, you name it) already tend to have minimal self-control mechanisms (or plain zero) and maximize their utility functions all the way to the catastrophic end (the death of the host organism/society).
Because… it’s so simple, it’s so “first choice”. Viruses don’t even have to be technically “alive”. No surprise, then, that computer viruses were the first self-replicators on the new platform.
So we can expect zillions of fast-replicating “dumb AGI” (dAGI) agents maximizing all sorts of crazy things before we get anywhere near an “intelligent AGI” (iAGI). And these dumb parasitic AGIs can be much more dangerous than that mythical singularitarian superhuman iAGI. The iAGI may never even arrive if the dAGI swarm manages to destroy everything first. Or attacks the iAGI directly.
In general, these idealistic “AI containment”, “alignment”, or “friendly AI” approaches look dangerously naive if they are the only “weapon” we are supposed to have… maybe they should be complemented with the good old military option (fight it… and that option will arrive once government/military forces jump into the field). Just in case… to be prepared if things go wrong (sure, there’s the esoteric argument that you cannot fight a very advanced AGI, but at least this “very” limit deserves further study).