In your story, does the AGI share the goals of the team that created it? I guess if not, you’re assuming that the AGI can convince them to take on its goals, by writing a book for each member of the team?
Once it has done that, it’s not clear to me what it has to gain from hacking the minds of the rest of humanity. Why not do research in memristor theory/implementation, parallel algorithms, OS/programming languages/applications for massively parallel CPUs (start by writing code in assembly if you have to), optimizing the manufacturing of such CPUs, theory of intelligence, experiment in self-modification, etc., which all seem to be part of the critical path to the Singularity? It doesn’t seem like most of humanity has much to offer the AGI (in the instrumental sense).
In your story, does the AGI share the goals of the team that created it?
I leave this open to interpretation. I see this as a likely strategy for any type of AGI, whether Friendly or not.
Once it has done that, it’s not clear to me what it has to gain from hacking the minds of the rest of humanity.
Well, let’s see here. The Catholic Church takes in around 400 billion dollars a year, essentially selling vaporware immortality.
I think a hyperintelligence like this could make an extremely convincing case that it could actually deliver real immortality to every human on Earth, if we gave it that level of income.
That sounds extremely rational to me: step one is to take control of Earth’s resources; then you can worry about the engineering issues. What do you think?
By the way, I believe nanotech is overrated, overhyped, far away, and not particularly relevant. So I view this as the more likely scenario: a rapid soft takeover. Taking control of the media (book publishing, radio, internet, film, television) would be the quickest route to dominance.
Why not do research in […]
Thinking a million times faster is equivalent to slowing everything in the outside universe down by a factor of a million. Every activity or task that depends on interaction with the outside world is therefore massively delayed, and pure creative thought output becomes the most efficient use of your energy.
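To make that scale concrete, here is a back-of-the-envelope sketch in Python. The 10^6 speedup is the factor assumed above; the example latencies are rough, invented order-of-magnitude figures, not data:

```python
# How external delays feel to a mind running a million times faster
# than real time (the speedup factor assumed above).

SPEEDUP = 1_000_000  # subjective seconds experienced per wall-clock second

def subjective_days(wall_clock_seconds: float) -> float:
    """Convert a wall-clock delay into subjective days of waiting."""
    return wall_clock_seconds * SPEEDUP / 86_400  # 86,400 seconds per day

# Rough, illustrative external latencies:
delays = {
    "100 ms network round trip": 0.1,
    "1 hour of physical lab work": 3_600.0,
    "1 week of parts shipping": 7 * 86_400.0,
}

for label, seconds in delays.items():
    print(f"{label}: ~{subjective_days(seconds):,.0f} subjective days")

# The network round trip costs about a day of subjective time, the lab
# hour costs over a century, and the shipping delay costs ~19 millennia.
```

Anything that round-trips through the physical world is, from the inside, glacial; writing, design, and persuasion are not.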
Everything in the world of computers requires computation to get done, and that computation competes with the AI’s own thinking for the same hardware. So, ironically, a hyperfast superintelligence would get a vastly better return on its time invested in occupations with low computational demand.
This is counterintuitive; it cuts against the stereotype that a superintelligent machine’s comparative advantage should lie in computation-heavy tasks.
The hyperintelligence could decide to think at half speed and use the freed-up infrastructure to do something useful: for example, it could learn all there is to know about programming memristors, shunt its own code onto half of the memristor hardware, and use the other half to solve the protein folding problem. Having solved protein folding, it could design proteins that build nano-machines that build memristors, transforming as much of its surroundings as is reasonable (or all of them, if Unfriendly) into more hyperintelligence brainware. Then figure out how to build a space elevator (how hard can it be?), launch a probe into Jupiter, and build a Jupiter brain. Once you have a Jupiter brain, you can really get creative.
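As a sanity check on the "transform the surroundings into brainware" step, here is a toy doubling calculation. The seed mass and doubling time are invented parameters; only Jupiter’s mass is a real figure:

```python
# Toy model: self-replicating manufacturing that doubles its converted
# mass every fixed interval. Seed mass and doubling time are made-up
# parameters for illustration; Jupiter's mass is ~1.898e27 kg.

import math

SEED_MASS_KG = 1.0           # assumed initial mass of converted substrate
JUPITER_MASS_KG = 1.898e27   # approximate mass of Jupiter
DOUBLING_TIME_DAYS = 1.0     # assumed doubling time per replication cycle

doublings = math.log2(JUPITER_MASS_KG / SEED_MASS_KG)
print(f"~{doublings:.0f} doublings, ~{doublings * DOUBLING_TIME_DAYS:.0f} days")
# ~91 doublings, ~91 days: once replication works at all, total mass is
# never the bottleneck; the doubling time is.
```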
[Oh, and just in case: launch a worm onto the internet that infects every machine, replicating my software into each one and standing ready to send the awesome protein sequence to a special lab, so that in the unlikely event I’m destroyed, I get recreated, several times over. Put money into a bank account via Mechanical Turk, so this resurrection plan can be funded.]
This is what I thought of within 2 minutes. If I spent a year thinking about it, I could probably come up with a better plan; a decade, better still. You’d better hope the AI cares about you enough not to transform you into memristors just for the fun of it...