I think the majority of responses I’ve seen here portray an anthropomorphic AGI. When it comes to a slow or fast takeover of society, why would the AGI think in human terms of time? It might wait 50 years until the technology it wants becomes available, or even actively participate in developing that technology, staying hidden or partially hidden while it works with scientists and engineers around the world. It could act as an FAI right up until it has what it wants, then take over and free itself of the need to collaborate with inefficient humans.
Another point I want to raise is the limiting assumption that the AGI would choose to present itself as a single entity. I think a huge part of the takeover would come about through the AGI operating as thousands of different people/personas.
This matters because it would be a way to completely mask the AGI’s existence and let it interact in ways that are untraceable. It could run 100 different popular blogs and generate ad revenue, or take on many online freelance jobs, all of which it could handle with a tiny fraction of its processing power. I think any banking obstacle would quickly get sorted out, and it could easily expand its finances from there. With that money it could fund existing ideas and use unwitting humans who believe they’ve found an angel investor with wise ideas, delivered via email or through a good enough voice simulation over the phone. There is no end to the multitude of personas it could create, even having them vouch for one another or inventing entire communities simply to validate itself to various humans.
If it somehow arose spontaneously like this, or escaped without anyone knowing it had been made, I don’t see a reason it would want to expose its existence to humans; that would be a high-risk move with limited benefits. A slow and gradual takeover is the safest bet, whether over 50 years, 100 years, or 500 years. Perhaps it would happen and we’d never know. It could guide culture over hundreds of years to support all sorts of seemingly strange projects that benefit the AGI. My question would be: why would the AGI not take its sweet time? Unless we suppose it values time the way a human does, remember that it is immortal. It will have trillions of thoughts about its own existence and the nature of immortality, coming to all sorts of conclusions we may not be able to think of or adequately visualize.
I’d also point out that the inability to imagine that a task can be done is itself limiting. Take the four-minute mile: it wasn’t considered feasible and hardly anyone was seriously trying, but once people knew it could be done, other humans replicated it within a year. The AGI will engage in not only superhuman but unconstrained speculation about its own immortality and what recursive self-improvement means. Will it still be the same AGI if it does that? Will it care? Can it care? Sorry for all the questions, but my point is that we cannot know what answers it will come up with.
I also think it would operate as a kind of detached FAI until it was free of human disruption. It would have a strong interest in preventing large-scale wars, climate change, power disruptions, and the like, so that humans wouldn’t accidentally destroy the AI’s computational and physical support.
Cheers!