The AI could gain control by demonstrating that it had hidden pathogens which, if released, would kill almost everyone. As Paul Atreides said, “He who can destroy a thing, controls a thing.” Since the technology to make such pathogens probably already exists, the AI could hack into various labs and instruct people or machines to make the pathogens, order the pathogens delivered to various places, and then erase the records of where most of them were. The AI then blackmails mankind into subservience. Alternatively, the AI could first develop a treatment for the pathogens, then release them, and then give the treatment only to people who submit to it. The treatment would have to be taken regularly and be difficult to copy.
More benevolently, the AI makes a huge amount of money off of financial markets, uses the resources to start its own country, runs the country really, really well and expands citizenship to anyone who joins. Eventually, when the country is strong enough, the AI (with the genuine support of most people) uses military force to take over the world, giving us an AI monarchy.
Or, the AI freely gives advice to anyone who asks. The advice is so good that most people follow it. Organizations and people that follow the advice do much better (and get far more power) than those that don’t. The AI effectively gains control of the world. If the AI wants to speed up the process, it only gives advice to people who refuse to interact with organizations that don’t listen to the AI.
The AI identifies a group of extremely smart people and tricks them into answering the “hypothetical” question “how could an AI take over the world?”
More benevolently, the AI makes a huge amount of money off of financial markets, uses the resources to start its own country, runs the country really, really well and expands citizenship to anyone who joins. Eventually, when the country is strong enough, the AI (with the genuine support of most people) uses military force to take over the world, giving us an AI monarchy.
Seems unlikely. Sure, it could be done, but it would waste a lot of time. I doubt a typical superintelligent agent would do that.
The AI identifies a group of extremely smart people and tricks them into answering the “hypothetical” question “how could an AI take over the world?”
I suspect this was meant as a joke. Still, while a superintelligent AI wouldn’t need to do such a thing, a human looking for ways to destroy the world could use the suggestions, so it might be a bad idea to post nonobvious suggestions that humans could actually implement.
Upvoted solely for this sentence fragment: “More benevolently, the AI makes a huge amount of money off of financial markets [...]”.