Humans are not good at coordinating and rebelling. There are plenty of people today who dislike how our corporations are run and who would like to rebel against them but can't. If all corporations are run by superintelligent AI, that does not get easier.
Anyone who challenges the status quo will just be branded a conspiracy theorist and isolated.
Individual humans are unlikely to rebel, but large nuclear countries may oppose it if they see that some countries or regions are being taken over by AI.
It will be more like the Allies against the Nazis than John Connor against the AI.
For that, you would need large nuclear countries to opt out of using AI, which puts them at a huge disadvantage compared to other large nuclear countries.
While it would be possible for a large nuclear country to do so, the large nuclear countries that make heavy use of AI would outmaneuver it.
In the US, the more powerful AI gets and the more essential its capabilities become to US companies, the harder it will be to use legal action to shut down AI.
If you can’t find the political majorities today to restrict AI, why do you think you would get them if AI becomes more economically important and powerful?
A moratorium, and air strikes on those who violate it? :)
But the important difference here is between merely working on AI research and being in the process of enslavement by a misaligned AI. The US can continue to work on AI, but if it sees signs that a non-aligned AI has started to take over another country, an all-out strike may be the only option.
You don’t know whether or not AI is aligned when it takes over decision-making.
AI takes over by being able to make better decisions. If an AI CEO can make better decisions than the human CEO, a company benefits from letting the AI CEO make the decisions.
Imagine that you have a secretive hedge fund that's run by an AI. It buys up companies and votes to move more and more of the decision-making at the companies it buys to the AI. The decisions at those companies then become much better, and their stock prices rise.
Do you think that some lawmaker will step up and try to pass a law that stops the increased economic competitiveness of those companies?
If you can't get a moratorium and air strikes today, why do you think you would get them once AI provides much more economic benefit and becomes more important for the economy to function?
I agree that advanced AI will find ways to ascend without triggering airstrikes (which would be dangerous to it).
It also means that it will appear to cooperate with humans until it has a decisive strategic advantage. Thus its robotic infrastructure has to look peaceful: maybe efficient manufacturing robots for factories, like Optimus.
Today, when the police check someone’s ID and the system tells them that there’s an arrest warrant for the person, they just arrest the person because the computer tells them to do so. There’s no need for any robots to make the arrest.
Most humans who have a job do what they are told.
Yes, AI could rule without killing humans, simply paying them for tasks. But given the recent discussion about AI killing everyone, I assume here that the AI actually is going to do this and look at how it could happen.