AI will be designed to try to seize absolute power, and take every possible measure to prevent humans from creating another AI. If your name appears on this website, you’re already on its list of people whose continued existence will be risky.
Not much risk. Hunting down irrelevant blog commenters is a greater risk than leaving them be. There isn’t much of a window during which any human is the slightest threat, and during that window going around killing people would only increase the risk to it.
The window is presumably between now and when the winner is obvious—assuming we make it that far.
IMO, there’s plenty of scope for paranoia in the interim. Looking at the logic so far, some teams will reason that unless their chosen values get implemented, much of value is likely to be lost. They will then multiply that by a billion years and a billion planets—and conclude that their competitors might really matter.
Killing people might indeed backfire—but that still leaves plenty of scope for dirty play.
No. Reread the context. This is the threat from “F”AI, not from designers. The window opens when someone clicks ‘run’.
Uh huh. So: world view difference. Corps and orgs will most likely go from 90% human to 90% machine through the well-known and gradual process of automation, gaining power as they go—and the threats from bad organisations are unlikely to be something that appears suddenly at some point.