It will be the government(s) that decide how AGI is used, not a benevolent coalition of utilitarian rationalists.
Somebody is going to make AGI and thereby control it (in the likely event it's intent-aligned; see below). And the government with jurisdiction over that company will probably seize effective control of the project as soon as it realizes the project's potential.
National-security-critical technologies are the domain of government and always have been. And AGI is the most security-relevant technology in history. Finally, politicians often don't understand new technologies, but the national security apparatus is not composed entirely of idiots.
On the economic side: we're likely to see a somewhat slow takeoff on the current trajectory. That's enough time for everyone to starve if they're all out of work before an ASI can produce technologies that make food and housing out of nothing (if its controllers want it to).
It will be the government(s) that decide how AGI is used, not a benevolent coalition of utilitarian rationalists.
Even so, the government still needs to weigh competing concerns, maintain ownership of the AGI, set up the system in such a way that they can trust it, and gain some degree of buy-in from society for the plan[1].
This seems unrealistically idealistic to me.
Thanks for the care, and the possible nod to not conflating value alignment and intent alignment! The poster seems to be assuming intent alignment, which I think is very likely right: Instruction-following AGI is easier and more likely than value aligned AGI
See my other comment with links to related discussions.
Even so, the government still needs to weigh competing concerns, maintain ownership of the AGI, set up the system in such a way that they can trust it, and gain some degree of buy-in from society for the plan[1].
Unless their plan is to use the AGI to enforce their will.