This is heavily cultural, and Elon’s proposal (let everyone grid-link themselves to their own all-powerful AI) is in line with culturally Protestant values, while the LW proposal (appoint an all-powerful council of elders who decree who is and is not worthy to use AI technology, based on their own research into the doctrine) is in line with culturally Catholic values.
Deciding between the two approaches based on which values they align with misunderstands the problem. A good strategy depends on what’s actually possible.

The idea that human/AI hybrids are competitive at acquiring resources in an environment with strong AGIs is doubtful. That means that over time all the resources and power go to the AGIs.
Human nature suggests that an all-powerful council-of-elders always becomes corrupt, so that approach might not be possible either.
Human nature is largely irrelevant to the behavior of AIs. At the same time, that’s basically saying that alignment is a hard problem.
The alignment problem is one of the key AI safety problems.