My concerns are more that it will not be possible to adequately define “human”, especially as transhuman tech develops, and that there might not be a good enough way to define what’s good for people.
As I understand it, the modest goal of building an FAI is that of giving an AGI a push in the “right” direction, what EY refers to as the initial dynamics. After that, all bets are off.