I don’t quite see how one is supposed to limit FAI<…> without the race for AI turning into a war of all against all, not just for power but for survival.
By winning the war before it starts or solving cooperation problems.
The competition you refer to isn’t prevented by proposing an especially egalitarian CEV. Being included as part of the Coherent Extrapolated Volition calculation is not sufficient reason to stand down in a fight over FAI creation.
But again this is purely because I value a diverse future. Part of my paperclip is to make sure other people get a share of the mass of the universe to paperclip.
CEV would give that result. The ‘coherence’ thing isn’t about sharing. CEV<A,B> may well decide to give all the mass of the universe to C purely because A and B can’t stand each other, while if C were included in the same evaluation, CEV<A,B,C> may well decide to do something entirely different. Sure, at least one of those agents is clearly insane, but the point is that being ‘included’ is not intrinsically important.
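To make that concrete, here is a deliberately silly toy model of my own invention (it is not CEV itself, and the utility functions u_A, u_B and u_C are made up purely to exhibit the effect): a naive aggregator that picks the allocation of mass maximising the summed utility of whichever agents are included in the evaluation. A and B each penalise the other getting anything; C, the “clearly insane” one here, actively doesn’t want the mass. Run the aggregation over {A, B} and everything goes to C; run it over {A, B, C} and C gets nothing.

```python
# Toy illustration only: a naive "sum the included agents' utilities and pick
# the best allocation" aggregator. All utilities are hypothetical and chosen
# purely to show that WHO is included changes the outcome, even for an agent
# (C) who wasn't included in the first evaluation.

from itertools import product

def allocations(step=0.1):
    """Enumerate coarse allocations (x_A, x_B, x_C) of one unit of mass."""
    n = round(1 / step)
    for a, b in product(range(n + 1), repeat=2):
        if a + b <= n:
            yield (a * step, b * step, (n - a - b) * step)

# Hypothetical utilities: A and B heavily penalise each other's share;
# C actively does not want the mass.
def u_A(x): return x[0] - 2 * x[1]
def u_B(x): return x[1] - 2 * x[0]
def u_C(x): return -3 * x[2]

def aggregate(included):
    """Pick the allocation maximising the total utility of the included agents."""
    return max(allocations(), key=lambda x: sum(u(x) for u in included))

print("Evaluation over {A, B}:   ", aggregate([u_A, u_B]))      # all mass goes to C
print("Evaluation over {A, B, C}:", aggregate([u_A, u_B, u_C])) # C gets nothing
# (Ties between A and B in the second run are broken arbitrarily by max().)
```

The only point of the sketch is that which agents are ‘included’ changes the outcome for everyone, including agents who weren’t included, which is why inclusion per se isn’t the thing worth fighting over.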