Coherent Extrapolated Volition is a bad way of approaching friendliness.
CEV assumes that human values can be made coherent. This ignores how tendencies that pull in opposite directions actually play out in practice; that tension might be a feature rather than a bug. The extrapolation part is like predicting growth without actually growing: it will end up somewhere different from where natural growth would have led. CEV also assumes that humans will, on the whole, cooperate. How should the AI handle conflicts of interest among its constituents? If another Cold War-style scenario arises, does that mean most of the AI's power is wasted on staying neutral? Or does the AI just amplify individual symmetry-breaking decisions to big scales, so that the AI isn't the one that screws everything up; it merely supplies the party at fault with the tools to do it big?
Organizations tend to form stances that are far narrower than individuals' living philosophies. Humanity can't settle on a volition broad enough to serve as a personal, action-guiding moral principle. And if a dictator is going to force a “balanced” view on everyone, why not go all the way and adopt Imposed Corrective Values?
Opposed utility functions can instead be sharded into separate light cones.
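To make that last point concrete, here is a minimal toy sketch in Python (my own illustration with invented utility functions, not anything from the CEV literature): two agents whose utilities are exactly opposed cannot both be satisfied in a shared world, but giving each its own causally disconnected region dissolves the conflict rather than forcing coherence.

```python
# Toy illustration (hypothetical model): two agents whose utility functions
# are exactly opposed over one shared state variable x in [0, 1].

def u_a(x: float) -> float:
    """Agent A's utility: wants x as high as possible."""
    return x

def u_b(x: float) -> float:
    """Agent B's utility: exactly opposed, wants x as low as possible."""
    return 1.0 - x

# Shared world: one state must serve both agents. Their utilities sum to a
# constant, so every "compromise" just shifts value from one to the other.
for x in (0.0, 0.5, 1.0):
    print(f"shared x={x:.1f}: A={u_a(x):.1f}, B={u_b(x):.1f}, "
          f"total={u_a(x) + u_b(x):.1f}")

# Sharded worlds ("separate light cones"): each agent gets its own causally
# disconnected state, so both optima are reached simultaneously.
x_a, x_b = 1.0, 0.0
print(f"sharded: A={u_a(x_a):.1f}, B={u_b(x_b):.1f}, "
      f"total={u_a(x_a) + u_b(x_b):.1f}")
```

In the shared world the total is pinned at 1.0 no matter what; in the sharded worlds it reaches 2.0, not because the values were reconciled but because they never have to interact.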