CEV is a sketch of an operationalization of carefully deciding which goals end up being pursued, an alignment target. Its content doesn’t depend on the philosophical status of such goals or on how CEV gets instantiated, such as whether it gets used directly in the 21st century by the first AGIs, or comes about later, when we need to get serious about making use of the cosmic endowment.
My preferred implementation of CEV (in the spirit of exploratory engineering) looks like a large collection of mostly isolated simulated human civilizations, where AGIs individually assigned to them predict CEV in many different value-laden ways (the current understanding of values influences which details are predicted with morally relevant accuracy) and use those predictions to guide their civilizations, to whatever extent the rules of setting up a particular civilization allow. As a whole, this gives a picture of path-dependence and tests prediction of CEV within CEV, so that it becomes possible to make more informed decisions about how to aggregate the results of different initial conditions (seeking coherence), and about the choice of initial conditions.
The primary issue with this implementation is potential mindcrime, though it might be possible to selectively modulate the precision used to simulate specific parts of these civilizations to reduce the moral weight of simulated undesirable events, or to have the civilization-guiding AGIs intervene where necessary.
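To make the shape of this proposal easier to see, here is a minimal toy sketch (in Python) of the ensemble structure just described. Everything in it is a hypothetical placeholder of my own naming (SimulatedCivilization, GuardianAGI, InitialConditions, aggregate_coherent_volition), and the actually hard parts (simulating a civilization, predicting its CEV, judging coherence) are stubbed out; it only shows how per-civilization prediction, rule-limited guidance, a precision knob, and aggregation across initial conditions would fit together.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class InitialConditions:
    seed: int
    intervention_rules: str        # what the guiding AGI is allowed to do
    simulation_precision: float    # lower precision ~ less morally weighty detail

@dataclass
class CEVEstimate:
    values: dict[str, float]      # toy stand-in for an extrapolated volition
    confidence: float

class SimulatedCivilization:
    def __init__(self, conditions: InitialConditions):
        self.conditions = conditions
        self.history: list[str] = []

    def step(self, guidance: CEVEstimate | None) -> None:
        # Advance one epoch; guidance applies only insofar as the rules permit.
        self.history.append(f"epoch guided={guidance is not None}")

class GuardianAGI:
    """One AGI per civilization, predicting CEV in its own value-laden way."""

    def __init__(self, value_assumptions: dict[str, float]):
        self.value_assumptions = value_assumptions

    def predict_cev(self, civ: SimulatedCivilization) -> CEVEstimate:
        # The current understanding of values determines which details get
        # predicted with morally relevant accuracy; stubbed out here.
        return CEVEstimate(values=dict(self.value_assumptions), confidence=0.5)

def run_ensemble(configs: list[tuple[InitialConditions, dict[str, float]]],
                 epochs: int) -> list[CEVEstimate]:
    """Run many mostly isolated civilizations from different initial conditions."""
    estimates = []
    for conditions, assumptions in configs:
        civ, agi = SimulatedCivilization(conditions), GuardianAGI(assumptions)
        for _ in range(epochs):
            estimate = agi.predict_cev(civ)
            allowed = conditions.intervention_rules != "hands-off"
            civ.step(estimate if allowed else None)
        estimates.append(agi.predict_cev(civ))
    return estimates

def aggregate_coherent_volition(estimates: list[CEVEstimate]) -> dict[str, float]:
    # "Seeking coherence": a placeholder confidence-weighted average over the
    # value components appearing in the runs' final estimates.
    keys = set().union(*(e.values for e in estimates)) if estimates else set()
    return {k: sum(e.values.get(k, 0.0) * e.confidence for e in estimates)
            for k in keys}
```

The point of the sketch is the division of labor, not any of the stubs: prediction happens per civilization under that civilization’s rules, and coherence-seeking happens only afterwards, across the whole ensemble.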
The basic problem is that it assumes there is an objective moral reality, and we have little evidence of that. It’s very possible that morals are subjective, which would outright make CEV non-viable.
Do you mean by “objective moral reality” and morals “being subjective” something that interacts (at all) with the above description of CEV? Are you thinking of a very different meaning of CEV?
I think I might be thinking of a very different kind of CEV.