What I’m trying to do is find some way to fix the goalposts: find a set of conditions on CEV that would suffice. Whether such a CEV actually exists and how to build it are questions for later. Let’s just pile up constraints until a sufficient set is reached. So, let’s assume that:
“Unanimous” CEV exists
And is unique
And is definable via some easy, obviously correct, and unique process, to be discovered in the future,
And it basically does what I want it to do (fulfil the universal wishes of people, minimize interference otherwise),
would you say that running it is uncontroversial? If not, what other conditions are required?
No, I wouldn’t expect running it to be uncontroversial, but I would endorse running it.
I can’t imagine any world-changing event that would be uncontroversial, if I assume that the normal mechanisms for generating controversy aren’t manipulated (in which case anything might be uncontroversial).
Why is it important that it be uncontroversial?
I’m not sure. But it seems like a useful property for an AI under development to have. It might allow centralizing the development. Or something.
OK, you’re right that a complete lack of controversy is impossible, because there are always trolls, cranks, conspiracy theorists, etc. But is it possible to reach a consensus among all sufficiently well-informed, sufficiently intelligent people? Where “sufficiently” is not too high a threshold?
There probably exists (hypothetically) some plan such that it wouldn’t seem unreasonable to me to declare anyone who doesn’t endorse that plan either insufficiently well-informed or insufficiently intelligent.
In fact, there probably exist several such plans, many of which would have results I would subsequently regret, and some of which would not.
I think seeking and refining such plans would be a worthy goal. For one thing, it would make LW discussions more constructive. Currently, as far as I can tell, CEV is very broadly defined, and its critics usually point at some feature and cast (legitimate) doubt on it. Very soon, CEV is apparently full of holes and one may wonder why it has not been thrown away already. But these may not be real holes, just places where we do not know enough yet. If these points are identified and stated in the form of questions of fact, which can be answered by future research, then a global plan, in the form of a decision tree, could be made and reasoned about. That would be definite progress, I think.
Agreed that an actual concrete plan would be a valuable thing, for the reasons you list among others.
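To make the decision-tree idea concrete, here is a minimal sketch of what such a plan might look like: each internal node is an open question of fact, each branch a possible answer, and each leaf the action that would follow once the questions above it are settled. The specific questions and actions here are purely illustrative placeholders, not actual proposals.

```python
# A minimal sketch of a "plan as a decision tree": internal nodes are open
# questions of fact, branches are possible answers, leaves are follow-up actions.
# The questions and actions below are illustrative placeholders only.

from dataclasses import dataclass
from typing import Dict, List, Union


@dataclass
class Leaf:
    action: str  # what to do once the questions above this leaf are settled


@dataclass
class Question:
    text: str                    # an open question of fact, answerable by future research
    branches: Dict[str, "Node"]  # maps each possible answer to the next node


Node = Union[Question, Leaf]

# A toy plan built from the assumptions discussed above (placeholders, not proposals).
plan = Question(
    text="Does a unanimous CEV exist?",
    branches={
        "yes": Question(
            text="Is it unique?",
            branches={
                "yes": Leaf(action="look for an easy, obviously correct extraction process"),
                "no": Leaf(action="work out how to choose among the candidates"),
            },
        ),
        "no": Leaf(action="fall back to some weaker aggregation of preferences"),
    },
)


def open_questions(node: Node) -> List[str]:
    """Collect every unresolved question of fact in the plan, in tree order."""
    if isinstance(node, Leaf):
        return []
    return [node.text] + [q for child in node.branches.values() for q in open_questions(child)]


print(open_questions(plan))
# -> ['Does a unanimous CEV exist?', 'Is it unique?']
```

Arguing about the plan then becomes arguing about specific leaves and specific questions of fact, rather than about CEV as a whole.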