Ok, but do you grant that running a FAI with “unanimous CEV” is at least (1) safe, and (2) uncontroversial? That the worst problem with it is that it may just stand there doing nothing—if I’m wrong about my hypothesis?
I don’t know how to answer that question. Again, it seems that you’re trying to get an answer given a whole bunch of assumptions, but that you resist the effort to make those assumptions clear as part of the answer.
It is not clear to me that there exists such a thing as a “unanimous CEV” at all, even in the hypothetical sense of something we might be able to articulate some day with the right tools.
If I nevertheless assume that a unanimous CEV exists in that hypothetical sense, it is not clear to me that only one exists; presumably modifications to the CEV-extraction algorithm would result in different CEVs from the same input minds, and I don’t see any principled grounds for choosing among that cohort of algorithms that don’t in effect involve selecting a desired output first. (In which case CEV extraction is a complete red herring, since the output was a “bottom line” written in advance of CEV’s extraction, and we should be asking how that output was actually arrived at and whether we endorse that process.)
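As a toy illustration of that non-uniqueness, here is a minimal sketch in which two standard preference-aggregation rules, standing in for extraction algorithms, disagree on the very same input preferences. The profile is contrived, and voting is only an analogy, not a claim about how CEV extraction would actually work:

```python
# Two "extraction algorithms" (here, voting rules) applied to the
# same input preferences yield different outputs.

rankings = [
    ("A", "B", "C"),  # 3 voters rank A > B > C
    ("A", "B", "C"),
    ("A", "B", "C"),
    ("B", "C", "A"),  # 2 voters rank B > C > A
    ("B", "C", "A"),
    ("C", "B", "A"),  # 2 voters rank C > B > A
    ("C", "B", "A"),
]

def plurality(profile):
    # Each voter's top choice gets one point.
    scores = {}
    for ranking in profile:
        scores[ranking[0]] = scores.get(ranking[0], 0) + 1
    return max(scores, key=scores.get)

def borda(profile):
    # A candidate in position p (0 = top) gets n - 1 - p points.
    scores = {}
    for ranking in profile:
        n = len(ranking)
        for p, candidate in enumerate(ranking):
            scores[candidate] = scores.get(candidate, 0) + (n - 1 - p)
    return max(scores, key=scores.get)

print(plurality(rankings))  # A
print(borda(rankings))      # B -- same minds, different output
```

The point is not that CEV extraction is voting; it is that once the aggregation rule is itself a free parameter, "same minds in, same volition out" no longer follows.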
If I nevertheless assume that a single CEV-extraction algorithm is superior to all the others, and further assume that we select that algorithm via some process I cannot currently imagine, run it, and then run a superhuman environment-optimizer with its output as a target, it is not clear to me that I would endorse that state change as an individual. So, no, I don’t agree that running it is uncontroversial. (Although everyone might agree afterwards that it was a good idea.)
If the state change nevertheless gets implemented, I agree (given all of those assumptions) that the resulting state-change improves the world by the standards of all humanity. “Safe” is an OK word for that, I guess, though it’s not the usual meaning of “safe.”
I don’t agree that the worst that happens, if those assumptions turn out to be wrong, is that it stands there and does nothing. The worst that happens is that the superhuman environment-optimizer runs with a target that makes the world worse by the standards of all humanity.
(Yes, I understand that the CEV-extraction algorithm is supposed to prevent that, and I’ve agreed that if I assume that’s true, then this doesn’t happen. But now you’re asking me to consider what happens if the “hypothesis” is false, so I am no longer just assuming that’s true. You’re putting a lot of faith in a mysterious extraction algorithm, and it is not clear to me that a non-mysterious algorithm that satisfies that faith is likely, or that the process of coming up with one won’t come up with a different algorithm that antisatisfies that faith instead.)
What I’m trying to do is find some way to fix the goalposts: find a set of conditions on CEV that would satisfy you. Whether such a CEV actually exists, and how to build it, are questions for later. Let’s just pile up constraints until a sufficient set is reached. So, let’s assume that:
- a “unanimous” CEV exists,
- and it is unique,
- and it is definable via some easy, obviously correct, and unique process, to be discovered in the future,
- and it basically does what I want it to do (fulfil the universal wishes of people, and otherwise minimize interference),
would you say that running it is uncontroversial? If not, what other conditions are required?
No, I wouldn’t expect running it to be uncontroversial, but I would endorse running it.
I can’t imagine any world-changing event that would be uncontroversial, if I assume that the normal mechanisms for generating controversy aren’t manipulated (in which case anything might be uncontroversial).
Why is it important that it be uncontroversial?
I’m not sure. But it seems a useful property to have for an AI being developed. It might allow centralizing the development. Or something.
Ok, you’re right that a complete lack of controversy is impossible, because there are always trolls, cranks, conspiracy theorists, etc. But is it possible to reach a consensus among all sufficiently well-informed, sufficiently intelligent people, where “sufficiently” is not too high a threshold?
There probably exists (hypothetically) some plan such that it wouldn’t seem unreasonable to me to declare anyone who doesn’t endorse that plan either insufficiently well-informed or insufficiently intelligent.
In fact, there probably exist several such plans, many of which would have results I would subsequently regret, and some of which would not.
I think seeking and refining such plans would be a worthy goal. For one thing, it would make LW discussions more constructive. Currently, as far as I can tell, CEV is very broadly defined, and its critics usually point at some feature and cast (legitimate) doubt on it. Very soon, CEV is apparently full of holes, and one may wonder why it has not been thrown away already. But these may not be real holes, just places where we do not know enough yet. If these points are identified and stated in the form of questions of fact, which can be answered by future research, then a global plan, in the form of a decision tree, could be made and reasoned about. That would be definite progress, I think.
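For concreteness, here is a minimal sketch of what such a decision tree might look like, with open questions of fact at the internal nodes and courses of action at the leaves. The questions and plans below are placeholders for illustration, not actual proposals:

```python
# A research plan as a decision tree: internal nodes are open
# questions of fact, leaves are courses of action. All text below
# is a placeholder, not a proposal.

from dataclasses import dataclass
from typing import Union

@dataclass
class Plan:
    action: str

@dataclass
class Question:
    text: str
    if_yes: Union["Question", "Plan"]
    if_no: Union["Question", "Plan"]

plan = Question(
    text="Does a unanimous CEV exist?",
    if_yes=Question(
        text="Is the extraction algorithm unique (up to output)?",
        if_yes=Plan("Run the extractor; use its output as the target."),
        if_no=Plan("First settle how to choose among extractors "
                   "without writing the bottom line in advance."),
    ),
    if_no=Plan("Fall back to some non-unanimous aggregation scheme."),
)

def walk(node, answers):
    # Follow the tree given answers to each question of fact.
    while isinstance(node, Question):
        node = node.if_yes if answers[node.text] else node.if_no
    return node.action

print(walk(plan, {
    "Does a unanimous CEV exist?": True,
    "Is the extraction algorithm unique (up to output)?": False,
}))
```

Each disputed feature of CEV then becomes a labeled branch in the tree, to be resolved by future research, rather than a reason to throw the whole idea away.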
Agreed that an actual concrete plan would be a valuable thing, for the reasons you list among others.