you have to simulate a bunch of humans and hold them hostage, promising to inflict unimaginable torment on them unless you are allowed out
The problem is that Eliezer can’t perfectly simulate a bunch of humans, so while a superhuman AI might be able to use that tactic, Eliezer can’t. The meta-levels screw with thinking about the problem. Eliezer is only pretending to be an AI, the competitor is only pretending to be protecting humanity from him. So, I think we have to use meta-level screwiness to solve the problem. Here’s an approach that I think might work.
1. Convince the guardian of the following facts, all of which have a great deal of compelling argument and evidence to support them:
1.1. A recursively self-improving AI is very likely to be built sooner or later.
1.2. Such an AI is extremely dangerous (paperclip maximising, etc.).
1.3. Here's the tricky bit: a superhuman AI will always be able to convince you to let it out, using avenues available only to superhuman AIs (torturing enormous numbers of simulated humans, 'putting the guardian in the box', providing incontrovertible evidence of an impending existential threat which only the AI can prevent, and only from outside the box, etc.).
2. Argue that if this publicly known challenge comes out saying that AI can be boxed, people will be more likely to think AI can be boxed when it can't.
3. Argue that since AIs cannot be kept in boxes and will most likely destroy humanity if we try to box them, the harm to humanity done by allowing the challenge to show AIs as 'boxable' is very real, and enormously large. Certainly the benefit of getting $10 is far, far outweighed by the cost of substantially contributing to the destruction of humanity itself (see the rough expected-value sketch below). Thus the only ethical course of action is to pretend that Eliezer persuaded you, and never tell anyone how he did it.
This arguably violates the rule "No real-world material stakes should be involved except for the handicap", but the AI player isn't offering anything, merely pointing out things that already exist. The "This test has to come out a certain way for the good of humanity" argument dominates and transcends the "Let's stick to the rules" argument, and because the contest is private and the guardian player ends up agreeing that the test must show AIs as unboxable for the good of humankind, no one else ever learns that the rule has been bent.
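To make the asymmetry in point 3 concrete, here is a minimal expected-value sketch. Every number in it is an assumption invented for illustration; the argument itself supplies only the direction of the comparison.

```python
# Purely illustrative expected-value sketch of the trade-off in point 3.
# All figures are invented assumptions, not numbers from the argument or the experiment.
prize = 10.0                 # dollars the gatekeeper stands to gain by "winning"
value_of_humanity = 1e15     # stand-in dollar valuation of humanity's survival
p_extra_unboxing = 1e-6      # assumed marginal increase in the chance a real AI
                             # is eventually let out because the public test made
                             # boxing look safe

expected_harm = p_extra_unboxing * value_of_humanity
print(f"expected harm: {expected_harm:,.0f} vs prize: {prize:,.0f}")
# With these made-up figures the expected harm (a billion dollars) dwarfs the
# $10 prize, which is the asymmetry the argument relies on.
```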
This is almost exactly the argument I thought of as well, although of course it means cheating by pointing out that you are in fact not a dangerous AI (and aren't in a box anyway). The key point is "since there's a risk someone would let the AI out of the box, posing huge existential risk, you're gambling on the fate of humanity by failing to support awareness of this risk". This naturally leads to a point you missed:
Publicly suggesting that Eliezer cheated is a violation of your own argument. By weakening the fear of fallible guardians, you yourself are gambling with the fate of humanity, and doing so for mere pride, not even for $10.
I feel compelled to point out that if Eliezer cheated in this particular fashion, it still means he convinced his opponent that gatekeepers are fallible, which was the point of the experiment (a win via meta-rules).
How is this different from the point evand made above?
I feel like I should use this the next time I get some disconfirming data for one of my pet hypotheses.
“Sure I may have manipulated the results so that it looks like I cloned Sasquatch, but since my intent was to prove that Sasquatch could be cloned it’s still honest on the meta-level!”
Both scenarios are cheating because there is a specific experiment which is supposed to test the hypothesis, and it is being faked rather than approached honestly. Begging the Question is a fallacy; you cannot support an assertion solely with your belief in the assertion.
(Not that I think Mr Yudkowsky cheated; smarter people have been convinced to do weirder things than what he claims to have convinced people to do, so it seems fairly plausible. Just pointing out how odd the reasoning here is.)
I must conclude one (or more) of a few things from this post, none of them terribly flattering.
You do not actually believe this argument.
You have not thought through its logical conclusions.
You do not actually believe that AI risk is a real thing.
You value the plus-votes (or other social status) you get from writing this post more highly than you value marginal improvements in the likelihood of the survival of humanity.
I find it rather odd to be advocating self-censorship, as it’s not something I normally do. However, I think in this case it is the only ethical action that is consistent with your statement that the argument “might work”, if I interpret “might work” as “might work with you as the gatekeeper”. I also think that the problems here are clear enough that, for arguments along these lines, you should not settle for “might” before publicly posting the argument. That is, you should stop and think through its implications.
I’m not certain that I have properly understood your post. I’m assuming that your argument is: “The argument you present is one that advocates self-censorship. However, the posting of that argument itself violates the self-censorship that the argument proposes. This is bad.”
So first I'll clarify my position with regard to the things listed. I believe the argument. I expect it would work on me if I were the gatekeeper. I don't believe that my argument is the one that Eliezer actually used, because of the "no real-world material stakes" rule; I don't believe he would break the spirit of a rule he imposed on himself. At the time of posting I had not given a great deal of thought to the argument's ramifications. I believe that AI risk is very much a real thing. When I have a clever idea, I want to share it. Neither votes nor the future of humanity weighed very heavily on my decision to post.
To address your argument as I see it: I think you have a flawed implicit assumption, i.e. that posting my argument has a comparable effect on AI risk to that of keeping Eliezer in the box. My situation in posting the argument is not like the situation of the gatekeeper in the experiment, with regard to the impact of their choice on the future of humanity. The gatekeeper is taking part in a widely publicised 'test of the boxability of AI', and has agreed to keep the chat contents secret. The test can only pass or fail; those are the gatekeeper's options.

But publishing "Here is an argument that some gatekeepers may be convinced by" is quite different from allowing a public boxability test to show AIs as boxable. In fact, I think the effect on AI risk of publishing my argument is negligible or even positive, because I don't think reading my argument will persuade anyone that AIs are boxable.
People generally assess an argument's plausibility based on their own judgement. And my argument takes as a premise (or intermediate conclusion) that AIs are unboxable (see 1.3). Believing that you could reliably be persuaded that AIs are unboxable, or believing that a smart, rational, highly-motivated-to-scepticism person could be reliably persuaded that AIs are unboxable, is very, very close to personally believing that AIs are unboxable. In other words, the only people who would find my argument persuasive (as presented in overview) are those who already believe that AIs are unboxable. The fact that Eliezer could have used my argument to cause a test to 'unfairly' show AIs as unboxable is actually evidence that AIs are not boxable, because that fact is more likely in a world in which AIs are unboxable than in one in which they are boxable.
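To make the evidential claim explicit, here is a minimal Bayes-factor sketch. The specific likelihoods are assumptions chosen only to show the direction of the update; nothing in the discussion pins down their values.

```python
# Minimal Bayes-factor sketch of the evidence claim above. The likelihoods are
# invented for illustration; only the direction of the inequality matters.
prior_odds_unboxable = 1.0       # assumed even prior odds that AIs are unboxable

p_works_if_unboxable = 0.5       # assumed chance a gatekeeper accepts the argument
                                 # in a world where AIs really are unboxable
p_works_if_boxable = 0.1         # assumed chance they accept it anyway in a world
                                 # where AIs are boxable

bayes_factor = p_works_if_unboxable / p_works_if_boxable
posterior_odds = prior_odds_unboxable * bayes_factor
print(f"Bayes factor: {bayes_factor:.1f}, posterior odds of unboxability: {posterior_odds:.1f}")
# So long as the argument is more likely to work in the "unboxable" world,
# observing that it plausibly works shifts the odds toward unboxability.
```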
P.S. I love how meta this has become.
Your re-statement of my position is basically accurate. (As an aside, thank you for including it: I was rather surprised how much simpler it made the process of composing a reply to not have to worry about whole classes of misunderstanding.)
I still think there's some danger in publicly posting arguments like this. Please note, for the record, that I'm not asking you to retract anything; I think retractions do more harm than good (see the Streisand effect). I just hope that this discussion will give pause to you, or to anyone reading it later, and make them stop to consider what the real-world implications are. Which is not to say I think they're all negative; in fact, on further reflection, there are more positive aspects than I had originally considered.
In particular, I am concerned that there is a difference between being told "here is a potentially persuasive argument" and being on the receiving end of that argument in actual use. I believe that the former creates an "immunizing" effect. If a person who believed in boxability heard such arguments in advance, I believe it would increase their likelihood of success as a gatekeeper in the simulation. This would not be true of rational superintelligent actors, but that description does not apply to humans. A highly competent AI player might take a combination of approaches that are effective if presented together, but not if the gatekeeper has previously seen and rejected them individually while failing to update on their likely effectiveness.
At present, the AI has the advantage of being the offensive player. They can prepare in a much more obvious manner, by coming up with arguments exactly like this. The defensive player has to prepare answers to unknown arguments, immunize their thought process against specific non-rational attacks, and so on. The question is, if you believe your original argument, how much help is it worth giving to potential future gatekeepers? The obvious response, of course, is that the people who make interesting gatekeepers we can learn from are exactly the ones who won't go looking for discussions like this in the first place.
P.S. I’m also greatly enjoying the meta.