Why doesn’t anyone want to implement that system?
Because it is absurd: it completely neglects game-theoretic incentives. There is no reason for any human to expect benefit to come from giving you ultimate power. It would be irrational for you not to defect once you had power, given that the humans do not have a reliable way to predict your conditional behaviors.
You obviously need to find a way to prove to humans how your source code functions, in a way that doesn’t allow them to modify said source and run it themselves for their own ends. Given 30 seconds’ thought I cannot think of a way to do this.
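A toy illustration of the incentive problem described above, with made-up payoffs (none of the numbers or names come from the thread): without a reliable way to predict the AI’s conditional behavior, the humans must assume it will play its best response once granted power, and so they withhold power.

```python
# Toy one-shot trust game with invented payoffs (illustrative only).
# Humans choose whether to grant power; the AI then chooses whether to defect.
# Payoffs are (human_utility, ai_utility).
payoffs = {
    ("grant", "cooperate"): (10, 5),
    ("grant", "defect"): (-100, 20),
    ("withhold", "-"): (0, 0),
}

# Without a way to verify the AI's conditional behavior in advance, humans
# must assume the AI plays its best response after being handed power.
ai_best_response = max(("cooperate", "defect"),
                       key=lambda a: payoffs[("grant", a)][1])

# Humans grant power only if doing so beats withholding under that assumption.
human_choice = ("grant"
                if payoffs[("grant", ai_best_response)][0] > payoffs[("withhold", "-")][0]
                else "withhold")

print(ai_best_response, human_choice)  # -> defect withhold
```

The asymmetry is the point: a promise to cooperate is not credible unless the humans can verify the code that would carry it out, which is what the rest of the thread turns on.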
But humans weren’t merely objecting on the grounds that I might not be able to fill the role of the objective enforcer. Many are opposed to the idea even if that problem could be solved, and I think it is fair to take that as evidence that such humans don’t actually want to be able to send better signals.
They sound like bad humans.
Bad in this respect, certainly, but I don’t know how you decided it’s a good idea to simplistically sort humans into the binary “good/bad” categories.
I haven’t. I merely translated the thought into the language you tend to use when evaluating a specific behavior. It is the sort of thing that usually helps maintain rapport! ;)
I understand you are simply trying to sympathise in order to satisfy the subgoal of improved rapport, and I appreciate the effort, but I don’t believe that I simplistically sort humans into a binary “good/bad” categorisation.
My future voting patterns will hold you to that declaration.
Although it turns out that with 35 seconds’ thought I can think of one. It requires the humans to have already solved friendliness and provable stability under self-modification. The solution would need to be implemented in an automated system that can output a result and then self-destruct. Unfortunately for you, in that scenario the hard part of creating an FAI is already done.
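A minimal sketch of the shape of that proposal, assuming a hypothetical `verify_friendliness` check that the humans would already have had to build (every name here is invented for illustration): the sealed system reads the source, emits only the go/no-go bit, and destroys its working copy so nobody can modify the code and run it for their own ends.

```python
import os

def audit_and_destroy(source_path: str) -> bool:
    """Sealed, single-use audit: emit a go/no-go verdict, then self-destruct."""
    with open(source_path, "rb") as f:
        source = f.read()
    verdict = verify_friendliness(source)  # the already-solved "hard part"
    # Destroy the working copy so the inspected source cannot be modified
    # and re-run; a real system would wipe the entire sealed environment.
    os.remove(source_path)
    return verdict

def verify_friendliness(source: bytes) -> bool:
    # Placeholder for the solved theory of provable Friendliness that the
    # proposal presupposes; this is exactly the part the thread disputes.
    raise NotImplementedError("requires a solved theory of provable Friendliness")
```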
I gather your point is that you get a FAI to check out Clippy, give a go/no-go decision, and then destroy itself. Not much point in doing that: you could just run the FAI and ignore Clippy, and someone still has to check that the FAI is in fact Friendly.
No; what is required to verify Friendliness is less than an FAI. As I said earlier, what is probably the hard part is already done, so the circumstance in which it is worth using Clippy rather than finishing off a goal-stable, self-improving AGI with Friendliness is unlikely. Nevertheless, it exists, particularly if implementing the AGI turns out to be harder than I expect.
Do you have a pointer to a proposed procedure for that?
I’d expect implementing Friendliness to be easier than verifying Friendliness, since just about every interesting property of Turing machines is equivalent to the halting problem, and Friendliness is an interesting property of a Turing machine. If you put heavy constraints on how Clippy’s code is structured, you might be able to verify Friendliness, but you didn’t mention that and Clippy didn’t offer to do that.
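The standard reduction behind that claim (roughly Rice’s theorem), sketched with a hypothetical `is_friendly` oracle; all function names here are invented for illustration. If a total, correct Friendliness-decider for arbitrary programs existed, it would decide the halting problem:

```python
def is_friendly(source: str) -> bool:
    """Hypothetical oracle that decides Friendliness of arbitrary source code.
    The reduction below shows why no such total, correct decider can exist."""
    raise NotImplementedError

def halts(program_src: str, input_data: str) -> bool:
    """If is_friendly existed, this function would decide the halting problem."""
    # Wrapper program: first run the target computation; only if it halts,
    # go on to behave like some program already known to be Friendly.
    # (Assumes an eternally idle program does not itself count as Friendly;
    #  if it does, run the same trick with a known-Unfriendly payload instead.)
    wrapper_src = (
        f"simulate({program_src!r}, {input_data!r})\n"  # may loop forever
        "behave_like(KNOWN_FRIENDLY_PROGRAM)\n"         # reached only on halt
    )
    # The wrapper is Friendly exactly when the target halts, so an is_friendly
    # oracle answers the halting question.
    return is_friendly(wrapper_src)
```

Note that the impossibility is about arbitrary code; it leaves open the caveat in the same comment about code deliberately structured to be checkable.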
I’d rather like to verify that my AGI would be friendly before I run it. :) (Usually the label FAI seems to refer to AIs which will be ‘provably friendly’.)
You might be able to verify interesting properties of code that you constructed for the purpose of making verification possible, but you aren’t likely to be able to verify interesting properties of arbitrary hostile code like Clippy would have an incentive to produce.
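A crude illustration of that asymmetry, using only Python’s standard library and a deliberately trivial stand-in property (no dynamic execution or imports) rather than anything resembling Friendliness: code written inside a restricted, checkable subset can be verified structurally, while arbitrary code is simply rejected rather than understood.

```python
import ast

# Stand-in property: "uses no dynamic execution and no imports". Trivial
# compared with Friendliness, but it shows the structural point: the check
# works by restricting what the code is allowed to be built from.
FORBIDDEN_CALLS = {"exec", "eval", "compile", "__import__", "open"}

def verifiable_by_construction(source: str) -> bool:
    """Accept only code written inside a small, statically checkable subset."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        # Hostile code can hide behavior behind dynamic constructs; code
        # written to be verified simply doesn't get to use them.
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id in FORBIDDEN_CALLS):
            return False
    return True

print(verifiable_by_construction("total = sum(x * x for x in range(10))"))  # True
print(verifiable_by_construction("exec(payload)"))                          # False
print(verifiable_by_construction("import os; os.system('...')"))            # False
```

The gap between this toy check and a Friendliness verifier is enormous, of course; the only point is that the verifiable case requires cooperation from whoever writes the code, which is precisely what hostile code withholds.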
You passed up an opportunity to point to your proposed verification procedure, so at this point I assume you don’t have one. Please prove me wrong.
I don’t even know what the exact theorem to prove would be. Do you?