But when you are considering a person who agrees with that theory and makes a decision based on it, agreement with the theory fully explains that decision; this is a much better explanation than most of the ones people are circulating here.
You appear to be arguing that a bad decision is somehow less bad if the reasoning used to reach it was internally consistent (“carefully, correctly wrong”).
Here, I’m talking about factual explanation, not normative estimation. The actions are explained by holding a certain belief, better than by alternative hypotheses. Whether they were correct is a separate question.
At that point, disagreement about the decision must be resolved by arguing about the theory, but that’s not easy.
No, because the decision is tested against reality.
You’d need to explain this step in more detail. I was discussing a communication protocol, where does “testing against reality” enter that topic?
Ah, I thought you were talking about whether the decision solved the problem, not whether the failed decision was justifiable in terms of the theory.
I do think that if a decision theory leads to quite as spectacular a failure in practice as this one did, then the decision theory is strongly suspect.
As such, whether the decision was justifiable is less interesting except in terms of revealing the thinking processes of the person doing the justification (clinging to a pet decision theory, etc.).
I do think that if a decision theory leads to quite as spectacular a failure in practice as this one did, then the decision theory is strongly suspect.
“Belief in the decision being a failure is an argument against adequacy of the decision theory”, is simply a dual restatement of “Belief in the adequacy of the decision theory is an argument for the decision being correct”.
“Belief in the decision being a failure is an argument against adequacy of the decision theory”, is simply a dual restatement of “Belief in the adequacy of the decision theory is an argument for the decision being correct”.
This statement confuses me: you appear to be saying that if I believe strongly enough in the forbidden post having been successfully suppressed, then censoring it will not in fact have caused it to propagate widely, nor to become an object of fascination that dealt a reputational hit to LessWrong and hence SIAI. This, of course, makes no sense.
I do not understand how this matches with the effects observable in reality, where these things do in fact appear to have happened. Could you please explain how one tests this result of the decision theory, if not by matching it against what actually happened? That being what I’m using to decide whether the decision worked or not.
Keep in mind that I’m talking about an actual decision and its actual results here. That’s the important bit.
If you believe that “decision is a failure” is evidence that the decision theory is not adequate, you believe that “decision is a success” is evidence that the decision theory is adequate.
Since a decision theory’s adequacy is determined by how successful its decisions are, you appear to be saying “if a decision theory makes a bad decision, it is a bad decision theory”, which is tautologically true.
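To make the duality concrete, here is a minimal Bayesian sketch (the notation is introduced here, not taken from anyone in the thread): write A for “the decision theory is adequate” and F for “the decision failed”, and assume adequate theories fail less often.

```latex
% Notation (introduced here): A = "the decision theory is adequate",
%                             F = "the decision failed".
% Assumption: adequate theories fail less often, P(F \mid A) < P(F \mid \neg A).
P(A \mid F)
  = \frac{P(F \mid A)\,P(A)}{P(F \mid A)\,P(A) + P(F \mid \neg A)\,P(\neg A)}
  < P(A),
\qquad
P(A \mid \neg F) > P(A).
```

Observing a failure lowers the posterior on the theory’s adequacy exactly to the degree that prior confidence in its adequacy raised the expectation of success; the two statements are one likelihood relation read in opposite directions.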
Correct me if I’m wrong, but Vladimir_Nesov is not interested in whether the decision theory is good or bad, so restating an axiom of decision theory evaluation is irrelevant.
The decision was made by a certain decision theory. The factual question “was the decision-maker holding to this decision theory in making this decision?” is entirely unrelated to the question “should the decision-maker hold to this decision theory given that it makes bad decisions?”. To suggest otherwise blurs the prescriptive/descriptive divide, which is what Vladimir_Nesov is referring to when he says
Here, I’m talking about factual explanation, not normative estimation.
If you believe that “decision is a failure” is evidence that the decision theory is not adequate, you believe that “decision is a success” is evidence that the decision theory is adequate.
I believe that if the decision theory clearly led to an incorrect result (which it clearly did in this case, despite Vladimir Nesov’s energetic equivocation), then it is important to examine the limits of the decision theory.
As I understand it, the purpose of bothering to advocate TDT is that it beats CDT in the hypothetical case of dealing with Omega (who does not exist), and is therefore more robust. If so, then this failure in a non-hypothetical situation suggests a flaw in its robustness, and it should be regarded as less reliable than it was previously.
As I understand it, the purpose of bothering to advocate TDT is that it beats CDT in the hypothetical case of dealing with Omega (who does not exist), and is therefore more robust. If so, then this failure in a non-hypothetical situation suggests a flaw in its robustness, and it should be regarded as less reliable than it was previously.
The decision you refer to here… I’m assuming this is still the Eliezer->Roko decision? (This discussion is not the most clearly presented.) If so, for your purposes you can safely consider ‘TDT/CDT’ irrelevant. While acausal (TDTish) reasoning is at play in establishing a couple of the important premises, it plays no part in the reasoning that you actually seem to be criticising.
I.e., the problems you refer to here are not the fault of TDT or of abstract reasoning at all, just plain old human screw-ups and hasty reactions.
I’m assuming this is still the Eliezer->Roko decision? (This discussion is not the most clearly presented.)
That’s the one, that being the one specific thing I’ve been talking about all the way through.
Vladimir Nesov cited acausal decision theories as the reasoning here and here; if not TDT, then a similar local decision theory. If that is not the case, I’m sure he’ll be along shortly to clarify.
(I stress “local” to note that they suffer a lack of outside review or even notice. A lack of these things tends not to work out well in engineering or science either.)
That’s the one, that being the one specific thing I’ve been talking about all the way through.
Good, that had been my impression.
Independently of anything that Vladimir may have written, it is my observation that the ‘TDT-like’ stuff was mostly relevant to the question “is it dangerous for people to think X?” Once that had been established, the rest of the decision making (what to do after already having reached that conclusion) was for the most part just standard, unadorned human thinking. From what I have seen (including your references to reputational self-sabotage by SIAI), you were more troubled by the latter parts than the former.
Even if you do care about the more esoteric question “is it dangerous for people to think X?” I note that ‘garbage in, garbage out’ applies here as it does elsewhere.
(I just don’t like to see TDT unfairly maligned. Tarnished by association as it were.)
As I understand it, the purpose of bothering to advocate TDT is that it beats CDT in the hypothetical case of dealing with Omega (who does not exist), and is therefore more robust
See section 7 of the TDT paper (you’ll probably have to read from the beginning to familiarize yourself with the concepts). It doesn’t take Omega to demonstrate that CDT errs; it takes mere ability to predict the dispositions of agents to any small extent to get out of CDT’s domain, and humans do that all the time. From the paper:
The argument under consideration is that I should adopt a decision theory in which my decision takes general account of dilemmas whose mechanism is influenced by “the sort of decision I make, being the person that I am” and not just the direct causal effects of my action. It should be clear that any dispositional influence on the dilemma’s mechanism is sufficient to carry the force of this argument. There is no minimum influence, no threshold value.
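As one concrete way to see the “no threshold value” point (a worked sketch with assumed numbers, not anything taken from the paper): assuming the standard Newcomb payoffs of $1,000,000 and $1,000, a predictor only slightly better than chance already makes the one-boxing disposition the higher-expected-value one, while CDT two-boxes at any predictor accuracy.

```python
# Sketch, not from the TDT paper: expected value of each fixed disposition
# against a predictor of accuracy p, assuming the standard Newcomb payoffs.
BIG, SMALL = 1_000_000, 1_000  # opaque box (filled iff one-boxing predicted), transparent box

def ev_one_box(p):
    # With probability p the predictor correctly foresaw one-boxing and filled the big box.
    return p * BIG

def ev_two_box(p):
    # With probability p the predictor correctly foresaw two-boxing and left the big box empty.
    return (1 - p) * BIG + SMALL

# Crossover: p * BIG = (1 - p) * BIG + SMALL  =>  p = (BIG + SMALL) / (2 * BIG) = 0.5005
for p in (0.50, 0.5005, 0.51, 0.99):
    print(f"p={p}: one-box EV={ev_one_box(p):,.0f}, two-box EV={ev_two_box(p):,.0f}")
```

The crossover sits at p = 1/2 + SMALL/(2·BIG), so the more lopsided the payoffs, the closer to pure chance the predictor can be while the one-boxing disposition still wins; no Omega-grade accuracy is required.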
I wouldn’t use this situation as evidence for any outside conclusions. Right or wrong, the belief that it’s right to suppress discussion of the topic entails also believing that it’s wrong to participate in that discussion or to introduce certain kinds of evidence. So while you may believe that it was wrong to censor, you should also expect a high probability of unknown unknowns that would mess up your reasoning if you tried to take inferential steps from that conclusion to somewhere else.
I haven’t been saying I believed it was wrong to censor (although I do think censoring is a bad idea in general). I have been saying I believe it was stupid and counterproductive to censor, and that this is not only clearly evident from the results but should have been trivially predictable (certainly to anyone who’d been on the Internet for a few years) before the action was taken. And if the LW-homebrewed Timeless Decision Theory, which lacks outside review, was used to reach this bad decision, then TDT was disastrously inadequate (not just slightly inadequate) for application to a non-hypothetical situation, and that lessens the expectation that TDT will be adequate for future non-hypothetical situations. This, too, should be obvious.
Yes, the attempt to censor was botched and I regret the botchery. In retrospect I should not have commented or explained anything, just PM’d Roko and asked him to take down the post without explaining himself.
This is actually quite comforting to know. Thank you.
(I still wonder WHAT ON EARTH WERE YOU THINKING at the time, but you’ll answer as and when you think it’s a good idea to, and that’s fine.)
(I was down the pub with ciphergoth just now and this topic came up … I said the Very Bad Idea sounded silly as an idea, he said it wasn’t as silly as it sounded to me with my knowledge. I can accept that. Then we tried to make sense of the idea of CEV as a practical and useful thing. I fear if I want a CEV process applicable by humans I’m going to have to invent it. Oh well.)
I wouldn’t use this situation as evidence for any outside conclusions.
It is evidence for said conclusions. Do you mean, perhaps, that it isn’t evidence that is strong enough to draw confident conclusions on its own?
Right or wrong, the belief that it’s right to suppress discussion of the topic entails also believing that it’s wrong to participate in that discussion or to introduce certain kinds of evidence. So while you may believe that it was wrong to censor, you should also expect a high probability of unknown unknowns that would mess up your reasoning if you tried to take inferential steps from that conclusion to somewhere else.
To follow from the reasoning, the embedded conclusion must be ‘you should expect a higher probability’. The extent to which David should expect a higher probability of unknown unknowns depends on the deference David gives to the judgement of the conscientious non-participants for this particular kind of risk assessment and decision making; i.e., probably less than Jim does.
(With those two corrections in place the argument is reasonable.)
failure in a non-hypothetical situation suggests its robustness is in doubt and it should be regarded as less reliable than it may have been regarded previously.
I agree, and in this comment I remarked that we were assuming this statement all along, albeit in a dual presentation.
As I understand it, the purpose of bothering to advocate TDT is that it beats CDT in the hypothetical case of dealing with Omega (who does not exist), and is therefore more robust. If so, then this failure in a non-hypothetical situation suggests a flaw in its robustness, and it should be regarded as less reliable than it was previously.
Assuming the decision was made by robust TDT.
In retrospect I should not have commented or explained anything, just PM’d Roko and asked him to take down the post without explaining himself.
And I would have taken it down. Most importantly, my bad for not asking first.