How is this different from positing a very small chance that a future dictator will nuke the planet unless I mail a $10 donation to Greenpeace?
I agree that you are not justified in seeing a difference, unless you understand the theory of acausal control to some extent and agree with it. But when you are considering a person who agrees with that theory, and makes a decision based on it, agreement with the theory fully explains that decision; this is a much better explanation than most of the stuff people are circulating here. At that point, disagreement about the decision must be resolved by arguing about the theory, but that’s not easy.
You appear to be arguing that a bad decision is somehow a less bad decision if the reasoning used to get to it was consistent (“carefully, correctly wrong”).
No, because the decision is tested against reality. Being internally consistent may be a reason for doing something that others can plainly see is just going to be counterproductive—as in the present case—but it doesn’t grant a forgiveness pass from reality.
That is: in practical effects, sincere stupidity and insincere stupidity are both stupidity.
You even say this above (“There is only one proper criterion to anyone’s actions, goodness of consequences”), making your post here even stranger.
(In fact, sincere stupidity can be more damaging, as in my experience it’s much harder to get the person to change their behaviour or the reasoning that led to it—they tend to cling to it and justify it when the bad effects are pointed out to them, with more justifications in response to more detail on the consequences of the error.)
Think of it as a trolley problem. Leaving the post up is a bad option; the consequences of removing it are then the question: which is actually worse and results in the idea propagating further? If you can prove in detail that a decision theory concludes removing it will make it propagate less, you’ve just found where that decision theory fails.
Removing the forbidden post propagated it further, and made both the post itself and the circumstances of its removal objects of fascination. It has also diminished the perceived integrity of LessWrong, as we can no longer be sure posts are not being quietly removed as well as loudly; this has also diminished the reputation of SIAI. It is difficult to see either of these as working to suppress the bad idea.
More importantly, it removed LessWrong as a place where FAI and decision theory can be discussed in any depth beyond superficial advocacy.
The problem is more than the notion that secret knowledge is bad—it’s that secret knowledge increasingly isn’t possible, and increasingly isn’t knowledge.
If it’s science, you almost can’t do it on your own and you almost can’t do it as a secret. If it’s engineering, your DRM or other constraints will last precisely as long as no-one is interested in breaking them. If it’s politics, your conspiracy will last as long as you aren’t found out and can insulate yourself from the effects … that one works a bit better, actually.
The forbidden topic can be tackled with math.
I don’t believe this is true to any significant extent. Why do you believe that? What kind of questions are not actually discussed that could’ve been discussed otherwise?
You are serious?
What qualifies as a ‘Friendly’ AI?
If someone is about to execute an AI running ‘CEV’ should I push a fat man on top of him and save five people from torture? What about an acausal fat man? :)
(How) can acausal trade be used to solve the cooperation problem inherent in funding FAI development? If I recall correctly, this topic was one that was explicitly deleted. Torture was mostly just a superficial detail.
… just from a few seconds’ brainstorming. These are the kinds of questions that cannot be discussed without, at the very least, significant bias due to the threat of personal abuse and censorship if you are not careful. I am extremely wary of even trivial inconveniences.
Yes.
This doesn’t seem like an interesting question, where it intersects the forbidden topic. We don’t understand decision theory well enough to begin usefully discussing this. Most directions of discussion about this useless question are not in fact forbidden and the discussion goes on.
We don’t formally understand even the usual game theory, let alone acausal trade. It’s far too early to discuss its applications.
It wasn’t Vladimir_Nesov’s interest that you feigned curiosity in, nor is it your place to decide what things others are interested in discussing. They are topics that are at least as relevant as such things as ‘Sleeping Beauty’ that people have merrily prattled on about for decades.
That you support censorship of certain ideas by no means requires you to exhaustively challenge every possible downside to said censorship. Even if the decision were wise and necessary, it is still allowed to have disappointing consequences. That’s just how things are sometimes.
The zeal here is troubling.
What do you mean by “decide”? Whether they are interested in that isn’t influenced by my decisions, and I can well think about whether they are, or whether they should be (i.e. whether there is any good to be derived from that interest).
I opened this thread by asking, “What kind of questions are not actually discussed that could’ve been discussed otherwise?”
You answered this question, and then I said what I think about those kinds of questions. It wasn’t obvious to me that you didn’t have in mind some other kind of questions, ones that I would find important, so I asked first, not just rhetorically.
What you implied in this comment seems very serious, but it was not my impression that something serious was taking place as a result of the banning incident, so of course I asked. My evaluation of whether the topics excluded (that you’ve named) are important is directly relevant to the reason your comment drew my attention.
On the downvoting of the parent comment: I’m actually surprised this comment got downvoted. It doesn’t have as much inferential depth as this one, which got downvoted worse, and it looks to me quite correct. Help me improve; say what’s wrong.
The other way around. I don’t “support censorship”; rather, I don’t see that there are downsides worth mentioning (besides the PR hit), and as a result I disagree that the censorship is important. Of course this indicates that I generally disagree with the arguments for the harm of the censorship (those I have so far understood), and so I argue with them (just as with any other arguments I disagree with on topics I’m interested in).
No zeal, just expressing my state of belief, and not willing to yield for reasons other than agreement (which is true in general, the censorship topic or not).
No, yielding or the lack thereof is not the indicator of zeal of which I speak. It is the sending out of your soldiers so universally that they reach even into the territory of others’ preferences. That critical line between advocacy of policy and the presumption that others must justify their very thoughts (what topics interest them and how their thoughts are affected by the threat of public shaming and censorship) is crossed.
The lack of boundaries is a telling sign according to my model of social dynamics.
It was not my intention to discuss whether something is interesting to others. If it wasn’t clear, I do state so here explicitly. You were probably misled by the first part of this comment, where I objected to your statement that I shouldn’t speculate about what others are interested in. I don’t see why not, so I objected, but I didn’t mean to imply that I did speculate about that in the relevant comment. What I did state is that I myself don’t believe that conversational topic is important, and the motivation for that remark is discussed in the second part of the same comment.
Besides, asserting that the topic is not interesting to others is false as a point of simple fact, and that would be the problem, not the pattern of its alignment with other assertions. Are there any other statements that you believe I endorse (“in support of censorship”) and that you believe are mistaken?
On the severe downvoting of the parent: What are that comment’s flaws? Tell me and I’ll try to correct them. (They must be obvious to warrant a −4.)
(Should I lump everything in one comment, or is the present way better? I find it more clear if different concerns are extracted as separate sub-threads.)
It’s not just clearer; it allows for better credit assignment in cases where both good and bad points are made.
Steven beat me to it—this way works well. Bear in mind though that I wasn’t planning to engage with this subject too deeply, simply because it furthers no goal that I am committed to and is interesting only insofar as it can spawn loosely related tangents.
That some topics are excluded is tautological, so what matters is which kinds of topics were excluded. Thus, stating “nor is it your place to decide what things others are interested in discussing” seems to be equivalent to stating “censorship (of any kind) is bad!”, which is not very helpful in the discussion of whether it’s in fact bad. What’s the difference you intended?
You do see the irony there, I hope...
Would you have censored the information? If not, do you think it would be a good idea to discuss the subject matter on an external (public) forum? Would you be interested to discuss it?
No, for several reasons. I have made no secret of the fact that I don’t think Eliezer processes perceived risks rationally and I think this applies in this instance.
This is not a claim that censorship is always a bad idea—there are other obvious cases where it would be vital. Information is power, after all.
Only if there is something interesting to say on the subject. Or any interesting conversations to be had on the various related subjects that the political bias would interfere with. But the mere fact that Eliezer forbids it doesn’t make it more interesting to me. In fact, the parts of Roko’s posts that were most interesting to me were not even the same parts that Eliezer threw a tantrum over. As far as I know Roko has been bullied out of engaging in such conversation even elsewhere and he would have been the person most worth talking to about that kind of counterfactual.
Bear in mind that the topic has moved from the realm of abstract philosophy to politics. If you make any mistakes, demonstrate any ignorance or even say things that can be conceivably twisted to appear as such, then expect that it will be used against you here to undermine your credibility on the subject. People like Nesov and jimrandom care, and care aggressively.
Post away; if I have something to add then I’ll jump in. But warily.
I am not sure whether I understand the issue, or whether it is as serious as some people obviously perceive it to be. If I do indeed understand it, then it isn’t as dangerous to talk about in public as it is portrayed to be. But that would mean that there is something wrong with otherwise smart people, which is unlikely. So should I conclude that it is more likely that I simply do not understand it?
What irritates me is that people like Nesov are saying that “we don’t formally understand even the usual game theory, let alone acausal trade”. Yet they care aggressively to censor the topic. I’ve been told before that it is due to people getting nightmares from it. If that is the reason then I do not think censorship is justified at all.
I wouldn’t rule out the possibility that you do not fully understand it and they are still being silly. ;)
How about the possibility that you do not understand it and that they are not silly? Do you think it could be serious enough to have nightmares about it and to censor it as far as possible, but that you simply don’t get it? How likely is that possibility?
Why would you even ask me that? Clearly I have considered the possibility (given that I am not a three year old) and equally clearly me answering you would not make much sense. :)
But the question of whether to trust people’s nightmares is an interesting one. I tend to be of the mind that if someone has that much of an anxiety problem prompted by a simple abstract thought, then it is best to see that they receive the appropriate medication and therapy. After that has been taken care of I may consider their advice.
I wasn’t quite sure. I don’t know how to conclude that they are silly and you are not. I’m not just talking about Nesov but also Yudkowsky. You concluded that they are all wrong about their risk estimations and act silly. Yudkowsky explicitly stated that he does know more. But you conclude that they don’t know more, that they are silly.
Yes, I commented before saying that it is not the right move to truncate your child’s bed so that monsters won’t fit under it, but rather to explain that it is very unlikely for monsters to be hiding under the bed.
You can’t. Given the information you have available it would be a mistake for you to draw such a conclusion. Particularly given that I have not even presented arguments or reasoning on the core of the subject, what with the censorship and all. :)
Indeed. Which means that not taking his word for it constitutes disrespect.
Once the child grows up a bit you can go on to explain to them that even though there are monsters out in the world being hysterical doesn’t help either in detecting monsters or fighting them. :)
As I noted, it’s a trolley problem: you have the bad alternative of doing nothing, and then there’s the alternative of doing something that may be better and may be worse. This case observably came out worse, and that should have been trivially predictable by anyone who’d been on the net a few years.
So the thinking involved in the decision, and the ongoing attempts at suppression, admit of investigation.
But yes, it could all be a plot to get as many people as possible thinking really hard about the “forbidden” idea, with this being such an important goal as to be worth throwing LW’s intellectual integrity in front of the trolley for.
Caring “to censor the topic” doesn’t make sense: it’s already censored, and already in the open, and I’m not taking any actions regarding the censorship. You’d need to be more accurate about what exactly you believe, instead of reasoning in terms of vague affect.
Regarding lack of formal understanding, see this comment: the decision to not discuss the topic, if at all possible, follows from a very weak belief, not from certainty. Lack of formal understanding expresses lack of certainty, but not lack of very weak beliefs.
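(A toy expected-cost comparison, with entirely made-up numbers, to show the shape of the “act on a very weak belief” reasoning being described here; it illustrates the form of the argument only, not the actual probabilities anyone holds.)

```python
# Shape of the "very weak belief" argument: if the harm conditional on discussion
# is assumed to be enormous, even a tiny credence can dominate a modest, certain
# cost of staying silent. All numbers below are invented for illustration.
p_harm_if_discussed = 1e-6      # very weak belief that open discussion causes the harm
harm_if_realized = 1e9          # assumed size of the harm (arbitrary units)
cost_of_not_discussing = 100.0  # assumed cost of self-censorship (arbitrary units)

expected_cost_discuss = p_harm_if_discussed * harm_if_realized   # 1000.0
expected_cost_censor = cost_of_not_discussing                    # 100.0

print(expected_cost_discuss > expected_cost_censor)  # True under these made-up numbers
# The conclusion flips if the assumed harm is smaller or the credence even weaker,
# which is why the disagreement turns on those inputs rather than on the arithmetic.
```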
If an organisation that is working on a binding procedure for an all-powerful dictator to implement on the scale of the observable universe tried to censor information that could directly affect me for the rest of time in the worst possible manner, then I have a very weak belief that their causal control is much more dangerous than the acausal control between me and their future dictator.
So you don’t care if I post it everywhere and send it to everyone I can?
For what it’s worth, I’ve given up on participating in these arguments. My position hasn’t changed, but arguing it was counterproductive, and extremely frustrating, which led to me saying some stupid things.
No, I don’t (or, alternatively, you possibly could unpack this in a non-obvious way to make it hold).
I suppose it just so happens it was the topic I engaged yesterday, and a similar “care aggressively” characteristic can probably be seen in any other discussion I engage in.
I don’t dispute that, and this was part of what prompted the warning to XiXi. When a subject is political and your opponents are known to use aggressive argumentative styles, it is important to take a lot of care with your words—give nothing that could potentially be used against you.
The situation is analogous to the recent discussion of refraining from responding to the trolley problem. If there is the possibility that people may use your words against you in the future, STFU unless you know exactly what you are doing!
No irony. You don’t construct complex machinery out of very weak beliefs, but caution requires taking very weak beliefs into account.
The irony is present and complex machinery is a red herring.
Well then, I don’t see the irony, show it to me.
Here, I’m talking about factual explanation, not normative estimation. The actions are explained by holding a certain belief, better than by alternative hypotheses. Whether they were correct is a separate question.
You’d need to explain this step in more detail. I was discussing a communication protocol; where does “testing against reality” enter that topic?
Ah, I thought you were talking about whether the decision solved the problem, not whether the failed decision was justifiable in terms of the theory.
I do think that if a decision theory leads to quite as spectacular a failure in practice as this one did, then the decision theory is strongly suspect.
As such, whether the decision was justifiable is less interesting except in terms of revealing the thinking processes of the person doing the justification (clinginess to pet decision theory, etc.).
“Belief in the decision being a failure is an argument against adequacy of the decision theory”, is simply a dual restatement of “Belief in the adequacy of the decision theory is an argument for the decision being correct”.
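(A minimal Bayes sketch of that duality, using made-up numbers: if a failed decision is more likely under an inadequate theory, then observing failure lowers the posterior on adequacy, and, by the same likelihood ratio, taking the theory to be adequate raises the probability assigned to the decision being correct.)

```python
# Duality between "failure is evidence against the theory" and
# "trusting the theory is evidence the decision was correct".
# All probabilities are invented for illustration.
p_adequate = 0.5                 # prior that the decision theory is adequate
p_fail_given_adequate = 0.2      # assumed chance an adequate theory yields a failed decision
p_fail_given_inadequate = 0.6    # assumed chance an inadequate theory yields one

def posterior_adequate(failed: bool) -> float:
    """P(theory adequate | observed outcome) by Bayes' rule."""
    like_a = p_fail_given_adequate if failed else 1 - p_fail_given_adequate
    like_i = p_fail_given_inadequate if failed else 1 - p_fail_given_inadequate
    joint_a = p_adequate * like_a
    joint_i = (1 - p_adequate) * like_i
    return joint_a / (joint_a + joint_i)

print(posterior_adequate(True))       # 0.25: a failure counts against adequacy
print(1 - p_fail_given_adequate)      # 0.80: conditioning on adequacy favours "decision correct"
p_correct = 1 - (p_adequate * p_fail_given_adequate
                 + (1 - p_adequate) * p_fail_given_inadequate)
print(p_correct)                      # 0.60: unconditional P(decision correct), lower than 0.80
```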
This statement appears confusing to me: you appear to be saying that if I believe strongly enough in the forbidden post having been successfully suppressed, then censoring it will not have in fact caused it to propagate widely, nor will it have become an object of fascination and caused a reputational hit to LessWrong and hence SIAI. This, of course, makes no sense.
I do not understand how this matches with the effects observable in reality, where these things do in fact appear to have happened. Could you please explain how one tests this result of the decision theory, if not by matching it against what actually happened? That being what I’m using to decide whether the decision worked or not.
Keep in mind that I’m talking about an actual decision and its actual results here. That’s the important bit.
If you believe that “decision is a failure” is evidence that the decision theory is not adequate, you believe that “decision is a success” is evidence that the decision theory is adequate.
Since a decision theory’s adequacy is determined by how successful its decisions are, you appear to be saying “if a decision theory makes a bad decision, it is a bad decision theory”, which is tautologically true.
Correct me if I’m wrong, but Vladimir_Nesov is not interested in whether the decision theory is good or bad, so restating an axiom of decision theory evaluation is irrelevant.
The decision was made by a certain decision theory. The factual question “was the decision-maker holding to this decision theory in making this decision?” is entirely unrelated to the question “should the decision-maker hold to this decision theory given that it makes bad decisions?”. To suggest otherwise blurs the prescriptive/descriptive divide, which is what Vladimir_Nesov is referring to when he says
Here, I’m talking about factual explanation, not normative estimation.
I believe that if the decision theory clearly led to an incorrect result (which it clearly did in this case, despite Vladimir Nesov’s energetic equivocation), then it is important to examine the limits of the decision theory.
As I understand it, the purpose of bothering to advocate TDT is that it beats CDT in the hypothetical case of dealing with Omega (who does not exist) and is therefore more robust; this failure in a non-hypothetical situation therefore suggests a flaw in that robustness, and it should be regarded as less reliable than it may have been regarded previously.
Assuming the decision was made by robust TDT.
The decision you refer to here… I’m assuming this is still the Eliezer->Roko decision? (This discussion is not the most clearly presented.) If so, for your purposes you can safely consider ‘TDT/CDT’ irrelevant. While acausal (TDTish) reasoning is at play in establishing a couple of the important premises, those considerations are not relevant to the reasoning that you actually seem to be criticising.
I.e. the problems you refer to here are not the fault of TDT or of abstract reasoning at all—just plain old human screw-ups with hasty reactions.
That’s the one, that being the one specific thing I’ve been talking about all the way through.
Vladimir Nesov cited acausal decision theories as the reasoning here and here—if not TDT, then a similar local decision theory. If that is not the case, I’m sure he’ll be along shortly to clarify.
(I stress “local” to note that they suffer a lack of outside review or even notice. A lack of these things tends not to work out well in engineering or science either.)
Good, that had been my impression.
Independently of anything that Vladimir may have written, it is my observation that the ‘TDT-like’ stuff was mostly relevant to the question “is it dangerous for people to think X?” Once that has been established, the rest of the decision making, what to do after already having reached that conclusion, was for the most part just standard unadorned human thinking. From what I have seen (including your references to reputational self-sabotage by SIAI) you were more troubled by the latter parts than the former.
Even if you do care about the more esoteric question “is it dangerous for people to think X?”, I note that ‘garbage in, garbage out’ applies here as it does elsewhere.
(I just don’t like to see TDT unfairly maligned. Tarnished by association as it were.)
See section 7 of the TDT paper (you’ll probably have to read from the beginning to familiarize yourself with the concepts). It doesn’t take Omega to demonstrate that CDT errs; it takes the mere ability to predict dispositions of agents to any small extent to get out of CDT’s domain, and humans do that all the time. From the paper:
The argument under consideration is that I should adopt a decision theory in which my decision takes general account of dilemmas whose mechanism is influenced by “the sort of decision I make, being the person that I am” and not just the direct causal effects of my action. It should be clear that any dispositional influence on the dilemma’s mechanism is sufficient to carry the force of this argument. There is no minimum influence, no threshold value.
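(To make the point concrete, here is a toy expected-value calculation for a Newcomb-style problem with a merely imperfect predictor; the payoffs are the standard illustrative $1,000,000 / $1,000 ones, and the accuracy figures are arbitrary, not taken from the paper.)

```python
# Newcomb-style payoff calculation with an imperfect predictor.
# Standard illustrative payoffs: $1,000,000 in the opaque box if the predictor
# expects one-boxing, $1,000 always in the transparent box. Numbers are for
# illustration only.

def expected_value(one_box: bool, predictor_accuracy: float) -> float:
    """Expected payoff given the agent's disposition and how often the predictor is right."""
    p = predictor_accuracy
    if one_box:
        # Predictor correctly foresees one-boxing with probability p.
        return p * 1_000_000 + (1 - p) * 0
    else:
        # Predictor correctly foresees two-boxing with probability p.
        return p * 1_000 + (1 - p) * (1_000_000 + 1_000)

for accuracy in (0.5, 0.51, 0.6, 0.9):
    print(accuracy, expected_value(True, accuracy), expected_value(False, accuracy))
# At accuracy 0.5 (no predictive power) two-boxing wins, as CDT says.
# With these payoffs, any predictive edge above ~0.5005 is already enough
# to make the one-boxing disposition come out ahead.
```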
I wouldn’t use this situation as evidence for any outside conclusions. Right or wrong, the belief that it’s right to suppress discussion of the topic entails also believing that it’s wrong to participate in that discussion or to introduce certain kinds of evidence. So while you may believe that it was wrong to censor, you should also expect a high probability of unknown unknowns that would mess up your reasoning if you tried to take inferential steps from that conclusion to somewhere else.
I haven’t been saying I believed it was wrong to censor (although I do think that it’s a bad idea in general). I have been saying I believe it was stupid and counterproductive to censor, and that this is not only clearly evident from the results, but should have been trivially predictable (certainly to anyone who’d been on the Internet for a few years) before the action was taken. And if the LW-homebrewed Timeless Decision Theory, which lacks outside review, was used to reach this bad decision, then TDT was disastrously inadequate (not just slightly inadequate) for application to a non-hypothetical situation, and it lessens the expectation that TDT will be adequate for future non-hypothetical situations. And that this should also be obvious.
Yes, the attempt to censor was botched and I regret the botchery. In retrospect I should not have commented or explained anything, just PM’d Roko and asked him to take down the post without explaining himself.
This is actually quite comforting to know. Thank you.
(I still wonder WHAT ON EARTH WERE YOU THINKING at the time, but you’ll answer as and when you think it’s a good idea to, and that’s fine.)
(I was down the pub with ciphergoth just now and this topic came up … I said the Very Bad Idea sounded silly as an idea, he said it wasn’t as silly as it sounded to me with my knowledge. I can accept that. Then we tried to make sense of the idea of CEV as a practical and useful thing. I fear if I want a CEV process applicable by humans I’m going to have to invent it. Oh well.)
And I would have taken it down. My bad for not asking first, most importantly.
It is evidence for said conclusions. Do you mean, perhaps, that it isn’t evidence that is strong enough to draw confident conclusions on its own?
To follow from the reasoning, the embedded conclusion must be ‘you should expect a higher probability’. The extent to which David should expect a higher probability of unknown unknowns depends on the deference David gives to the judgement of the conscientious non-participants when it comes to this particular kind of risk assessment and decision making—i.e. probably less than Jim does.
(With those two corrections in place the argument is reasonable.)
I agree, and in this comment I remarked that we were assuming this statement all along, albeit in a dual presentation.
If you’re interested, we can also move forward as I did over here by simply assuming EY is right, and then seeing if banning the post was net positive.
It’s not “moving forward”, it’s moving to a separate question. That question might be worth considering, but isn’t generally related to the original one.
Why would the assumption that EY was right be necessary to consider that question?
I agree that it was net negative, specifically because the idea is still circulating, probably with more attention drawn to it than would happen otherwise. Which is why I started commenting on my hypothesis about the reasons for EY’s actions, in an attempt to alleviate the damage, after I myself figured it out. But that it was in fact net negative doesn’t directly argue that given the information at hand when the decision was made, it had net negative expectation, and so that the decision was incorrect (which is why it’s a separate question, not a step forward on the original one).
I like the precision of your thought.
All this time I thought we were discussing whether blocking future censorship by EY was a rational thing to do—but that’s not what we were discussing at all.
You really are in it for the details: if we could find a way of estimating around the hard problems to settle the above question, that would be only vaguely interesting to you; you want to know the answers to the questions themselves.
At least that’s what I’m hearing.
It sounds like the above was your way of saying you’re in favor of blocking future EY censorship, which gratifies me.
I’m going to do the following things in the hope of gratifying you:
Write up a post on Less Wrong on developing political muscles. I’ve noticed several other posters seem less than savvy about social dynamics, so perhaps a crash course is in order. (I know that there are certainly several in the archives, but I guarantee I’ll bring several new insights [with references] to the table.)
Reread all your comments, and come back at these issues tomorrow night with a more exact approach. Please accept my apology for what I assume seemed a bizarre discussion, and thanks for thinking like that.
Night!
I didn’t address that question at all, and in fact I’m not in favor of blocking anything. I came closest to that topic in this comment.
More than enough information about human behavior was available at the time. Negative consequences of the kind observed were not remotely hard to predict.
Yes, quite likely. I didn’t argue with this point, though I myself don’t understand human behavior enough for that expectation to be obvious. I only argued that the actual outcome isn’t a strong reason to conclude that it was expected.