You are serious?

What qualifies as a ‘Friendly’ AI?

If someone is about to execute an AI running ‘CEV’, should I push a fat man on top of him and save five people from torture? What about an acausal fat man? :)
(How) can acausal trade be used to solve the cooperation problem inherent in funding FAI development? If I recall correctly, this topic was one that was explicitly deleted. Torture was mostly just a superficial detail.
… just from a few seconds’ brainstorming. These are the kinds of questions that cannot be discussed without, at the very least, significant bias due to the threat of personal abuse and censorship if you are not careful. I am extremely wary of even trivial inconveniences.
Yes.
This doesn’t seem like an interesting question, where it intersects the forbidden topic. We don’t understand decision theory well enough to begin usefully discussing this. Most directions of discussion about this useless question are not in fact forbidden and the discussion goes on.
(How) can acausal trade be used to solve the cooperation problem inherent in funding FAI development?
We don’t formally understand even the usual game theory, let alone acausal trade. It’s far too early to discuss its applications.
It wasn’t Vladimir_Nesov’s interest that you feigned curiosity in, nor is it your place to decide what things others are interested in discussing. They are topics at least as relevant as the likes of ‘Sleeping Beauty’, which people have merrily prattled on about for decades.

That you support censorship of certain ideas by no means requires you to exhaustively challenge every possible downside of said censorship. Even if the decision were wise and necessary, it is still allowed to have disappointing consequences. That’s just how things are sometimes.

The zeal here is troubling.
It wasn’t Vladimir_Nesov’s interest that you feigned curiosity in, nor is it your place to decide what things others are interested in discussing.
What do you mean by “decide”? Whether they are interested in that isn’t influenced by my decisions, and I can well think about whether they are, or whether they should be (i.e. whether there is any good to be derived from that interest).
I opened this thread by asking,
What kind of questions are not actually discussed that could’ve been discussed otherwise?
You answered this question, and then I said what I think about those kinds of questions. It wasn’t obvious to me that you didn’t have some other kind of question in mind that I would find important, so I asked first, not just rhetorically.
What you implied in this comment seems very serious, and it was not my impression that something serious was taking place as a result of the banning incident, so of course I asked. My evaluation of whether the excluded topics (the ones you’ve named) are important is directly relevant to why your comment drew my attention.
On downvoting of the parent comment: I’m actually surprised this comment got downvoted. It doesn’t have as long an inferential depth as this one, which got downvoted worse, and it looks quite correct to me. Help me improve: say what’s wrong.
That you support censorship of certain ideas by no means requires you to exhaustively challenge every possible downside of said censorship.
The other way around. I don’t “support censorship”; rather, I don’t see that there are downsides worth mentioning (besides the PR hit), and as a result I disagree that the censorship is an important issue. Of course this indicates that I generally disagree with the arguments for the harm of the censorship (those I’ve understood so far), and so I argue with them (just as with any other arguments I disagree with on a topic that interests me).
The zeal here is troubling.
No zeal, just expressing my state of belief, and not willing to yield for reasons other than agreement (which is true in general, the censorship topic or not).
No zeal, just expressing my state of belief, and not willing to yield for reasons other than agreement (which is true in general, the censorship topic or not).
No, yielding or the lack thereof is not the indicator of zeal of which I speak. It is the sending out of your soldiers so universally that they reach even into the territory of others’ preferences. The critical line between advocacy of a policy and the presumption that others must justify their very thoughts (what topics interest them, and how their thinking is affected by the threat of public shaming and censorship) has been crossed.
The lack of boundaries is a telling sign according to my model of social dynamics.
It is the sending out of your soldiers so universally that they reach even into the territory of others’ preferences. The critical line between advocacy of a policy and the presumption that others must justify their very thoughts (what topics interest them, and how their thinking is affected by the threat of public shaming and censorship) has been crossed.
It was not my intention to discuss whether something is interesting to others. If it wasn’t clear, I now state so explicitly. You were probably misled by the first part of this comment, where I objected to your statement that I shouldn’t speculate about what others are interested in. I don’t see why not, so I objected, but I didn’t mean to imply that I did speculate about that in the relevant comment. What I did state is that I myself don’t consider that conversational topic important, and the motivation for that remark is discussed in the second part of the same comment.
Besides, asserting that the topic is not interesting to others is false as a point of simple fact, and that would be the problem, not the pattern of its alignment with other assertions. Are there any other statements that you believe I endorse (“in support of censorship”) and that you believe are mistaken?
On severe downvoting of the parent: What are that comment’s flaws? Tell me, I’ll try to correct them. (Must be obvious to warrant a −4.)
(Should I lump everything into one comment, or is the present way better? I find it clearer if different concerns are extracted as separate sub-threads.)
It’s not just clearer; it allows for better credit assignment in cases where both good and bad points are made.
Steven beat me to it—this way works well. Bear in mind, though, that I wasn’t planning to engage with this subject too deeply, simply because it furthers no goal that I am committed to and is interesting only insofar as it can spawn loosely related tangents.
That some topics are excluded is tautological, so what matters is which kinds of topics were. Thus, stating “nor is it your place to decide what things others are interested in discussing” seems equivalent to stating “censorship (of any kind) is bad!”, which is not very helpful in a discussion of whether it is in fact bad. What difference did you intend?
You do see the irony there I hope...

No irony. You don’t construct complex machinery out of very weak beliefs, but caution requires taking very weak beliefs into account.

The irony is present, and complex machinery is a red herring.

Well then, I don’t see the irony: show it to me.
Would you have censored the information? If not, do you think it would be a good idea to discuss the subject matter on an external (public) forum? Would you be interested in discussing it?
No, for several reasons. I have made no secret of the fact that I don’t think Eliezer processes perceived risks rationally, and I think that applies in this instance.
This is not a claim that censorship is always a bad idea—there are other obvious cases where it would be vital. Information is power, after all.
If not, do you think it would be a good idea to discuss the subject matter on an external (public) forum?
Only if there is something interesting to say on the subject, or any interesting conversations to be had on the various related subjects that the political bias would interfere with. But the mere fact that Eliezer forbids it doesn’t make it more interesting to me. In fact, the parts of Roko’s posts that were most interesting to me were not even the parts that Eliezer threw a tantrum over. As far as I know, Roko has been bullied out of engaging in such conversation even elsewhere, and he would have been the person most worth talking to about that kind of counterfactual.
Bear in mind that the topic has moved from the realm of abstract philosophy to politics. If you make any mistakes, demonstrate any ignorance, or even say things that could conceivably be twisted to appear as such, then expect that it will be used against you here to undermine your credibility on the subject. People like Nesov and jimrandom care, and care aggressively.
Would you be interested in discussing it?
Post away; if I have something to add, then I’ll jump in. But warily.
Post away; if I have something to add, then I’ll jump in. But warily.
I am not sure whether I understand the issue, or whether it is as serious as some people obviously perceive it to be. If I do understand it, then it isn’t as dangerous to talk about in public as it is portrayed to be. But that would mean there is something wrong with otherwise smart people, which is unlikely. So should I conclude that it is more likely that I simply do not understand it?
What irritates me is that people like Nesov are saying that “we don’t formally understand even the usual game theory, let alone acausal trade”. Yet they care aggressively about censoring the topic. I’ve been told before that this is due to people getting nightmares from it. If that is the reason, then I do not think the censorship is justified at all.
I wouldn’t rule out the possibility that you do not fully understand it and they are still being silly. ;)
How about the possibility that you do not understand it and that they are not silly? Do you think it could be serious enough to have nightmares about, and to censor as far as possible, but that you simply don’t get it? How likely is that possibility?
Why would you even ask me that? Clearly I have considered the possibility (given that I am not a three-year-old), and equally clearly my answering you would not make much sense. :)
But the question of whether to trust people’s nightmares is an interesting one. I tend to be of the mind that if someone has that much of an anxiety problem prompted by a simple abstract thought, then it is best to see that they receive appropriate medication and therapy. After that has been taken care of, I may consider their advice.
Why would you even ask me that? Clearly I have considered the possibility...
I wasn’t quite sure. I don’t know how to conclude that they are silly and you are not. I’m not just talking about Nesov but also about Yudkowsky. You concluded that they are all wrong in their risk estimates and are acting silly. Yudkowsky explicitly stated that he does know more. But you conclude that they don’t know more, that they are silly.
I tend to be of the mind that if someone has that much of an anxiety problem prompted by a simple abstract thought, then it is best to see that they receive appropriate medication and therapy. After that has been taken care of, I may consider their advice.
Yes, I commented before that the right move is not to truncate your child’s bed so that monsters won’t fit under it, but rather to explain that monsters are very unlikely to be hiding under the bed.
I wasn’t quite sure. I don’t know how to conclude that they are silly and you are not.
You can’t. Given the information you have available, it would be a mistake for you to draw such a conclusion, particularly since I have not even presented arguments or reasoning on the core of the subject, what with the censorship and all. :)
Yudkowsky explicitly stated that he does know more.
Indeed. Which means that not taking his word for it constitutes disrespect.
Yes, I commented before that the right move is not to truncate your child’s bed so that monsters won’t fit under it, but rather to explain that monsters are very unlikely to be hiding under the bed.
Once the child grows up a bit, you can go on to explain to them that even though there are monsters out in the world, being hysterical helps neither in detecting monsters nor in fighting them. :)
As I noted, it’s a trolley problem: you have the bad alternative of doing nothing, and then there’s the alternative of doing something that may turn out better or worse. This case observably came out worse, and that should have been trivially predictable by anyone who’d been on the net a few years.
So the thinking involved in the decision, and the ongoing attempts at suppression, admit of investigation.
But yes, it could all be a plot to get as many people as possible thinking really hard about the “forbidden” idea, with this being such an important goal as to be worth throwing LW’s intellectual integrity in front of the trolley for.
What irritates me is that people like Nesov are saying that “we don’t formally understand even the usual game theory, let alone acausal trade”. Yet they care aggressively about censoring the topic.
Caring “to censor the topic” doesn’t make sense: it’s already censored, and already in the open, and I’m not taking any actions regarding the censorship. You’d need to be more precise about what exactly you believe, instead of reasoning in terms of vague affect.
Regarding the lack of formal understanding, see this comment: the decision not to discuss the topic, if at all possible, follows from a very weak belief, not from certainty. Lack of formal understanding expresses a lack of certainty, but not a lack of very weak beliefs.
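To make the shape of that argument concrete, here is a minimal expected-value sketch. All numbers are purely illustrative assumptions, not anyone’s actual estimates; the point is only that a very weak belief attached to a large enough assumed harm can dominate the comparison without any formal certainty.

```python
# Hypothetical numbers for illustration only: a "very weak belief"
# (small probability) attached to a large assumed harm can dominate
# an expected-value comparison, even with no certainty anywhere.

p_harm = 0.001        # very weak belief that open discussion causes the harm
harm = -10_000.0      # assumed disutility if that belief turns out correct
benefit = 1.0         # assumed modest value of discussing the topic openly

ev_discuss = benefit + p_harm * harm   # 1 + 0.001 * (-10000) = -9.0
ev_refrain = 0.0                       # baseline: say nothing

print(f"EV(discuss) = {ev_discuss:+.1f}")   # -9.0
print(f"EV(refrain) = {ev_refrain:+.1f}")   # +0.0
```

The same sketch also makes the standard counterargument visible: shrink the assumed probability or harm by a couple of orders of magnitude and the ordering flips, which is much of what the disagreement in this thread is about.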
If an organisation that is working on a binding procedure for an all-powerful dictator to implement on the scale of the observable universe tried to censor information that could directly affect me for the rest of time in the worst possible manner, then I have a very weak belief that their causal control is much more dangerous than the acausal control between me and their future dictator.
Caring “to censor the topic” doesn’t make sense...
So you don’t care if I post it everywhere and send it to everyone I can?
For what it’s worth, I’ve given up on participating in these arguments. My position hasn’t changed, but arguing it was counterproductive, and extremely frustrating, which led to me saying some stupid things.
People like Nesov and jimrandom care, and care aggressively.
No, I don’t (or, alternatively, you could possibly unpack this in some non-obvious way that makes it hold).
I suppose it just so happens that this was the topic I engaged with yesterday, and a similar “care aggressively” characteristic can probably be seen in any other discussion I engage in.
and a similar “care aggressively” characteristic can probably be seen in any other discussion I engage in.
I don’t dispute that, and this was part of what prompted the warning to XiXi. When a subject is political and your opponents are known to use aggressive argumentative styles, it is important to take a lot of care with your words—give nothing that could potentially be used against you.
The situation is analogous to the recent discussion of refraining from responding to the trolley problem. If there is a possibility that people may use your words against you in the future, STFU unless you know exactly what you are doing!