I don’t want a revolution, and don’t believe I’ll change the mind of somebody committed not to thinking too deeply about something. I just want some marginal changes.
I think Roko got a pretty clear explanation of why his post was deleted. I don’t think I did. I think everyone should. I suspect there may be others like me.
I also think that there should be public ground rules as to what is safe. I think it is possible to state such rules so that they are relatively clear to anyone who has stepped past them, somewhat informative to those who haven’t, and not particularly inviting of experimentation. I think that the presence of such ground rules would allow some discussion as to the danger or non-danger of the forbidden idea and/or as to the effectiveness or ineffectiveness of suppressing it. Since I believe that the truth is “non-danger” and “ineffectiveness”, and that the truth will tend to win the argument over time, I think that would be a good thing.
The second rule of Less Wrong is, you DO NOT talk about Forbidden Topics.
Your sarcasm would not be obvious if I didn’t recognize your username.
Hmm—I added a link to the source, which hopefully helps to explain.
Quotes can be used sarcastically or not.
I don’t think I was being sarcastic. I won’t take the juices out of the comment by analysing it too completely—but a good part of it was the joke of comparing Less Wrong with Fight Club.
We can’t tell you what materials are classified—that information is classified.
It’s probably better to solve this by private conversation with Eliezer, than by trying to drum up support in an open thread.
Too much meta discussion is bad for a community.
The thing I’m trying to drum up support for is an incremental change in current policy; for instance, making a safe and useful version of the policy publicly available. I believe that’s possible, and I believe it is more appropriate to discuss this in public.
(Actually, since I’ve been making noise about this, and since I’ve promised not to reveal it, I now know the secret. No, I won’t tell you; I promised I wouldn’t. I won’t even say who told me, even though I didn’t promise not to, because they’d just get too many requests to reveal it. But I can say that I don’t believe in it, and also that I think [though others might disagree] that a public policy could be crafted which dealt with the issue without exacerbating it, even if it were real.)
How much evidence for the existence of a textual Langford Basilisk would you require before considering it a bad idea to write about it in detail?
Normally yes, but this case involves a potentially adversarial agent with intelligence and optimizing power vastly superior to your own, and which cares about your epistemic state as well as your actions.
Look, my post addressed these issues, and I’d be happy to discuss them further, if the ground rules were clear. Right now, we’re not having that discussion; we’re talking about whether that discussion is desirable, and if so, how to make it possible. I think that the truth will out; if you’re right, you’ll probably win the discussion. So although we disagree about the danger, we should be able to agree on discussing it within some well-defined ground rules, summarized comprehensibly in some safe form.
Really? Go read the sequences! ;)
Hell? That’s it?