I’m not sure if this was meant for me; I agree with you about free speech and not deleting the posts. I don’t think it means EY and this movement are a great danger, though. Deleting the posts was the wrong decision, and hopefully it will be reversed soon, but I don’t see that as indicating that anyone would go out and kill people to help the Singularity occur. If there really were a Langford Basilisk, say, a joke that made you die laughing, I would want it removed.
As to that comment thread: Peer is a very cool person and a good friend, but he is a little crazy and his beliefs and statements shouldn’t be taken to reflect anything about anyone else.
I know; it wasn’t my intention to discredit Peer, and I quite like his ideas. I’m probably more crazy than he is anyway.
But if I can come up with such conclusions, who else will? Also, why isn’t anyone out to kill people, and will anyone ever be? I’m serious: why not? Just imagine EY found out that we could be reasonably sure that, for example, Google would soon let loose a rogue AI. Given how the LW audience is inclined to act upon ‘mere’ probability estimates, why wouldn’t it be appropriate to bomb Google, if that were the only way to stop them in time from turning the world into a living hell? And isn’t this meme, given the right people and circumstances, a great danger? Sure, my saying that EY might be a greater danger was nonsense, said just to provoke a response. By definition, not much could be worse than uFAI.
This incident is simply a good situation to extrapolate from. If a thought experiment can be deemed dangerous enough not only to be censored and deleted, but for people to be told not even to seek any knowledge of it, much less discuss it, I wonder what the reaction to an imminent and tangible danger might look like.