Something really crazy is going on here.
You people have fabricated a fantastic argument for all kinds of wrongdoing and idiotic decisions: “it could increase the chance of an AI going wrong...”.
“I deleted my comment because it was maybe going to increase the chance of an AI going wrong...”
“Hey, I had to punch that guy in the face, he was going to increase the chance of an AI going wrong by uttering something stupid...”
“Sorry, I had to exterminate those people because they were going to increase the chance of an AI going wrong.”
I’m beginning to wonder whether the bigger risk is not unfriendly AI but rather EY and this movement.
Why would I care about some feverish dream of a galactic civilization if we have to turn into our own oppressors, and the oppressors of others? Screw you. That’s not what I want. Either I win like this, or I don’t care to win at all. What’s winning worth, what’s left of a victory, if you have to relinquish all that you value? That’s not winning; it’s worse than losing. It means surrendering, preemptively, to mere possibilities.
This is why deleting the comments was the bigger risk: doing so makes people think (incorrectly) that EY and this movement are the bigger risk, instead of unfriendly AI.
The problem is, are you people sure you want to take this route? If you are serious about all this, what would stop you from killing a million people if your probability estimates showed that there was a serious risk posed by those people?
If you read this comment thread you’ll see what I mean, and what danger this movement might pose: ‘follow Eliezer’, ‘donating as much as possible to SIAI’, ‘kill a whole planet’, ‘afford to leave one planet’s worth’, ‘maybe we could even afford to leave their brains unmodified’... lesswrong.com sometimes makes me feel more than a bit uncomfortable, especially if you read between the lines.
Yes, you might be right about all the risks in question. But you might be wrong about the means of stopping them.
I’m not sure if this was meant for me; I agree with you about free speech and not deleting the posts. I don’t think it means EY and this movement are a great danger, though. Deleting the posts was the wrong decision, and hopefully it will be reversed soon, but I don’t see that as indicating that anyone would go out and kill people to help the Singularity occur. If there really were a Langford Basilisk, say, a joke that made you die laughing, I would want it removed.
As to that comment thread: Peer is a very cool person and a good friend, but he is a little crazy and his beliefs and statements shouldn’t be taken to reflect anything about anyone else.
I know, it wasn’t my intention to discredit Peer, I quite like his ideas. I’m probably more crazy than him anyway.
But if I can come up with such conclusions, who else will? And why isn’t anyone out to kill people, or going to be? I’m serious: why not? Just imagine EY found out that we could be reasonably sure that, for example, Google was about to let loose a rogue AI. Given how the LW audience is inclined to act upon ‘mere’ probability estimates, why wouldn’t it be appropriate to bomb Google, if that were the only way to stop them in time from turning the world into a living hell? And isn’t this meme, given the right people and circumstances, a great danger? Sure, my saying that EY might be the greater danger was nonsense, said only to provoke a response. By definition, not much could be worse than uFAI.
This incident is simply a good situation to extrapolate from. If a thought experiment can be deemed dangerous enough not merely to be censored and deleted, but for people to be told not even to seek any knowledge of it, much less discuss it, then I wonder about the possible reaction to an imminent and tangible danger.
Before someone accuses me of ignoring it, I want to address the point that some people suffer psychological distress from the information in question.
Is active denial of information an appropriate way to handle serious personal problems? If there are people who suffer mental distress from mere thought experiments then, I’m sorry, but I reason along the same lines as the proponents of deleting information that could increase the chance of an AI going wrong: just as they abandon freedom of expression to a certain extent, I advocate drawing the line between freedom of information and the protection of individual well-being at this point. That is not to say that I’d go all the way and advocate, for example, depictions of cruelty to children.
A delicate issue indeed, but one has to take care not to slide into an extremism that relinquishes the very values it is meant to serve and protect.
Maybe you should read the comments in question before you make this sort of post?
This really isn’t worth arguing and there isn’t any reason to be angry...
You are wrong on both counts. There is strong signalling going on that gives good evidence regarding both Eliezer’s intent and his competence.
What Roko said matters little; what Eliezer said (and did) matters far more. He is the one trying to take over the world.
I don’t consider frogs to be objects of moral worth. -- Eliezer Yudkowsky
Yeah, OK, frogs... but wait! This is the person who’s going to design the moral seed of our coming god-emperor. I’m not sure everyone here is aware of the range of consequences of taking statements like this as corroboration that pursuing this route is correct. That is, are we going to replace unfriendly AI with an unknown EY? Have we reached the point where we can say that EY is THE master who’ll decide what’s reasonable to say in public and what should be deleted?
Ask yourself if you really, seriously believe in the ideas posed on LW, enough to follow them into the realms of radical oppression in the name of good and evil.
There are some good questions buried in there that may be worth discussing in more detail at some point.
I am vaguely confused by your question and am going to stop having this discussion.
Before getting angry, it’s always a good idea to check whether you’re confused. And you are.