There’s one of his best articles:
http://lesswrong.com/lw/dr/generalizing_from_one_example/
It starts rather well, discussing an interesting study by Galton. High-brow, sophisticated style, an almost convincing impression of an upper-class liberal person, up until he gets to the issue that for some reason actually interests him: rationalizing the views of the PUA community on women. I say rationalizing because the mind projection fallacy would, of course, affect PUAs’ opinions of women just as much as it affects women’s opinions of women, but it is only in the latter that the fallacy gets noticed.
This, by the way, is a great example of how cognitive fallacies are typically used here.
I’m not the least bit surprised that he would also support eugenics via sterilization. Edit: or express sympathy towards it, or the like.
Yvain has told you in the past the following:
Could you do me a BIG FAVOR and every time you write “Yvain says...” or “Yvain believes...” in the future, follow it with “...according to my interpretation of him, which has been consistently wrong every time I’ve tried to use it before”? I am getting really tired of having to clean up after your constant malicious misinterpretations of me.
So everyone should be aware that whenever Dmytry/private_messaging claims Yvain said something, that’s almost always wrong according to Yvain’s own view of what Yvain said.
The original quote from Yvain was:
“I suppose the difference is whether you’re doing the Intel attack now, or in a hypothetical future in which Intel is making brain simulators that seem likely to become AGI. As someone else mentioned, if we’re talking about literally THEY ARE BUILDING SKYNET RIGHT NOW, then violence seems like the right idea.”
Emphasis mine. In this original quote, in a hypothetical future where Intel is building brain simulators that seem likely to become artificial general intelligence, he supports violence. As clear as it can be.
His subsequent re-formulation to make himself look less bad was:
“Even Yvain supports violence if AI seems imminent”. No, I might support violence if an obviously hostile unstoppable SKYNET-style AI seemed clearly imminent.
Now, the caveat here is that he would have to count brain simulators built by Intel in that hypothetical future as an example of “an obviously hostile unstoppable SKYNET-style AI”, a clear contradiction (if it were so obvious, Intel wouldn’t be making those brain emulations).
I don’t think he does...
Hmm. In all fairness, I’m not quite sure what he means by eugenics. Historically, the term is virtually never applied to non-coercive measures (such as an IQ cut-off at sperm banks).
From this comment:
“Even though I like both basic income guarantees and eugenics, I don’t think these are two things that go well together – making the income conditional upon sterilization is a little too close to coercion for my purposes. Still, probably better than what we have right now.”