If your arguments and conclusions are actually correct, why haven’t other people discovered them independently and either made them public (due to less concern about causing controversy) or made similar claims (about having private arguments)?
To offer another data point in addition to Konkvistador’s, HughRistik made similar claims to me. We had a brief private exchange, the contents of which I promised to keep private. However, I think that I can say, without breach of promise, that the examples he offered in private did not seem to me to be as poisonous to public discourse as he believed.
On the other hand, I could see that the arguments he gave were for controversial positions, and anyone arguing for those positions would have to put in some cognitively demanding effort to word their arguments so as to avoid poisoning the discourse. I can see that someone might want to avoid this effort. But, on the whole, the level of effort required didn’t seem to me to be that high. I think it would be easy enough (not easy, but easy enough) for Vladimir_M to make these arguments publicly and productively that he should want to do so for the reasons you give.
(I’ll also add that the evidence HughRistik offered was serious and deserved respectful consideration, but it did not move me much from my previous mainstream-liberal views on the issues in question.)
anyone arguing for those positions would have to put in some cognitively demanding effort to word their arguments so as to avoid poisoning the discourse.
Merely expressing certain thoughts in a clear way is deemed to poison the discourse on this forum, whereas expressing certain other thoughts, no matter how rudely, aggressively, childishly, and offensively, is not deemed to poison the discourse. The only way to get away with expressing these thoughts on this forum is to express them as Vlad does, in code that is largely impenetrable except to those that already share those ideas.
And as evidence for this proposition, observe that no one does express these thoughts plainly on this forum, not even me, while they are routinely expressed on other forums.
Lots of people argue that we are heading not for a technological singularity but for a left political singularity that will likely result in the collapse of Western civilization. You could not possibly argue that on this forum.
Indeed it is arguably inadvisable to argue that even on a website located on a server within the USA or Europe, though Mencius Moldbug did.
This post doesn’t deserve the downvotes it got. Upvoted.
And as evidence for this proposition, observe that no one does express these thoughts plainly on this forum, not even me, while they are routinely expressed on other forums.
Urban Future is a rather interesting blog; I just read his Dark Enlightenment series and found it a good overview and synthesis of recent reactionary thought. I also liked some of his technology and transhumanist posts.
Lots of people argue that we are heading not for a technological singularity but for a left political singularity that will likely result in the collapse of Western civilization. You could not possibly argue that on this forum.
It is probably true that we couldn’t discuss this regardless of how much evidence existed for it. Ever since I started investigating how and why values change, the process of the last 250 years that we’ve decided to label “moral progress”, I’ve been concerned about social phenomena like the one described in the post seriously harming mankind. To quote my comment on the blog post:
I sometimes wonder whether that is an illusion. What if we are that lucky branch of the multiverse where, looking just at it, it looks as though a Maxwell’s demon is putting society back into working order?
This would also explain the Fermi Paradox, if all intelligent life in our universe tends to eventually spiral into perfect leftism as described in the OP. If so, building self-improving AI designed to extrapolate human ethics, as the folks at SIAI hope to do, may be an incredibly bad idea.
“If it did not end, the final outcome, infinite leftism in finite time, would be that everyone is tortured to death for insufficient leftism…”
I hope this model of the universe is as unlikely as I think it is!
I’d rather you refer to Three Worlds Collide than discuss such morbid fantasies! (I’ve read Land and he makes H.L. Mencken look kind and cheerful by comparison.)
One possible (overly narrow) ideological interpretation is that of a Space-Liberal humanity forcefully imposing Space Liberalism on the Babyeaters while resisting the imposition of Space Communism upon itself, despite the relative positions being identical in both cases. In which case… was the Normal Ending really so awful? :)
No, but seriously. Consider it. I mean, the Superhappies are a highly egalitarian, collectivist, expansionist, technology-focused, peace- and compromise-loving culture with universalist ideals that they want to spread everywhere.
Aside from the different biology, that sounds like the Communist sci-fi utopias I’ve read of, like Banks’ Culture and the Strugatsky brothers’ Noon Universe. All three are proper subsets of “Near-Maximum Leftism”, in my opinion. And I would hardly be terrified if I were offered the chance to live in either one, or even a downgraded version of one with a little Space Bureaucracy. Frankly, I wouldn’t even mind a Space Brezhnev, as long as he behaved. I can name a dozen much worse (non-socialist) rulers than the real Brezhnev!
(Can you imagine tentacle sex being plagued by bureaucracy? “Sorry, comrade, you’ll need a stamp before I can give you an orgasm, and the stamp window doesn’t work today.”)
Space Communism is infinite sex with everything? People are right: space makes everything better.