In what way would the infection-resistant body or the lightcone destiny-setting world government pose limits to evolution via variation and selection?
To me it seems that the alternative can only ever be homeostasis—of the radical, lukewarm-helium-ion-soup kind.
When I say:
> You state Pythia mind experiment. And then react to it
I imply that in doing so you are citing Land.
er—this defies all rules of conversational pragmatics, but look, i concede if it stops further preposterous rebuttals.
More importantly, this is completely irrelevant to the substance of the discussion. My good faith doesn’t depend in the slightest on whether you’re citing Land or writing things yourself.
of course it doesn’t. my opinion on your good faith depends on whether you are able to admit having deeply misunderstood the post.
saying something of substance: i did, in the post. i’d respond to object-level criticism if you provided some—i just see status-jousting, formal pedantry, and random fnords.

have you read The Obliqueness Thesis, btw? as i mentioned above, that’s a gloss on the same texts that you might find more accessible—per the editor’s note, i contributed this to help those who’d want to check the sources upon reading it, so i’m not really sure how writing my own arguments would help.
Look, friend.
You said you understood from the beginning that the text in question was Land’s.
In your first comment, though, you clearly show that not to be the case:
> I do not see how you are doing that. You state Pythia mind experiment. And then react to it: “You go girl!”. I suppose both the description of the mind experiment and the reaction are faithful. But there is no actual engagement between orthogonality thesis and Land’s ideas.
This clearly marks me as the author, as separate from Land. I find it hard to keep engaging under an assumption of good faith on these premises.
the purpose of any test is to measure something. in this case, the ability to simulate other views. let’s not be overly pedantic.
anyway, you failed the Turing test with your dialogue, which surprises me, since the crucial points are recovered right above. maybe @jessicata’s The Obliqueness Thesis can help—it’s written in High Lesswrongian, which I assume is the register most likely to trigger some interpretative charity.
uh, I see—I’ve put the editor’s note in a blockquote; hope that helps at least to make its meta-character clearer (:
sure? that would blockquote 75% of the article
perhaps I could blockquote the editor’s note instead?
I stand corrected. What do you suggest? See the other comment.
My bad, I didn’t check and was tricked by the timing. Sincere apologies.
How would you suggest it could be improved? (The TeX version in the PDF contains only Nick Land’s text.)
I was thinking perhaps of adding a link to each XS item, but wasn’t really looking forward to rehashing the comments of what has probably been the nadir in r/acc / LW diplomatic relations.
the editor’s note, mine, is marked with the helpful title “editor’s note”, while the xenosystem pieces about orthogonality are marked with “xenosystems: orthogonality”.
you seem to be the only user, although not the only account, who experienced this problem.
> propaganda of nick land’s idea
wait—are you aware that the texts in question are nick land’s? i think it should be pretty clear from the editor’s note.
besides, in the first extract, the labels part was entirely incidental—and has literally no import to any of the rest. it was a historical artefact; the meat of the first section was, well, the thing indicated by its title and its text. i definitely see the issue of fixating on labels, now, tho—and i thank you for providing an object lesson.

> ideological turing test
the purpose of the ideological turing test is to represent the opposing views in ways that your opponent would find satisfactory. I have it from reliable sources that Bostrom found the opening paragraphs, until “sun’s eventual expansion”, satisfactory.
i really cannot shake the feeling that you hadn’t read the post to begin with, and that now you are simply scanning it in order to find rebuttals to my comments. your grasp of basic, factual statements seems to falter, to the point of suggesting that my engagement with what purport to be more fundamental points might be a suboptimal allocation of resources.
how is meditations on moloch a better explanation of the will-to-think, or a better rejection of orthogonality, than the above?
I think the argument is stated as clearly as is appropriate under the assumption of a minimally charitable audience; in particular, I am puzzled by the accusations of “propaganda”. propaganda of what? Darwin? intelligence? Gnon?
I cannot shake the feeling that the commenter might have only read the first extract and either fell victim to fnords or found it expedient to leave a couple of them for the benefit of less sophisticated readers—in particular, has the commenter not noticed that the whole first part of Pythia Unbound is an ideological Turing test, passed with flying colours?
wait—do you consider that an insult? i snuggled with the best of them
[curious about the downvotes—there’s usually much /acc criticising around these parts, I thought having the arguments in question available in a clear and faithful rendition would be considered an unalloyed good from all camps? but i’ve not poasted here since 2018, will go read the rules in case something changed]
Nick Land: Orthogonality
So, something like “quiet quitting”?
Well, no—not necessarily. And with all the epistemic charity in the world, I am starting to suspect you might benefit from actually reading the review at this point, just to have more of an idea of what we’re talking about.
Funny, I see “exit” as more or less the opposite of the thing you are arguing against. Land (and Moldbug) refer to this book by Hirschman, where “exit” is contrasted with “voice”—the other way to counter institutional/organisational decay. In that model, exit is individual and aims to carve out a space for a different way of doing things, while voice is collective, and aims to steer the system towards change.
Balaji’s network state, cryptocurrency, etc. are all examples. Many can run parallel to existing institutions, working along different dimensions, and testing configurations which might one day end up being more effective than the legacy institutions themselves.
I’m trying to understand where the source of disagreement lies, since I don’t really see much “overconfidence”—i.e., I don’t see much of a probabilistic claim at all. Let me know if one of these suggestions points somewhere close to the right direction:
The texts cited were mostly a response to the putative inevitability of orthogonalism. Once that was (i think effectively) dispatched, one might consider that part of the argument closed.
After that, one could excuse him for being less rigorous and having more fun with the rest; the goal there was not to debate but to allow the reader to experience what something akin to will-to-think would be like (i’m aware this is frowned upon in some circles).

The crux of the matter, imo, is not that thinking a lot about meta-ethics changes your values. Rather, it’s that an increase in intelligence does—and namely, it changes them in the direction of greater appreciation for complexity and desire for thinking, and this change takes forms unintelligible to those one rung below. Of course, here the argument is either inductive/empirical or kinda neoplatonic. I will spare you the latter version, but the former would look something like:
- Imagine a fairly uncontroversial intelligence-sorted line-up, going:
thermostat → mosquito → rat(🐭) → chimp → median human → rat(Ω)
- Notice how intelligence grows together with the desire for more complexity, with curiosity, and ultimately with the drive towards increasing intelligence per se; and notice also how morality evolves to accommodate those drives (one really wouldn’t want those to the left of wherever one stands to impose their moral code on those to the right).
While I agree these sorts of arguments don’t cut it for a typical post-analytical, lesswrong-type debate, I still think that, at the very least, Occam’s razor should strongly slash their way—unless there’s some implicit counterargument I missed.
(As for the opportunity cost of deepening your familiarity with the subject matter, you might be right. The style of philosophy Land adopts is very, very different from the one appreciated around here—it is indeed often a target for snark—and while I think there’s much of interest on that side of the continental split, the effort required to overcome the aesthetic shift, weighted by the chance of such a shift completing, might still not make it worth it.)
I’m not sure I agree—in the original thought experiment, it was a given that increasing intelligence would lead to changes in values in ways that the agent, at t=0, would not understand or share.
At this point, one could decide whether to go for it or hold back—and we should all consider ourselves lucky that our early sapiens predecessors didn’t take the second option.
(btw, I’m very curious to know what you make of this other Land text: https://etscrivner.github.io/cryptocurrent/)
I don’t see the choice of “allowing a more intelligent set of agents to take over” as particularly altruistic: personally, I think intelligence trumps species, and I am not convinced that interrupting its growth to make sure more sets of genes similar to mine find hosts for longer would somehow be “for my benefit”.
Even in my AI Risk years, what I was afraid of is the same thing I’m afraid of now: Boring Futures. The difference is that in the meantime the arguments for a singleton ASI, with a single unchangeable utility function that is not more intelligence/knowledge/curiosity, became less and less tenable (together with FOOM within our lifetimes).
This being the case, “altruistic” really seems out of place: it’s likely that early sapiens would have understood nothing of our goals, our morality, and the drives that got us to build civilisations—but would it have been better for them had they murdered the first guy in the troop they found flirting with a Neanderthal, and prevented all this? I personally doubt it, and I think the comparison between us and ASI is more or less in the same ballpark.
let’s try it from the other direction:
do you think stable meta-values are to be observed between australopithecines and, say, contemporary western humans? on the other hand: do values across primitive tribes or early agricultural empires not look surprisingly similar? third hand: what makes it so that we can look back and compare those value systems, while it would be nigh-impossible for the agents in question to wrap their heads around even something as “basic” as representative democracy?
i don’t think it’s thought so much as the capacity for it that changes one’s values. for instance, ontogeny recapitulating phylogeny: would you think it wise to have @TsviBT¹⁹⁹⁹ align contemporary Tsvi based on his values? How about vice versa?