I think this is mostly just the macro-trend of the internet shifting away from open forums and blogs and towards the “cozy web” of private groupchats etc., not anything specific about LessWrong. If anything, LessWrong seems to be bucking the trend here, since it remains much more active than most other sites that had their heyday in the late 00s.
I don’t have any dog in the Achmiz/Worley debate, but I’m having trouble getting in the headspace of someone who is driven away from posting here because of one specific commenter.
First of all, I don’t think anyone is ever under any obligation to reply to commenters at all—simply dropping out of a conversation thread doesn’t feel rude/confrontational in the way it would be to say IRL “I’m done talking to you now.”
Second, I would find it far more demotivating to just get zero engagement on my posts—if I didn’t think anybody was reading, it’s hard to justify the time and effort of posting. But otherwise, even if some commenters disagree with me, my post is still part of the discourse, which makes it worthwhile.
I think this approach wouldn’t work for rationalists, for two reasons:
The rationality community is based around disputation, not canonicalization, of texts. That is, the litmus test for being a rationalist is not “Do you agree with this list of propositions?” (I have tried many times to draw up such a list, but this always just leads to even more debate), but rather “Are you familiar with this body of literature and do you know how to respond to it?” The kind of person who goes to LW meetups isn’t going to enjoy simply being “talked at” and told what to believe—they want to be down in the arena, getting their hands dirty.
Your “recommended template” is essentially individualistic—participants come with their hopes and desires already in-hand, and the only question is “How can I use this community to help me achieve my goals?” Just as a gut feeling I don’t think this is going to work well in building a community or meaningful relationships (seeing others not merely as means, but as ends in themselves—or something like that). Instead, there needs to be some shared purpose for which involvement in the community is essential and not just an afterthought. Now, this isn’t easy. “Solving AI alignment” might be a tall order. But I think the rationality community is doing a passable job at one thing at least—creating a culture of high epistemic standards that will be essential (for both ourselves and the wider world) in navigating the unprecedented challenges our civilization faces.
Can’t speak for Said Achmiz, but I guess for me the main stumbling block is the unreality of the hypothetical, which you acknowledge in the section “This is not a literal description of reality” but don’t go into further. How is it possible for me to imagine what “I” would want in a world where by construction “I” don’t exist? Created Already in Motion and No Universally Compelling Arguments are gesturing at a similar problem, that there is no “ideal mind of perfect emptiness” whose reasoning can be separated from its contingent properties. Now, I don’t go that far—I’ll grant at least that logic and mathematics are universally true even if some particular person doesn’t accept them. But the veil-of-ignorance scenario is specifically inquiring into subjectivity (preferences and values), and so it doesn’t seem coherent to do so while at the same time imagining a world without the contingent properties that constitute that subjectivity.
I think ancient DNA analysis is the space to watch here. We’ve all heard about Neanderthal intermixing by now, but it’s only recently become possible to determine e.g. that two skeletons found in the same grave were 2nd cousins on their father’s side, or whatever. It seems like this can tell us a lot about social behavior that would otherwise be obscure.
It took me years of going to bars and clubs and thinking the same thoughts:
Wow this music is loud
I can barely hear myself talk, let alone anyone else
We should all learn sign language so we don’t have to shout at the top of our lungs all the time
before I finally realized—the whole draw of places like this is specifically that you don’t talk.
Humans have a considerable up-front cost also—it’s called birth and childrearing!
Maybe absolute horse productivity actually has declined, at least for non-farm work. All the roads have been rebuilt for cars, there are no places to tie up my horse for a quick trip to the supermarket, etc. (For that matter, is riding horses on city streets even legal? Regardless, it’s certainly less convenient than it would’ve been in 1900.)
AI analogy: If a job consists mainly of communicating with other employees, it’ll become harder for a human to maintain the same productivity when 99% of those other employees have been replaced with AIs, even if the human is just as intelligent as before.
If there is less demand for hay from horses, that will not do much to lower the price of feed, because cattle, sheep, etc. will pick up the slack.
Other funny thought: From a horse’s perspective, the Amish are an aligned superintelligence.
In the end, despite cheaper feed, the daily cost of horse upkeep (the horse’s subsistence wage, if you will) was higher than the horse’s productivity in its transport and agricultural roles.
Presumably the absolute productivity of a horse (the amount of land it can plow or stuff it can haul) has not changed. So this only makes sense if the market value of the horse’s labor has declined even faster than the price of feed. Is that the case?
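A toy illustration of what that would require, with numbers I’m inventing purely for the arithmetic (not drawn from any actual price data): suppose that around 1915 a day of horse labor was worth about $4 against $2/day of feed, while by 1935 feed had halved in price but the labor was worth far less:

$$\text{1915: } \$4 - \$2 = +\$2/\text{day}, \qquad \text{1935: } \$0.50 - \$1 = -\$0.50/\text{day}.$$

Feed getting 50% cheaper doesn’t save the horse if the market value of its hauling and plowing falls by ~90% (because trucks and tractors now do the same work for less); that is the shape of decline the quoted claim seems to require.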
rather than, say, assigning equal probability to all strings of bits we might observe
If the space of possibilities is not arbitrarily capped at a certain length, then such a distribution would have to favor shorter strings over longer ones in much the same way as the Solomonoff prior over programs (if it didn’t, its total probability would diverge instead of summing to 1). But then this yields a prior that is constantly predicting that the universe will end at every moment, and is continually surprised when it keeps on existing. I’m not sure if this is logically inconsistent, but at least it seems useless for any practical purpose.
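To make that concrete with a toy construction of my own (not anything from the Solomonoff literature): give the set of length-$n$ strings total mass $2^{-(n+1)}$, split uniformly among the $2^n$ strings of that length. The conditional probability that the string ends at length $n$, given that it has reached length $n$, is then

$$P(\ell = n \mid \ell \ge n) \;=\; \frac{2^{-(n+1)}}{\sum_{m \ge n} 2^{-(m+1)}} \;=\; \frac{2^{-(n+1)}}{2^{-n}} \;=\; \frac{1}{2}.$$

So after every bit it observes, this prior expects a coin-flip chance that the string simply stops there. The factor of 1/2 is my arbitrary choice, but any prior that puts all its mass on finite strings has to retain some persistent stopping probability of this kind.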
For certain kinds of questions (e.g. “I need a new car; what should I get?”), it’s better to ask a bunch of random people than to turn to the internet for advice.
In order to be well-informed, you’ll need to go out and meet people IRL who are connected (at least indirectly) to the thing you want information about.
In the following, I will use the term “my DIT” to refer to the claim that:
In some specific non-trivial contexts, on average more than half of the participants in online debate who pose as distinct human beings are actually bots.
I agree with this version. I was surprised to see that the Wikipedia definition also includes the bit about it being a deliberate conspiracy, which seems like a strawman; I have always understood “Dead Internet Theory” to mean only the first part (that much of the apparent activity is bots). There’s a lot of stuff on the internet that’s very obviously AI-generated, and so it’s not too far a stretch to suppose that there’s also a lot of synthetic content that hides it better. But this can be explained far more simply than by some vast conspiracy—as SEO, marketing, and astroturfing campaigns.
If Dead Internet Theory is correct, when you see something online, the question you should ask yourself is not “Is this true?” but “Why am I seeing this?” This was always true to some extent of any algorithmically-curated feed (where the algorithm is anything more complex than “show me all of the posts in reverse chronological order”), but it becomes even more significant when the content itself is algorithmically generated.
If I’m searching online for information about e.g. what new car I should buy, there’s a very strong incentive for all the algorithms involved (both the search engine itself, and the algorithm that spits out the list of recommended car models) to sell their recommendations to the highest bidder and churn out ex post facto justifications explaining why their car is really the best. Those recommendations are almost totally uncorrelated with the underlying facts about which car I’d actually want, so I know to treat them as having very little value. On the other hand, I would argue, asking a bunch of random acquaintances for car recommendations is much more useful because, although they might not be experts, they were at least not specifically selected in order to deceive me. Even if I ask a friend and they say “Well, I haven’t bought a new car in years, but I heard my coworker’s cousin bought the XYZ and never stops complaining about it”, then this is much more useful information than anything I could find online, because it’s much less likely that my friend’s coworker’s cousin was specifically being paid to say that.
More broadly, on many questions of public concern there may be parties with a strong interest in using bots to create the impression of a broad consensus one way or another. This means that you have no choice but to go out into the real world and ask people, and hope ideally that they’re not simply repeating what they read online, but have some non-AI-mediated connection to the thing.
However, in the Many-Worlds Interpretation (MWI), I split my measure between multiple variants, which will be functionally different enough to regard my future selves as different minds. Thus, the act of choice itself lessens my measure by a factor of approximately 10. If I care about this, I’m caring about something unobservable.
If we’re going to make sense of living in a branching multiverse, then we’ll need to adopt a more fluid concept of personal identity.
Scenario: I take a sleeping pill that will make me fall asleep in 30 minutes. However, the person who wakes up in my bed the next morning will have no memory of that 30-minute period; his last memory will be of taking the pill.
If I imagine myself experiencing that 30-minute interval, intuitively it doesn’t at all feel like “I have less than 30 minutes to live.” Instead, it feels like I’d be pretty much indifferent to being in this situation—maybe the person who wakes up tomorrow is not “me” in the artificial sense of having a forward-looking continuity of consciousness with my current self, but that’s not really what I care about anyway. He is similar enough to current-me that I value his existence and well-being to nearly the same degree as I do my own; in other words, he “is me” for all practical purposes.
The same is true of the versions of me in nearby world branches. I can no longer observe or influence them, but they still “matter” to me. Of course, the degree of self-identification will decrease over time as they diverge, but then again, so does my degree of identification with the “me” many decades in the future, even assuming a single timeline.
This can be a great time-saver because it relies on each party to present the best possible case for their side. This means I don’t have to do any evidence-gathering myself; I just need to evaluate the arguments presented, with that heuristic in mind. For example, if the pro-X side cites a bunch of sources in favor of X, but I look into them and find them unconvincing, then this is pretty good evidence against X, and I don’t have to go combing through all the other sources myself. The mere existence of bad arguments for X is not in itself evidence against X, but the fact that they’re presented as the best possible arguments is.
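One way to make the heuristic explicit (my gloss, not something from the original post): let $W$ stand for “the strongest argument actually offered for X is weak.” By Bayes,

$$\frac{P(X \mid W)}{P(\neg X \mid W)} \;=\; \frac{P(W \mid X)}{P(W \mid \neg X)} \cdot \frac{P(X)}{P(\neg X)}.$$

If arguments are sampled haphazardly, $P(W \mid X) \approx P(W \mid \neg X)$ and the weak arguments tell you little; under the adversarial assumption that each side fields its best case, $P(W \mid X) < P(W \mid \neg X)$, and the same weak arguments become genuine evidence against X.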
Of course the problem is, outside of a legal proceeding, parties rarely have that strong an incentive to dig up the best possible arguments. Their time is limited as well, and they don’t really suffer much consequence from failing to convince you. Also, the discussion medium might structurally impede the best arguments from being given (e.g. replies in a Twitter thread need to be posted quickly or else nobody will see them). Or worse yet, a skilled propaganda campaign can flood the zone with bad pro-X arguments from personages who appear to be pro-X but are secretly against it, knowing that the audience is going to be evaluating these arguments using the adversarial heuristic.
In my experience, Americans are actually eager to talk to strangers and make friends with them if and only if they have some good reason to be where they are and to talk to those people, other than making friends.
A corollary of this is that if anyone at an [X] gathering is asked “So, what got you into [X]?” and answers “I heard there’s a great community around [X]”, then that person needs to be given the cold shoulder and made to feel unwelcome, because otherwise the bubble of deniability is pierced and the lemon spiral will set in, ruining it for everyone else.
However, this is pretty harsh, and I’m not confident enough in this chain of reasoning to actually “gatekeep” people like this in practice. Does this ring true to you?
I highly recommend Val Plumwood’s essay Tasteless: towards a food-based approach to death for a “green-according-to-green” perspective.
Plumwood would turn the “deep atheism” framing on its head, by saying in effect “No, you (the rationalist) are the real theist”. The idea is that even if you’ve rejected Cartesian/Platonic dualism in metaphysics, you might still cling for historical reasons to a metaethical-dualist view that a “real monist” would reject, i.e. the dualism between the evaluator and the evaluated, or between the subject and object of moral values. Plumwood (I think) would say that even the “yin” (acceptance of nature) framing is missing the mark, because it still assumes a distinction between the one doing the accepting and the nature being accepted, positing that they simply happen to be aligned through some fortunate circumstance, rather than being one and the same thing.
It’s a question of whether drawing a boundary on the “aligned vs. unaligned” continuum produces an empirically-valid category; and to this end, I think we need to restrict the scope to the issues actually being discussed by the parties, or else every case will land on the “unaligned” side. Here, both parties agree on where they stand vis-a-vis C and D, and so would be “Antagonistic” in any discussion of those options, but since nobody is proposing them, the conversation they actually have shouldn’t be characterized as such.
On the contrary, I’d say internet forum debating is a central example of what I’m talking about.
This “trying to convince” is where the discussion will inevitably lead, at least if Alice and Bob are somewhat self-aware. After the object-level issues have been tabled and the debate is now about whether Alice is really on Bob’s side, Bob will view this as just another sophisticated trick by Alice. In my experience, Bob-as-the-Mule can only be dislodged when someone other than Alice comes along, who already has a credible stance of sincere friendship towards him, and repeats the same object-level points that Alice made. Only then will Bob realize that his conversation with Alice had been Cassandra/Mule.
(Example I’ve heard: “At first I was indifferent about whether I should get the COVID vaccine, but then I heard [detestable left-wing personalities] saying I should get it, so I decided not to out of spite. Only when [heroic right-wing personality] told me it was safe did I get it.”)
#1 - I hadn’t thought of it in those terms, but that’s a great example.
#2 - I think this relates to the involvement of the third-party audience. Free speech will be “an effective arena of battle for your group” if you think the audience will side with you once they learn the truth about what [outgroup] is up to. Suppose Alice and Bob are the rival groups, and Carol is the audience, and:
Alice/Bob are SE/SE (Antagonist/Antagonist)
Alice/Carol are SF/IE (Guru/Rebel)
Bob/Carol are IF/SE (Siren/Sailor)
If this is really what’s going on, Alice will be in favor of the debate continuing because she thinks it’ll persuade Carol to join her, while Bob is opposed to the debate for the same reason. This is why I personally am pro-free-speech—because I think I’m often in the role of Carol, and supporting free speech is a “tell” for who’s really on my side.
I think this is not a great example because the virtues being extolled here are orthogonal to the outcome.
Would it still be possible to explain these virtues in a consequentialist way, or is it only some virtues that can be explained in this way?
And consequentialists can choose to value their own side more than the other side, or to be indifferent between sides, so I’m not sure what the conflict between virtue ethics and consequentialism would be here.
The special difficulty here is that the two sides are following the same virtue-ethics framework, and come into conflict precisely because of that. So, whatever this framework is, it cannot be cashed out into a single corresponding consequentialist framework that gives the same prescriptions.
It could be that people regard the likelihood of being resurrected into a bad situation (e.g. as a zoo exhibit, a tortured worker em, etc.) as outweighing that of a positive outcome.
Isn’t this what we experience every day when we go to sleep or wake up? We know it must be a gradual transition, not a sudden on/off switch, because sleep is not experienced as a mere time-skip—when you wake up, you are aware that you were recently asleep, and not confused how it’s suddenly the next day. (Or at least, I don’t get the time-skip experience unless I’m very tired.)
(When I had my wisdom teeth extracted under laughing gas, it really did feel like all-or-nothing, because once I reawoke I asked if they were going to get started with the surgery soon, and I had to be told “Actually it’s finished already”. This is not how I normally experience waking up every morning.)