jchan
However, in the Many-Worlds Interpretation (MWI), I split my measure among multiple variants, which will be functionally different enough to regard my future selves as different minds. Thus, the act of choice itself lessens my measure by a factor of approximately 10. If I care about this, I’m caring about something unobservable.
If we’re going to make sense of living in a branching multiverse, then we’ll need to adopt a more fluid concept of personal identity.
Scenario: I take a sleeping pill that will make me fall asleep in 30 minutes. However, the person who wakes up in my bed the next morning will have no memory of that 30-minute period; his last memory will be of taking the pill.
If I imagine myself experiencing that 30-minute interval, intuitively it doesn’t at all feel like “I have less than 30 minutes to live.” Instead, it feels like I’d be pretty much indifferent to being in this situation—maybe the person who wakes up tomorrow is not “me” in the artificial sense of having a forward-looking continuity of consciousness with my current self, but that’s not really what I care about anyway. He is similar enough to current-me that I value his existence and well-being to nearly the same degree as I do my own; in other words, he “is me” for all practical purposes.
The same is true of the versions of me in nearby world branches. I can no longer observe or influence them, but they still “matter” to me. Of course, the degree of self-identification will decrease over time as they diverge, but then again, so does my degree of identification with the “me” many decades in the future, even assuming a single timeline.
This can be a great time-saver because it relies on each party to present the best possible case for their side. This means I don’t have to do any evidence-gathering myself; I just need to evaluate the arguments presented, with that heuristic in mind. For example, if the pro-X side cites a bunch of sources in favor of X, but I look into them and find them unconvincing, then this is pretty good evidence against X, and I don’t have to go combing through all the other sources myself. The mere existence of bad arguments for X is not in itself evidence against X, but the fact that they’re presented as the best possible arguments is.
Of course the problem is, outside of a legal proceeding, parties rarely have that strong an incentive to dig up the best possible arguments. Their time is limited as well, and they don’t really suffer much consequence from failing to convince you. Also, the discussion medium might structurally impede the best arguments from being given (e.g. replies in a Twitter thread need to be posted quickly or else nobody will see them). Or worse yet, a skilled propaganda campaign can flood the zone with bad pro-X arguments from personages who appear to be pro-X but are secretly against it, knowing that the audience is going to be evaluating these arguments using the adversarial heuristic.
In my experience, Americans are actually eager to talk to strangers and make friends with them if and only if they have some good reason to be where they are and to talk to those people, besides making friends.
A corollary of this is that if anyone at an [X] gathering is asked “So, what got you into [X]?” and answers “I heard there’s a great community around [X]”, then that person needs to be given the cold shoulder and made to feel unwelcome, because otherwise the bubble of deniability is pierced and the lemon spiral will set in, ruining it for everyone else.
However, this is pretty harsh, and I’m not confident enough in this chain of reasoning to actually “gatekeep” people like this in practice. Does this ring true to you?
I highly recommend Val Plumwood’s essay Tasteless: towards a food-based approach to death for a “green-according-to-green” perspective.
Plumwood would turn the “deep atheism” framing on its head, by saying in effect “No, you (the rationalist) are the real theist”. The idea is that even if you’ve rejected Cartesian/Platonic dualism in metaphysics, you might still cling for historical reasons to a metaethical-dualist view that a “real monist” would reject, i.e. the dualism between the evaluator and the evaluated, or between the subject and object of moral values. Plumwood (I think) would say that even the “yin” (acceptance of nature) framing is missing the mark, because it still assumes a distinction between the one doing the accepting and the nature being accepted, positing that they simply happen to be aligned through some fortunate circumstance, rather than being one and the same thing.
It’s a question of whether drawing a boundary on the “aligned vs. unaligned” continuum produces an empirically-valid category; and to this end, I think we need to restrict the scope to the issues actually being discussed by the parties, or else every case will land on the “unaligned” side. Here, both parties agree on where they stand vis-a-vis C and D, and so would be “Antagonistic” in any discussion of those options, but since nobody is proposing them, the conversation they actually have shouldn’t be characterized as such.
On the contrary, I’d say internet forum debating is a central example of what I’m talking about.
This “trying to convince” is where the discussion will inevitably lead, at least if Alice and Bob are somewhat self-aware. After the object-level issues have been tabled and the debate is now about whether Alice is really on Bob’s side, Bob will view this as just another sophisticated trick by Alice. In my experience, Bob-as-the-Mule can only be dislodged when someone other than Alice comes along, who already has a credible stance of sincere friendship towards him, and repeats the same object-level points that Alice made. Only then will Bob realize that his conversation with Alice had been Cassandra/Mule.
(Example I’ve heard: “At first I was indifferent about whether I should get the COVID vaccine, but then I heard [detestable left-wing personalities] saying I should get it, so I decided not to out of spite. Only when [heroic right-wing personality] told me it was safe did I get it.”)
#1 - I hadn’t thought of it in those terms, but that’s a great example.
#2 - I think this relates to the involvement of the third-party audience. Free speech will be “an effective arena of battle for your group” if you think the audience will side with you once they learn the truth about what [outgroup] is up to. Suppose Alice and Bob are the rival groups, and Carol is the audience, and:
Alice/Bob are SE/SE (Antagonist/Antagonist)
Alice/Carol are SF/IE (Guru/Rebel)
Bob/Carol are IF/SE (Siren/Sailor)
If this is really what’s going on, Alice will be in favor of the debate continuing because she thinks it’ll persuade Carol to join her, while Bob is opposed to the debate for the same reason. This is why I personally am pro-free-speech—because I think I’m often in the role of Carol, and supporting free speech is a “tell” for who’s really on my side.
I think this is not a great example because the virtues being extolled here are orthogonal to the outcome.
Would it still be possible to explain these virtues in a consequentialist way, or is it only some virtues that can be explained in this way?
And consequentialists can choose to value their own side more than the other side, or to be indifferent between sides, so I’m not sure what the conflict between virtue ethics and consequentialism would be here.
The special difficulty here is that the two sides are following the same virtue-ethics framework, and come into conflict precisely because of that. So, whatever this framework is, it cannot be cashed out into a single corresponding consequentialist framework that gives the same prescriptions.
It could be that people regard the likelihood of being resurrected into a bad situation (e.g. as a zoo exhibit, a tortured worker em, etc.) as outweighing that of a positive outcome.
Aren’t there situations (at least in some virtue-ethics systems) where it’s fundamentally impossible to reduce virtue ethics to consequentialism (or reconcile the two), because actions tending toward the same consequence are called both virtuous and unvirtuous depending on who does them? (Or, conversely, where virtuous conduct calls for people to do things whose consequences are in direct opposition.)
For example, the Iliad portrays both Achilles (Greek) and Hector (Trojan) as embodying the virtues of bravery/loyalty/etc. for fighting for their respective sides, even though Achilles’s consequentialist goal is for Troy to fall, and Hector’s is for that not to happen. Is this an accurate characterization of how virtue-ethics works? Is it possible to explain this in a consequentialist frame?
Thanks everyone for coming! Feedback survey here: https://forms.gle/w32pisonKdwK1bHJ6
It’s also nice to be able to charge up in a place where directly plugging in your device would be inconvenient or would risk theft, e.g. at a busy cafe where the only outlet is across the room from your table.
I want to say something like: “The bigger N is, the bigger a computer needs to be in order to implement that prior; and given that your brain is the size that it is, it can’t possibly be setting N=3↑↑↑↑↑3.”
Now, this isn’t strictly correct, since the Solomonoff prior is uncomputable regardless of the computer’s size, etc. - but is there some kernel of truth there? Like, is there a way of approximating the Solomonoff prior efficiently, which becomes less efficient the larger N gets?
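For intuition about why the size of N matters, the up-arrow notation can be sketched directly. This is just a toy illustration of hyperoperation growth (the function name and cases are my own), not anything to do with computing the Solomonoff prior itself:

```python
def up(a, n, b):
    """Knuth's up-arrow: a "up-n" b, i.e. iterated exponentiation.
    up(3, 5, 3) would be 3^^^^^3 -- far too large to ever evaluate."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

# Only the tiniest cases are feasible:
# up(3, 1, 3) = 3^3 = 27
# up(3, 2, 3) = 3^(3^3) = 7625597484987
# up(3, 3, 2) = 3^^3 = 7625597484987, while up(3, 3, 3) is already
# a power tower of roughly 7.6 trillion 3s.
```

Even one more arrow or one more step in `b` jumps past anything physically representable, which is the intuition behind “your brain can’t possibly be setting N=3↑↑↑↑↑3.”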
I’m unsure whether it’s a good thing that LLaMA exists in the first place, but given that it does, it’s probably better that it leak than that it remain private.
What are the possible bad consequences of inventing LLaMA-level LLMs? I can think of three. However, #1 and #2 are of a peculiar kind where the downsides are actually mitigated rather than worsened by greater proliferation. I don’t think #3 is a big concern at the moment, but this may change as LLM capabilities improve (and please correct me if I’m wrong in my impression of current capabilities).
1. Economic disruption: LLMs may lead to unemployment because it’s cheaper to use one than to hire a human to do the same work. However, given that they already exist, it’s only a question of whether the economic gains accrue to a few large corporations or to a wider mass of people. If you think economic inequality is bad (whether per se or due to its consequences), then you’ll think the LLaMA leak is a good thing.
2. Informational chaos: You can never know whether product reviews, political opinions, etc. are actually genuine expressions of what some human being thinks rather than AI-generated fluff created by actors with an interest in deceiving you. This was already a problem (e.g. paid shills), but with LLMs it’s much easier to generate disinformation at scale. However, this problem “solves itself” once LLMs are so easily accessible that everyone knows not to trust anything they read anyway. (By contrast, if LLMs are kept private, AI-generated content seems more trustworthy because it comes in a wider context where most content is still human-authored.)
3. Infohazard production: If e.g. there’s some way of building a devastating bioweapon using household materials, then it’d be really bad if LLaMA made this knowledge more accessible, or could discover it anew. However, I haven’t seen any evidence that LLaMA is capable of discovering new scientific knowledge that’s not in the training set, or that querying it to surface existing such knowledge is any more effective than using a regular search engine. But this may change with more advanced models.
One time, a bunch of particularly indecisive friends had started an email thread in order to arrange a get-together. Several of them proposed various times/locations but nobody expressed any preferences among them. With the date drawing near, I broke the deadlock by saying something like “I have consulted the omens and determined that X is the most auspicious time/place for us to meet.” (I hope they understood I was joking!) I have also used coin-flips or the hash of an upcoming Bitcoin block for similar purposes.
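The block-hash trick can be sketched like this (the function and example values are my own illustration; any shared, hard-to-manipulate string works as the seed):

```python
import hashlib

def pick(options, seed):
    """Map a shared unpredictable seed (e.g. the hash of a pre-agreed
    future Bitcoin block) to one of the options. Everyone who runs this
    with the same inputs gets the same answer, and nobody chose it."""
    digest = hashlib.sha256(seed.encode()).hexdigest()
    return options[int(digest, 16) % len(options)]

# Once the agreed-upon block's hash is known, all parties compute e.g.:
# pick(["Saturday brunch", "Sunday dinner"], "<that block's hash>")
```

The modulo step has a slight bias for option counts that don’t divide the hash space evenly, but for picking among a handful of meeting times it’s entirely negligible.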
I think the sociological dynamic is something like: Nobody really cares what we coordinate on, but they do care about (a) not wanting to be seen as unjustifiably grabbing social status by imposing a single choice on everyone else, and (b) not wanting to accept lower status by going along with someone else’s preference. So, to coordinate, we defer the choice to some “objective” external process, so that nobody’s social status is altered by it.
An example where this didn’t work: The Gregorian calendar took centuries to be adopted throughout Europe, despite being justified by “objective” astronomical data, because non-Catholic countries thought of it as a “papal imposition” whose acceptance would imply acceptance of the Pope’s authority over the whole Christian church. (Much better to stick with Julius Caesar’s calendar instead!)
This may shed some light onto why people have fun playing the Schelling game. It’s always amusing when I discover how uncannily others’ thoughts match my own, e.g. when I think to myself “X! No, X is too obscure, I should probably say the more common answer Y instead”, and then it turns out X is the majority answer after all.
Thanks everyone for coming! Feedback survey here: https://forms.gle/Nx4vqmXZnJ8EuuKP9
What exactly did you do with the candles? I’ve seen pictures and read posts mentioning the fact that candles are used at solstice events, but I’m having trouble imagining how it works without being logistically awkward. E.g.:
Where are the candles stored before they’re passed out to the audience?
At what point are the candles passed out? Do people get up from their seats, go get a candle, and then return to their seats, or do you pass around a basket full of candles?
When are the candles initially lit? Before or after they’re distributed?
When are the candles extinguished during the “darkening” phase? How does each person know when to extinguish their own candle?
Is there a point later when people can ditch their candles? Otherwise, it must be annoying to have to hold a lit candle throughout the whole “brightening” phase.
What happens to the candles at the end?
For certain kinds of questions (e.g. “I need a new car; what should I get?”), it’s better to ask a bunch of random people than to turn to the internet for advice.
In order to be well-informed, you’ll need to go out and meet people IRL who are connected (at least indirectly) to the thing you want information about.
I agree with this version, and I was surprised to see that the Wikipedia definition also includes the bit about it being a deliberate conspiracy, which seems like a strawman, since I have always understood the “Dead Internet Theory” to include only the first part. There’s a lot of stuff on the internet that’s very obviously AI-generated, and so it’s not too far a stretch to suppose that there’s also a lot of synthetic content that hides it better. But this can be explained far more simply than by some vast conspiracy—as SEO, marketing, and astroturfing campaigns.
If Dead Internet Theory is correct, when you see something online, the question you should ask yourself is not “Is this true?” but “Why am I seeing this?” This was always the case to some extent of any algorithmically-curated feed (where the algorithm is anything more complex than “show me all of the posts in reverse chronological order”), but is even more significant when the content itself is algorithmically generated.
If I’m searching online for information about e.g. what new car I should buy, there’s a very strong incentive for all the algorithms involved (both the search engine itself, and the algorithm that spits out the list of recommended car models) to sell their recommendations to the highest bidder and churn out ex post facto justifications explaining why their car is really the best. These algorithms are almost totally uncorrelated with the underlying fact of which car I’d actually want, so I give their output very little weight. On the other hand, I would argue, asking a bunch of random acquaintances for car recommendations is much more useful because, although they might not be experts, they were at least not specifically selected in order to deceive me. Even if I ask a friend and they say “Well, I haven’t bought a new car in years, but I heard my coworker’s cousin bought the XYZ and never stops complaining about it”, this is much more useful information than anything I could find online, because it’s much less likely that my friend’s coworker’s cousin was specifically paid to say that.
More broadly, on many questions of public concern there may be parties with a strong interest in using bots to create the impression of a broad consensus one way or another. This means that you have no choice but to go out into the real world and ask people, and hope ideally that they’re not simply repeating what they read online, but have some non-AI-mediated connection to the thing.