jchan
This can be a great time-saver because it relies on each party to present the best possible case for their side. This means I don’t have to do any evidence-gathering myself; I just need to evaluate the arguments presented, with that heuristic in mind. For example, if the pro-X side cites a bunch of sources in favor of X, but I look into them and find them unconvincing, then this is pretty good evidence against X, and I don’t have to go combing through all the other sources myself. The mere existence of bad arguments for X is not in itself evidence against X, but the fact that they’re presented as the best possible arguments is.
Of course, the problem is that outside of a legal proceeding, parties rarely have that strong an incentive to dig up the best possible arguments. Their time is limited as well, and they don’t really suffer much consequence from failing to convince you. Also, the discussion medium might structurally impede the best arguments from being given (e.g. replies in a Twitter thread need to be posted quickly or else nobody will see them). Or worse yet, a skilled propaganda campaign can flood the zone with bad pro-X arguments from personages who appear to be pro-X but are secretly against it, knowing that the audience is going to be evaluating these arguments using the adversarial heuristic.
In my experience, Americans are actually eager to talk to strangers and make friends with them if and only if they have some good reason, other than making friends, to be where they are and to be talking to those people.
A corollary of this is that if anyone at an [X] gathering is asked “So, what got you into [X]?” and answers “I heard there’s a great community around [X]”, then that person needs to be given the cold shoulder and made to feel unwelcome, because otherwise the bubble of deniability is pierced and the lemon spiral will set in, ruining it for everyone else.
However, this is pretty harsh, and I’m not confident enough in this chain of reasoning to actually “gatekeep” people like this in practice. Does this ring true to you?
I highly recommend Val Plumwood’s essay “Tasteless: towards a food-based approach to death” for a “green-according-to-green” perspective.
Plumwood would turn the “deep atheism” framing on its head, by saying in effect “No, you (the rationalist) are the real theist”. The idea is that even if you’ve rejected Cartesian/Platonic dualism in metaphysics, you might still cling for historical reasons to a metaethical-dualist view that a “real monist” would reject, i.e. the dualism between the evaluator and the evaluated, or between the subject and object of moral values. Plumwood (I think) would say that even the “yin” (acceptance of nature) framing is missing the mark, because it still assumes a distinction between the one doing the accepting and the nature being accepted, positing that they simply happen to be aligned through some fortunate circumstance, rather than being one and the same thing.
It’s a question of whether drawing a boundary on the “aligned vs. unaligned” continuum produces an empirically-valid category; and to this end, I think we need to restrict the scope to the issues actually being discussed by the parties, or else every case will land on the “unaligned” side. Here, both parties agree on where they stand vis-a-vis C and D, and so would be “Antagonistic” in any discussion of those options, but since nobody is proposing them, the conversation they actually have shouldn’t be characterized as such.
On the contrary, I’d say internet forum debating is a central example of what I’m talking about.
This “trying to convince” is where the discussion will inevitably lead, at least if Alice and Bob are somewhat self-aware. Once the object-level issues have been set aside and the debate is now about whether Alice is really on Bob’s side, Bob will view this as just another sophisticated trick by Alice. In my experience, Bob-as-the-Mule can only be dislodged when someone other than Alice comes along, who already has a credible stance of sincere friendship towards him, and repeats the same object-level points that Alice made. Only then will Bob realize that his conversation with Alice had been Cassandra/Mule.
(Example I’ve heard: “At first I was indifferent about whether I should get the COVID vaccine, but then I heard [detestable left-wing personalities] saying I should get it, so I decided not to out of spite. Only when [heroic right-wing personality] told me it was safe did I get it.”)
#1 - I hadn’t thought of it in those terms, but that’s a great example.
#2 - I think this relates to the involvement of the third-party audience. Free speech will be “an effective arena of battle for your group” if you think the audience will side with you once they learn the truth about what [outgroup] is up to. Suppose Alice and Bob are the rival groups, and Carol is the audience, and:
Alice/Bob are SE/SE (Antagonist/Antagonist)
Alice/Carol are SF/IE (Guru/Rebel)
Bob/Carol are IF/SE (Siren/Sailor)
If this is really what’s going on, Alice will be in favor of the debate continuing because she thinks it’ll persuade Carol to join her, while Bob is opposed to the debate for the same reason. This is why I personally am pro-free-speech—because I think I’m often in the role of Carol, and supporting free speech is a “tell” for who’s really on my side.
Ten Modes of Culture War Discourse
I think this is not a great example because the virtues being extolled here are orthogonal to the outcome.
Would it still be possible to explain these virtues in a consequentialist way, or can only some virtues be explained in this way?
And consequentialists can choose to value their own side more than the other side, or to be indifferent between sides, so I’m not sure what the conflict between virtue ethics and consequentialism would be here.
The special difficulty here is that the two sides are following the same virtue-ethics framework, and come into conflict precisely because of that. So, whatever this framework is, it cannot be cashed out into a single corresponding consequentialist framework that gives the same prescriptions.
It could be that people regard the likelihood of being resurrected into a bad situation (e.g. as a zoo exhibit, a tortured worker em, etc.) as outweighing that of a positive outcome.
Aren’t there situations (at least in some virtue-ethics systems) where it’s fundamentally impossible to reduce (or reconcile) virtue ethics to consequentialism, because actions tending towards the same consequence are called both virtuous and unvirtuous depending on who does them? (Or, conversely, where virtuous conduct calls for people to do things whose consequences are in direct opposition.)
For example, the Iliad portrays both Achilles (Greek) and Hector (Trojan) as embodying the virtues of bravery/loyalty/etc. for fighting for their respective sides, even though Achilles’s consequentialist goal is for Troy to fall, and Hector’s is for that not to happen. Is this an accurate characterization of how virtue-ethics works? Is it possible to explain this in a consequentialist frame?
Thanks everyone for coming! Feedback survey here: https://forms.gle/w32pisonKdwK1bHJ6
It’s also nice to be able to charge up in a place where directly plugging in your device would be inconvenient or would risk theft, e.g. at a busy cafe where the only outlet is across the room from your table.
I want to say something like: “The bigger N is, the bigger a computer needs to be in order to implement that prior; and given that your brain is the size that it is, it can’t possibly be setting N=3↑↑↑↑↑3.”
Now, this isn’t strictly correct, since the Solomonoff prior is uncomputable regardless of the computer’s size, etc. - but is there some kernel of truth there? Like, is there a way of approximating the Solomonoff prior efficiently, which becomes less efficient the larger N gets?
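To spell out the kind of kernel I have in mind (a sketch on my part; the runtime-penalized prior below is an assumption, not something the question itself commits to): the Solomonoff prior weights each program p that reproduces (a string beginning with) the observed data x by its length,

M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

whereas Levin’s Kt complexity additionally charges for the running time t(p),

Kt(x) = \min_{p \,:\, U(p) = x} \left[ \ell(p) + \log_2 t(p) \right]

Unlike M, a Kt-style prior is computable in principle (via Levin search, albeit absurdly slowly), and any hypothesis whose evaluation takes about t steps pays an extra log_2 t bits of penalty. So if setting N to a huge value means the relevant hypotheses take on the order of N steps to simulate, then N = 3↑↑↑↑↑3 buries them under an astronomically large penalty, which seems like one way of cashing out the “your brain is only so big” intuition.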
On the proper date for solstice celebrations
Proof of posteriority: a defense against AI-generated misinformation
[Question] What is some unnecessarily obscure jargon that people here tend to use?
Through a panel, darkly: a case study in internet BS detection
I’m unsure whether it’s a good thing that LLaMA exists in the first place, but given that it does, it’s probably better that it leak than that it remain private.
What are the possible bad consequences of inventing LLaMA-level LLMs? I can think of three. However, #1 and #2 are of a peculiar kind where the downsides are actually mitigated rather than worsened by greater proliferation. I don’t think #3 is a big concern at the moment, but this may change as LLM capabilities improve (and please correct me if I’m wrong in my impression of current capabilities).
1. Economic disruption: LLMs may lead to unemployment because it’s cheaper to use one than to hire a human to do the same work. However, given that they already exist, it’s only a question of whether the economic gains accrue to a few large corporations or to a wider mass of people. If you think economic inequality is bad (whether per se or due to its consequences), then you’ll think the LLaMA leak is a good thing.
2. Informational chaos: You can never know whether product reviews, political opinions, etc. are actually genuine expressions of what some human being thinks, rather than AI-generated fluff created by actors with an interest in deceiving you. This was already a problem (e.g. paid shills), but with LLMs it’s much easier to generate disinformation at scale. However, this problem “solves itself” once LLMs are so easily accessible that everyone knows not to trust anything they read anyway. (By contrast, if LLMs are kept private, AI-generated content seems more trustworthy because it comes in a wider context where most content is still human-authored.)
3. Infohazard production: If e.g. there’s some way of building a devastating bioweapon using household materials, then it’d be really bad if LLaMA made this knowledge more accessible, or could discover it anew. However, I haven’t seen any evidence that LLaMA is capable of discovering new scientific knowledge that’s not in the training set, or that querying it to surface existing such knowledge is any more effective than using a regular search engine. But this may change with more advanced models.
If we’re going to make sense of living in a branching multiverse, then we’ll need to adopt a more fluid concept of personal identity.
Scenario: I take a sleeping pill that will make me fall asleep in 30 minutes. However, the person who wakes up in my bed the next morning will have no memory of that 30-minute period; his last memory will be of taking the pill.
If I imagine myself experiencing that 30-minute interval, intuitively it doesn’t at all feel like “I have less than 30 minutes to live.” Instead, it feels like I’d be pretty much indifferent to being in this situation—maybe the person who wakes up tomorrow is not “me” in the artificial sense of having a forward-looking continuity of consciousness with my current self, but that’s not really what I care about anyway. He is similar enough to current-me that I value his existence and well-being to nearly the same degree as I do my own; in other words, he “is me” for all practical purposes.
The same is true of the versions of me in nearby world branches. I can no longer observe or influence them, but they still “matter” to me. Of course, the degree of self-identification will decrease over time as they diverge, but then again, so does my degree of identification with the “me” many decades in the future, even assuming a single timeline.