Giving up this new technology would be analogous to living like a quaker today
Perhaps you meant “Amish” or “Mennonite” rather than “quaker”?
Nice article all around!
Another error that conspiracy theorists make is to “take the org chart literally”.
CTists attribute superhuman powers to the CIA, etc., because they suppose that decision-making in these organizations runs exactly as shown on the chart. Each box, they suppose, takes in direction from above and distributes it below just as infallibly as the lines connecting the boxes are drawn on the chart.
If you read org charts literally, it looks like leaders at the top have complete control over everything that their underlings do. So of course the leader can just order the underlings not to defect or leak or baulk at tasks that seem beyond the pale!
This overly literal reading of the org chart obscures the fact that all these people are self-interested agents, perhaps with only a nominal loyalty to the structure depicted on the chart. But many CTists miss this, because they read the org chart as if it were a flowchart documenting the dependencies among subroutines in a computer program.
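To make the contrast concrete, here is a minimal sketch (the names and numbers are entirely hypothetical, just to illustrate the analogy): the literal reading treats a subordinate as a subroutine that executes whatever it is handed, while the realistic reading treats a subordinate as a self-interested agent who may balk, leak, or defect.

```python
import random

# "Org chart as flowchart": a subordinate is a subroutine that executes
# any order it receives, infallibly and with no preferences of its own.
def subordinate_as_subroutine(order: str) -> str:
    return f"executed: {order}"

# The realistic reading: a self-interested agent with only nominal
# loyalty, who weighs the order against its own interests.
def subordinate_as_agent(order: str, loyalty: float) -> str:
    if random.random() > loyalty:  # loyalty to the chart is only nominal
        return random.choice(["balked", "leaked", "defected"])
    return f"executed: {order}"

# In a chain of such agents, the leader's effective control decays with
# every link, unlike in the flowchart picture.
print(subordinate_as_subroutine("keep this quiet"))
print(subordinate_as_agent("keep this quiet", loyalty=0.7))
```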
LW is academic philosophy, rebooted with better people than Plato as its Pater Patriae.
LW should not be comparing itself to Plato. It’s trying to do something different. The best of what Plato did is, for the most part, orthogonal to what LW does.
You can take the LW worldview totally onboard and still learn a lot from Plato that will not in any way conflict with that worldview.
Or you may find Plato totally useless. But it won’t be your adoption of the LW memeplex alone that determines which way you go.
Also, your empathy reassures them that you will be ready with help that is genuinely helpful if they do later want it.
I agree that a rich person won’t tolerate disposable products where more durable versions are available. Durability is a desirable thing, and people who can afford it will pay for it when it’s an option.
But imagine a world where washing machines cost as much as they do in our world, but all washing machines inevitably break down after a couple years. Durable machines just aren’t available.
Then, in that world, you have to be wealthier to maintain your washing-machine-owning status. People who couldn’t afford to repurchase a machine every couple of years would learn to do without. But people who could afford it would consider it an acceptable cost of living in the style to which they have become accustomed.
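To put hypothetical numbers on it (the prices and lifespans here are made up purely for illustration): suppose a machine costs $600 in both worlds.

$$\frac{\$600}{12\ \text{years}} = \$50 \text{ per year} \qquad \text{vs.} \qquad \frac{\$600}{2\ \text{years}} = \$300 \text{ per year}$$

In the disposable-only world, owning a washing machine is a recurring expense several times larger, which is exactly what pushes the ownership threshold further up the income distribution.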
Did you really need to say that you’d be brief? Wasn’t it enough to say that you’d omit needless words? :)
It seems unlikely that joining a specific elite is terminally valuable as such, except to ephemeral subagents that were built for instrumental reasons to pursue it.
It seems quite likely that people seek to join whatever elite they can as a means to some more fundamental ends. Those of us who aren’t driven to join the elite are probably satisfying our hunger to pursue those more fundamental ends in other ways.
For example, people might seek elite status in part to win security against bad fortune or against powerful enemies. But it might seem to you that there are other ways to be more secure against these things. It might even seem that being elite would leave you more exposed to such dangers.
For example, if you think that the main danger is unaligned AI, then you won’t think of elite status as a safe haven, so you’ll be less motivated to seek it. You’ll find that sense of security in doing something else that seems to address that danger better.
I’ve played lot of role-playing games back in my day and often people write all kinds of things as flavour text. And none of it is meant to be taken literally.
This line gave me an important insight into how you were thinking.
The creators were thinking of it as a community trust-building exercise. But you thought that it was intended to be a role-playing game. So, for you, “cooperate” meant “make the game interesting and entertaining for everyone.” That paints the risk of taking the site down in a very different light.
And if there was a particular goal, rather than our being expected to decide for ourselves what the goal was, then maybe it would have made sense to state that goal clearly?
But the “role-playing game” glasses that you were wearing would have (understandably) made such a statement look like “flavor text”.
I wrote a LessWrong post that addressed this: What Bayesianism Taught Me
Typo: “And that’s why the thingies you multiply probabilities by—the thingies that you use to weight uncertain outcomes in your imagination,”
Here, “probabilities” should be “utilities”.
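For reference, the corrected sentence is just describing the expected-utility formula, in which the weights that get multiplied by the probabilities are the utilities:

$$\mathbb{E}[U] = \sum_i p_i \, u_i$$

where $p_i$ is the probability of outcome $i$ and $u_i$ is its utility, i.e., the “thingy” that the probability gets multiplied by.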
Trying the pill still makes you the kind of person who tries pills. Not trying really does avoid that.
You may be interpreting “signaling” in a more specific way than I intended. You might be thinking of the kind of signaling that is largely restricted to status jockeying in zero-sum status games.
But I was using “signaling tool” in a very general sense. I just mean that you can use the signaling tool to convey information, and that you and your intended recipients have common knowledge about what your signal means. In that way, it’s basically just a piece of language.
As with any piece of language, the fact that it signals something does place restrictions on what you can do.
For example, you can’t yell “FIRE!” unless you are prepared to deal with certain consequences. But if the utterance “FIRE!” had no meaning, you would be freer, in a sense, to say it. If the mood struck you, you could burst out with a loud shout of “FIRE!” without causing a big commotion and making a bunch of people really angry at you.
But you would also lack a convenient tool that reliably brings help when you need it. This is a case where I think that the value of the signal heavily outweighs the restrictions that the signal’s existence places on your actions.
Good points.
I’m having a hard time separating this from the ‘offense’ argument that you’re not including.
I agree that part of offense is just “what it feels like on the inside to anticipate diminished status”.
Analogously, part of the pain of getting hit by a hammer is just “what it feels like on the inside to get hit by a hammer.”
However, in both cases, neither the pain nor the offense is just passive internal information about an objective external state of affairs. They include such information, but they are more than that. In particular, in both cases, they are also what it feels like to execute a program designed by evolution to change the situation.
Pain, for example, is an inducement to stop any additional hammer blows and to see to the wounds already inflicted. More generally, pain is part of an active program that is interacting with the world, planning responses, anticipating reactions to those responses, and so on. And likewise with offense.
The premise of my distinction between “offense” and “diminished status” is this: I maintain that we can conceptually separate the initial and unavoidable diminished status from the potential future diminished status.
The potential future diminished status depends on how the offendee responds. The emotion of offense is heavily wrapped up in this potential future and in what kinds of responses will influence that future. For that reason, offense necessarily involves the kinds of recursive issues that Katja explores.
In the end, these recursive issues will have to be considered. (They are real, so they should ultimately be reflected in our theory.) But it seems like it should be possible to see what initial harm, if any, occurs before the recursion kicks in.
In the examples that occur to me, both sides agree that mocking the culture in question would be bad. They just disagree about whether the person accused of CA is doing that.
Do you have in mind a case in which the accused party defended themselves by saying that the appropriated culture should be mocked?
That seems like a different kind of dispute that follows a different rhetorical script, on both sides. For example, critics of Islam will be accused of Islamophobia, not cultural appropriation. And people accused of CA are more likely to defend themselves by saying that they’re honoring the culture. They will not embrace the claim that they are mocking it.
I’m not contesting the claim that mockery can be good in some cases. But that point isn’t at the crux of the arguments over cultural appropriation that I’ve seen. Disputes where the goodness of mockery is at the crux will not be of the kind that I’m considering here.
I am not asserting that those aspects of “westward” apply to “factward”.
Analogies typically assert a similarity between only some, not all, aspects of the two analogous situations. But maybe those aspects of “westward” are so salient that they interfere with the analogy.
suppose that I agree with Sam Harris that ~all humans find the same set of objective facts to be morally motivating. But then it turns out that we disagree on just which facts those are! How do we resolve this disagreement? We can hardly appeal to objective facts, to do so…
I don’t follow. Sam would say (and I would agree) that which facts which humans find motivating (in the limit of ideal reflection, etc.) is an empirical question. With regard to each human, it is a scientific question about that human’s motivational architecture.
It’s true that a moral realist could always bridge the is–ought gap by the simple expedient of converting every statement of the form “I ought to X” to “Objectively and factually, X is what I ought to do”.
But that is not enough for Sam’s purposes. It’s not enough for him that every moral claim is or is not the case. It’s not enough that moral claims are matters of fact. He wants them to be matters of scientific fact.
On my reading, what he means by that is the following: When you are pursuing a moral inquiry, you are already a moral agent who finds certain objective and scientifically determinable facts to be motivating (inducing of pursuit or avoidance). You are, as Eliezer puts it, “created already in motion”. Your inquiry, therefore, is properly restricted just to determining which scientific “is” statements are true and which are false. In that sense, moral inquiry reduces entirely to matters of scientific fact. This is the dialectical-argumentation point of view.
But his interlocutors misread him to be saying that every scientifically competent agent should find the same objective facts to be motivating. In other words, all such agents should [edit: I should have said “would”] feel compelled to act according to the same moral axioms. This is what “bridging the is–ought gap” would mean if you confined yourself to the logical-argumentation framework. But it’s not what Sam is claiming to have shown.
If you’re trying to convince me to do some thing X, then you must want me to do X, too. So we must be at least that aligned.
We don’t have to be aligned in every regard. And you needn’t yourself value every consequence of X that you hold up to me to entice me to X. But you do have to understand me well enough to know that I find that consequence enticing.
But that seems to me to be both plausible and enough to support the kind of dialectical moral argumentation that I’m talking about.
I just saw this post today. I was a little worried that I’d somehow subconsciously stolen this concept and its name from you until I saw your link to my comment. At any rate, you definitely described it more memorably than I did.