The alternative theory is that political bias has gotten much greater, and the acceptable political beliefs are strongly in the direction of trusting some groups and not trusting others. By that theory, progressive movements are trusted more because they have better press. Realizing that you can increase trust by creating worker co-ops would then be an example of Goodhart’s Law—optimizing for “being trusted” independently of “being trustworthy” is not a worthy goal.
Jiro
I am not making the simple argument that religion makes for better societies, and I can see you’re totally confused here.
If all you’re saying is that at least one thing was better in at least one religious society in at least one era, then I can’t disagree, but there isn’t much to disagree with either.
And I think you’re making an excessively fine distinction if you’re not arguing that religion makes for better societies, but you are arguing that religion doesn’t damage society. (Unless you think religion keeps things exactly the same?)
Scott once had a post about how it’s hard to get advice only to the people who need it.
Sam Bankman-Fried may have lied too much (although the real problem was probably goals that conflict with ours) but the essay here is aimed at the typical LW geek, and LW geeks tend not to lie enough.
I would think that if your “common ground” with someone is something 99% of humans agree with and which is absurdly broad anyway, you haven’t really found common ground.
In the premodern Christian medieval context, slavery was all but completely forbidden
It feels like you’re gerrymandering a time, place, and scenario to make the answer come out the way you want. The medieval era was not the only era in which Christianity was powerful, and you’re handwaving the rest away by saying that Christianity was an arm of the state after that (and ignoring the period before, when the Romans kept slaves even after they became Christian). You’re also including or excluding Islam depending on whether it’s convenient for your argument (Muslims don’t count when they keep slaves, but they count as an example of religion being a source of learning). And you’re handwaving away serfdom: yes, it isn’t slavery, but it’s still a pretty big violation of human rights that religious people practiced back then and we don’t practice today.
(For that matter, I’m not convinced that “we only enslave pagans” is much of an excuse. Modern secular society doesn’t enslave pagans, after all, so we’re still better than them.)
How can the church hold back something which doesn’t yet exist?
By making it dangerous for it to come into existence.
it is fair to say that religious people in the industrial era have the desire to hold back science. But they simply don’t have the power anymore, they are no longer the center of learning.
There’s a big gap between “no power worth speaking of” and “not in the position of the Pope in 1500”. For instance, religion had enough power to be a serious obstacle to the acceptance of evolution, even if in the 21st century the remaining creationists are a joke.
We can happily and easily disprove the idea that Judeo-Christian cosmology “damages society” by comparing the modern secular society developing after 1500AD with that of the Christian society before it.
-
You’re cherrypicking features of the society. I could respond by pointing to feudalism or slavery, for instance. Having less hospitality but no slavery seems overall positive.
-
I’m pretty sure you’re exaggerating what hospitality requires. If it was actually required to feed and house all beggars who come to your door, people would be overwhelmed by beggars.
-
“Judeo-Christian” here doesn’t make sense. You’ll have to at least include Islam. And even then, I wouldn’t say that non-Judeo-Christian-Islamic religions made the society especially horrible. Ancient China and Japan weren’t great, but in ways comparable to how “Judeo-Christian” societies weren’t great.
-
“Judeo-Christian” cosmology “causes problems” by holding science back. Obviously, ancient societies had less science than we do, so this is perfectly consistent with reality.
-
He asks “How interested are you in Widgets?” He has learnt from previous job interviews that, if he answers honestly, the interviewer will think he is any of lying, insane, or too weird to deal with, and not hire him, even though this is not in the best financial interests of the company, were they fully informed.
By the standard “intentionally or knowingly cause the other person to have false beliefs”, answering ‘honestly’ would be lying, and answering in a toned down way would not (because it maximizes the truth of the belief that the interviewer gets).
In Materialist Conceptions of God, I wrote about how one can interpret religious claims as hyperstitions, beliefs that become true as a result of you believing in them.
While this works for some religious claims, it doesn’t work for many of the most important ones. If heaven doesn’t exist, believing in it, and even acting as though you want to go there, won’t get you there. And believing that the world was created in seven literal days, and acting thus, not only doesn’t cause the world to have been created in seven literal days, it leads you to damage the society around you.
Here’s how easy it is to run an LLM evaluation of a debate.
Running it through the LLM is easy.
Refuting the argument that you’re using the LLM’s output for takes longer, though.
The motte and bailey is:
“All I’m asking you to do is to run this through an LLM”.
But
“Actually, that’s not all I’m asking you to do. You also need to refute this whole post.”
And your stated reason for not responding to any of it is that it’s inconvenient.
It’s inconvenient to reply to lots of things, even false things. I probably wouldn’t reply to a homeopath or a Holocaust denier, for instance, especially not to refute the things he says.
When someone in my family expresses their concern that Covid-19 vaccines are causing harm to the population, I can respond by: “I also think that it is very important to seriously monitor the adverse health effects of all drugs, in the case of [...]”.
If they said that the Jews are drinking the blood of Christian babies, would you reply that of course you think it’s important to keep babies safe?
Your description of finding common ground is within a hairsbreadth of being concern trolling.
You’re going heavy on the motivated reasoning here. The reason people don’t want to respond to you is not that you’re a pure genius; it’s that responding isn’t worth the effort.
You’re also doing a motte and bailey on exactly what argument you’re trying to make. If all you’re saying is “sending X through an LLM produces Y”, then yes, I could just try an LLM. But that’s not all that you’re saying. You’re trying to draw conclusions from the result of the LLM. Refuting those conclusions is a lot of effort for little benefit.
The challenge was to simply run the argument provided above through your own LLM and post the results. It would take about 30 seconds.
If you claim that “Not one of you made a case. Not one of you pointed to an error.”, that isn’t going to be resolved by running the argument through an LLM. Pointing to an error means manually going through your argument and trying to refute it.
Not one of you made a case. Not one of you pointed to an error. And yet the judgment was swift and unanimous. That tells me the argument was too strong, not too weak. It couldn’t be refuted, so it had to be dismissed.
Believing that your post was voted down because it was too strong is very convenient for you, which makes that belief likely motivated reasoning.
It’s a lot easier to write BS than to refute it, so people don’t usually want to bother exhaustively analyzing why BS is BS.
I support putting bank robbers in jail. Yet I refuse to support anything that would put myself in jail. I’m clearly supporting it in an imbalanced way that is beneficial to myself.
Even if we make the extremely conservative assumption that their deaths are only one 600,000th as bad, in terms of suffering, as human deaths, insect suffering is still obviously the worst thing in the world.
But you pulled the number 600,000 out of thin air.
People, when asked to name a small number or a large number, will usually name numbers within a certain range, and think “well, that number sounds good to me.” That doesn’t mean the number really is small or large enough for the purpose. It may be in normal situations ($600,000 can buy a lot), but once you do calculations with it, the fact that people name numbers within certain ranges lets you manipulate the result: start from a number that merely sounds “conservative” and you can reach an absurd conclusion.
If it were, oh, 10,000,000,000,000,000,000,000 (10^22) instead, your conclusion would be very different. The fact that not many people will pick 10^22, while you can conclude that insect suffering is important based on 600,000, says more about how people pick numbers than it does about insect suffering.
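It’s easy to check how completely the conclusion hinges on the chosen conversion factor. A minimal sketch (the population figures here are rough outside estimates I’m supplying for illustration, not numbers from the original argument):

```python
# Rough illustration: whether "insect suffering dominates" is decided
# entirely by the chosen conversion factor, not by any empirical fact
# about insects.

HUMANS = 8e9     # assumed rough world population
INSECTS = 1e19   # assumed rough count of insects alive at once

def insect_suffering_in_human_equivalents(discount):
    """Total insect suffering in human-equivalents, given that one
    insect counts for 1/discount of a human."""
    return INSECTS / discount

# With the "conservative" factor of 600,000, insects dominate:
print(insect_suffering_in_human_equivalents(600_000) > HUMANS)  # True (~1.7e13 vs 8e9)

# With an equally arbitrary factor of 1e22, humans dominate:
print(insect_suffering_in_human_equivalents(1e22) > HUMANS)     # False (~1e-3 vs 8e9)
```

Under these assumed populations, any factor smaller than INSECTS/HUMANS (about 1.25e9) makes insects “dominate”, so the label “conservative” attached to 600,000 is doing all the work.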
People will pay as much to save 2,000 birds as 20,000 and 200,000 birds.
When you ask the question “what would you pay to save 2000 birds”, the fact that your question contains the number 2000 is information about how many birds it is important and/or possible to save. If you ask the question with different numbers, each version of the question provides different information, and therefore should produce inconsistent answers (unless it’s a poll question specifically designed to test different numbers, but most people won’t take that into account).
One refers to morality emerging spontaneously from intelligence—which I argue is highly unlikely without a clear mechanism.
That’s not emerging artificially. That’s emerging naturally. “Emerging artificially” makes no sense here, even as a concept being refuted.
If you think it could emerge artificially, you need to explain the mechanism, not just assert the possibility.
...
If you hardwire morality as a primary goal, then yes, the AGI might be moral.
I don’t see you explaining any mechanism in the second quote. (And how is it possible for something to emerge artificially anyway?)
Your comment reads like it’s AI generated. It doesn’t say much, but damn if it doesn’t have a lot of ordered and numbered subpoints.
Food gets used up quickly, but it takes a long time to use up housing, so banning new housing really isn’t comparable to banning the production of food.
I think the advice works better as “if it’s a social situation, and the situation calls for what you consider to be a lie, don’t let that stop you.” You do not have to tell someone that you’re not feeling fine when they ask how you’re doing. You do not need to tell them that the color they painted their house is actually really ugly. And you certainly shouldn’t go to a job interview, get asked for your biggest weakness, and actually state your biggest weakness.
If someone reads the advice and thinks “Lying, that’s an idea! I’ll use it every time I can,” they’ve overcorrected far too much.