> Catholicism never would have collected the intelligence necessary to invent a nuke. Their worldview was not compatible with science. It was an inferior organizing principle. (“inferior” meaning less capable of coordinating a collective intelligence needed to build nukes.)
>
> You believe intelligence is such a high good, a high virtue, that it would be hard for you to see how intelligence is deeply and intricately causal with the destruction of life on this planet, and therefore the less intelligent, less destructive religions actually have more ethical ground to stand on, even though they were still fairly corrupt.
>
> But it’s a straightforward comparison.
>
> Medieval “dark” ages = almost no technological progress, very little risk of blowing up the planet in any way; relatively, not inspiring, but still—kudos for keeping us from hurtling toward extinction, and at this point, we’re fine with rewarding this even though it’s such a “low bar”
>
> Today = massive, exponential technological progress, nuclear war could already take us all out, but we have a number of other x-risks to worry about. And we’re so identified with science and tech that we aren’t willing to stop, even as we admit OUT LOUD that it could cause extinction-level catastrophe. This is worse than the Crusades by a long shot. We’re not talking about sending children to war. We’re talking about the end of children. Just no more children. This is worse than suicide cults that claim we go to heaven as long as we commit suicide. We don’t even think what we’re doing will necessarily result in heaven, and we do it anyway. We have no evidence we can upload consciousnesses at all. Or end aging and death. Or build a friendly AI. At least the Catholics were convinced a very good thing would happen by sending kids to war. We’re not even convinced, and we are willing to risk the lives of all children. Do you see how this is worse than the Catholics?
I agree that religions mostly don’t cause x-risk, because (for the most part) they’re not sufficiently good at organizing intellectual endeavor. (There might be exceptions to that generalization, and they can co-opt the technological products of other organizational systems.)
I agree that x-risk is an overriding concern, in terms of practical consequences. If any given person does tons of good things, and also contributes to x-risk, it’s easy for the x-risk contribution to swamp everything else in their overall impact.
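To make that swamping effect concrete, here’s a toy expected-value calculation. All the numbers are invented for illustration; the relative magnitudes are the point, not the values:

$$10^{3} \;+\; \underbrace{10^{-4}}_{\text{added x-risk}} \times \underbrace{(-10^{15})}_{\text{value lost to extinction}} \;=\; 10^{3} - 10^{11} \;\approx\; -10^{11}$$

where the first term stands in for a lifetime of good works. However large that first term is (within reason), a tiny probability increment on an astronomically bad outcome dominates the sum.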
But yeah, I object to calling a person or an institution more ethical because they’re too weak to do (comparatively) much harm.
I care about identifying which people and institutions are more ethical so that 1) I can learn ethics from them and 2) I can defer to them.
If a person or institution avoids causing harm because they’re weak, they’re mostly not very helpful to learn from (they can’t help me figure out how to wield power ethically, at least), and deferring to them or otherwise empowering them is actively harmful, because doing so removes the very feature that was keeping them (relatively) harmless.
A person who is dispositionally a bully but physically weak, and who would immediately start acting like a bully if he were bigger or had more social power, is not ethical on account of his not bullying people. An AGI that is “aligned” only until it is much more powerful than the rest of the world is not aligned. A church that does (relatively) less harm only until it is powerful enough to command armies or nukes is likewise not very trustworthy.
To reason well in these domains, I need a concept of ethics that can be discussed independently of power. And therefore I need to be able to evaluate ethics independently of actual harm caused.
Not just “how much harm does this institution do?” but “how much harm would it do, in other circumstances?”. I might want to ask “how does this person or institution behave, if given different levels or different kinds of power over the world?”
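One way to sketch that formally (my own toy formalization, not a standard definition): score an agent by its counterfactual harm profile across power levels, not by its realized harm:

$$\text{untrustworthiness}(A) \;=\; \max_{p \in P} \, \mathbb{E}\big[\text{harm}(A, p)\big] \quad\text{rather than}\quad \mathbb{E}\big[\text{harm}(A, p_{\text{actual}})\big]$$

where $P$ ranges over the levels and kinds of power $A$ might plausibly be given. The weak bully above scores badly on the left-hand criterion even though the right-hand term, his realized harm, is near zero.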
Given that criterion:
The Catholic Church causes less overall harm than OpenAI. (I think; as always, it’s hard to evaluate.)
It causes less overall harm than the US government.
It’s unclear to me if it causes more or less harm than Coca-Cola.
Harm-caused is certainly relevant evidence about the ethics of an institution, but not most of the question.
Considering the comparison with the US government:
The US government seems to me to be overall more robust to the stresses imposed by power than the Catholic Church is.
I think the organizations are probably about equally trustworthy in terms of how much you can rely on them to follow their agreements when you don’t have particular power to enforce those agreements?
I think they’re about equally likely to cover up the illegal or immoral actions of their members?
I would prefer that the US government and the Catholic hierarchy have their current relative distributions of power rather than have them reversed. I don’t think that the world would get better if the Catholic hierarchy were the leading world superpower instead of the US.
As a shorthand for that, I might say that the US government, while not ethical by any means, is more ethical than the Catholic Church.
There is a bit of an out here where people or institutions that do less harm because they are less powerful, and which are less powerful by their own choice, might indeed be ethically superior. They might be safe to give more power to, because they would not accept the power granted, and they might be worth learning from.
I would be interested in examples of religious institutions declining power granted to them.
From my read of history, the Catholic hierarchy has never done this?
> We’re not even convinced, and we are willing to risk the lives of all children. Do you see how this is worse than the Catholics?
Absolutely. I definitely think there’s something awful about being willing to risk the future, and even more awful about being willing to risk the future for no particular ideal.
I’d probably agree that that’s worse than Catholicism. Catholicism seems unlikely to me to metastasize into an actively omnicidal worldview. Though I think if it were more powerful and relevant, and its incentives were somewhat different, it would totally risk omnicide in a holy war against heresy (extrapolating from the long history of Christian holy wars causing great destruction short of omnicide, because omnicide wasn’t technologically on the table yet).
But, I don’t know who you’re referring to when you say “we”. It sounds like something like “moderns” or “post-enlightenment societies” or maybe “cultures based on ‘scientific materialism’”?
I mostly reject those charges. Mostly it looks to me like there are a small number (~10,000 to 100,000) of people who are willing to risk all the children, unilaterally, while most people broadly oppose that, to the extent that they’re informed about it.
Almost everyone does oppose the destruction of all life (though by their revealed preferences, almost everyone is fine with subsidising factory farming).
> You believe intelligence is such a high good, a high virtue, that it would be hard for you to see how intelligence is deeply and intricately causal with the destruction of life on this planet, and therefore the less intelligent, less destructive religions actually have more ethical ground to stand on, even though they were still fairly corrupt.
I mean, it’s obviously hard for me to say definitively if I have a cultural blindspot.
But, FYI, while I would say that intelligence is “a good”, I am unlikely to call it a “virtue” or a “high good” (which connotes a moral good, as opposed to, e.g., an economic good).
Intelligence is a force multiplier. More intelligent agents are more capable. They do a better job of doing whatever it is that they do.
And yeah, it’s pretty obvious to me that “intelligence is deeply and intricately causal with the destruction of life on this planet”. Humans might destroy the biosphere, specifically by dint of their collective intelligence. No other species is even remotely in the running to do that, except for the AIs we’re rushing forward to create. If you remove the intelligence, you don’t get the omnicide.
I think you mean something more specific here. Not just that destroying all life is a big action, and so is only possible with a big force multiplier, but that intelligence is the motivating factor, or actively obscures moral truth, or something.
What do you mean here?
> and therefore the less intelligent, less destructive religions actually have more ethical ground to stand on, even though they were still fairly corrupt.
Yeah, I don’t buy this, for the reasons outlined above.
If you’re less destructive because you’re weak, you don’t get “moral points”. You get “moral points” based on how you behave, relative to the options and incentives presented to you.