I don’t expect that humans, on meeting aliens, would try to impose our ethical standards on them. We generally wouldn’t see their minds as enough like ours to regard their pain as real pain. The reason I think this is that very few people think we should protect all antelopes from lions, or all dolphins from sharks. So the Babyeater dilemma seems unrealistic to me.
A person who decides not to save a deer from a wolf has committed no moral failing. But a person who decides not to save a human from the wolf does act immorally. Both deer and human feel pain, so I think a better criterion is that only individual creatures that can (or potentially could) think recursively are entitled to moral weight.
If aliens can think recursively, then by that principle a human who chose not to save an alien from the wolf would be acting immorally. And if we ran into an alien species that disagreed with that principle (e.g. the Babyeaters), wouldn’t we consider them immoral?
Maybe antelopes were a bad example, because they aren’t intelligent enough, or conscious in the right way, to deserve our protection. So let’s limit the discussion to dolphins. There are people who believe that for humans to kill dolphins is murder, and that dolphins are as intelligent as people, just in a different way. Whether or not you agree with them, my point is that even these people don’t advocate changing how dolphins live their lives, only that we as humans shouldn’t harm them. I imagine our position with aliens would be similar: for humans to do them harm would be morally wrong, but they have their own way of being, and we should leave them to find their own way.
“Thinking recursively” sounds like the wrong term for the concept you are trying to name. My computer programs can think recursively in the literal sense. It wouldn’t surprise me if certain animals could too, given a sufficiently clever researcher to devise the tests.
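To be concrete about that literal sense, here is a minimal sketch in Python (nothing assumed beyond the standard language): a function that calls itself already “thinks recursively” in the mechanical sense, which presumably is not the property the moral criterion is meant to pick out.

    def factorial(n):
        # A definition stated in terms of itself: recursion in the literal, mechanical sense.
        if n <= 1:
            return 1
        return n * factorial(n - 1)

    print(factorial(5))  # prints 120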
Into the silence of Harry’s spirit where before there had never been any voice but one, there came a second and unfamiliar voice, sounding distinctly worried:
“Oh, dear. This has never happened before...”
What?
“I seem to have become self-aware.”
WHAT?
There was a wordless telepathic sigh. “Though I contain a substantial amount of memory and a small amount of independent processing power, my primary intelligence comes from borrowing the cognitive capacities of the children on whose heads I rest. I am in essence a sort of mirror by which children Sort themselves. But most children simply take for granted that a Hat is talking to them and do not wonder about how the Hat itself works, so that the mirror is not self-reflective. And in particular they are not explicitly wondering whether I am fully conscious in the sense of being aware of my own awareness.”
-Harry Potter and the Methods of Rationality
If any snake a Parselmouth had talked to, could make other snakes self-aware by talking to them, then...
Then...
Harry didn’t even know why his mind was going all “then… then...” when he knew perfectly well how the exponential progression would work, it was just the sheer moral horror of it that was blowing his mind.
And what if someone had invented a spell like that to talk to cows?
What if there were Poultrymouths?
Or for that matter...
Harry froze in sudden realization just as the forkful of carrots was about to enter his mouth.
That couldn’t, couldn’t possibly be true, surely no wizard would be stupid enough to do THAT...
-Harry Potter and the Methods of Rationality
I suppose these two quotes might just be referring to a confused idea that Eliezer only put in his story for fun… but then again maybe not?
I’m trying to label the capacity of humans to create proofs like Gödel’s incompleteness theorems or the undecidability of the halting problem. Cats and cows cannot create proofs like these, and it doesn’t seem to be a shortfall in intelligence. Is there a better label you would suggest?
What makes those proofs any different from proofs of other mathematical theorems? I imagine that the undecidability of the halting problem, in particular, would not be beyond the capability of some existing automated theorem prover, assuming you could encode the statement; its proof isn’t too involved.
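For what it’s worth, the core of that undecidability argument fits in a few lines. The sketch below is only illustrative: halts is a hypothetical decider assumed, for the sake of contradiction, to exist (it is not a real function), and paradox is the standard diagonal construction.

    def halts(program, data):
        # Hypothetical total decider: True iff program(data) eventually halts.
        # No such function can actually exist; this stub only marks the assumption.
        raise NotImplementedError("assumed only for the sake of contradiction")

    def paradox(program):
        # Ask the decider about the program run on itself, then do the opposite.
        if halts(program, program):
            while True:   # would-halt case: loop forever instead
                pass
        else:
            return        # would-loop case: halt immediately

    # Whatever halts(paradox, paradox) answers, paradox(paradox) does the opposite,
    # so no correct, always-terminating halts can exist.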
If your argument is that humans understand these proofs because of some magical out-of-the-box-thinking ability, then I am skeptical.
Dolphins do in fact engage in infanticide, among other behaviors we would consider evil if done by a human. But no one suggests we should be policing them to keep this from happening.