I don’t know whether the statement (intelligence ⇒ consciousness) is true, so I assign a non-zero probability to it being false.
Suppose I said “Assume P = NP”, or the contrary, “Assume P ≠ NP”. One of those statements is logically false (in the same way that 1 = 2 is false). Still, while you can dismiss an argument that starts with “Assume 1 = 2”, you probably shouldn’t do the same with the P vs NP ones, even though one of them is, strictly speaking, logical nonsense.
Also, a few words about concepts. You can explain a concept using other concepts, then explain the concepts you used to explain the first one, and so on, but the chain has to end somewhere, right? In my case it ends at ‘consciousness’.
1) I know that there is a phenomenon (that I call ‘consciousness’), because I observe it directly.
2) I don’t know of a decent theory that explains what it really is and what properties it has.
3) To my knowledge, nobody actually has one. That is why the problem of consciousness is labeled ‘hard’.
Too many people, I’ve noticed, just pick the theory of consciousness they consider best and then become overconfident in it. Not a great idea, given how little data there is.
So even if the most plausible theory says (intelligence ⇒ consciousness) is true, you shouldn’t immediately dismiss everything that is based on the opposite assumption. The Bayesian way is to integrate over all possible theories, weighted by their probabilities.
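To make that concrete, here is a minimal sketch of that kind of averaging. The theory names and all the numbers are made-up assumptions purely for illustration, not claims about what the actual weights should be:

```python
# Minimal sketch of "integrating over theories": hypothetical theories of
# consciousness, each with a prior weight and the probability it assigns to
# the claim "intelligence implies consciousness". All numbers are made up.
theories = {
    # name: (prior weight, P(intelligence => consciousness | theory))
    "functionalism":           (0.4, 0.9),
    "biological_naturalism":   (0.3, 0.2),
    "something_else_entirely": (0.3, 0.5),
}

# Instead of betting everything on the single most plausible theory,
# take the mixture weighted by how probable each theory is.
p_claim = sum(weight * p_given_theory
              for weight, p_given_theory in theories.values())

print(f"P(intelligence => consciousness) = {p_claim:.2f}")  # 0.57 with these numbers
```

With these (made-up) weights, the answer is neither 0 nor 1, which is exactly why arguments starting from the less-favored assumption shouldn’t be dismissed outright.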
So, what you’re really saying is that the aliens lack some indefinable trait that the humans consider “moral”, and the humans lack a definable trait that the aliens consider moral.
This is a common sci-fi scenario, explored elsewhere on the site; see e.g. Three Worlds Collide.
Your specific version seems to me highly improbable: humans are considered immoral, yet they have somehow, miraculously, created something that is considered moral, and the response is to hide from the inferior, immoral civilization.
If you don’t know what you’re talking about when you say ‘consciousness’, your premise becomes incoherent.
Ok, fair enough.