There are concepts which are hardly explainable (given our current understanding of them). Consciousness is one of them. Qualia. Subjective experience. The thing which separates p-zombies from non-p-zombies.
If you don’t already understand what I mean, there is little chance I would be able to explain it.
As for the assumption, I agree that it is implausible, yet possible. Do you consider your computer conscious?
And no doubt the scenarios you mention are more plausible.
Are (modern) computers intelligent but not conscious, by your lights?
If so, then there’s a very important thing you might provide some insight into, which is what sort of observations humans could make of an alien race, that would lead to us thinking that they’re intelligent but not conscious.
Modern computers can be programmed to do almost any task a human can do, including very high-level ones, so sort of, yes, they are (and maybe sort of conscious, if you are willing to stretch the concept that far).
Some time ago we could program computers to execute some algorithm which solves a problem; now we have machine learning and don’t have to provide an algorithm for every task; but we still have different machine learning algorithms for different areas/meta-tasks (computer vision, classification, time series prediction, etc.). When we build systems that are capable of solving problems in all these areas simultaneously—and combining the results to reach some goal—I would call such systems truly intelligent.
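The contrast between supplying an algorithm and supplying examples can be sketched in a few lines. This is only a toy illustration (the “task”, the threshold rule, and the data are all invented for the example):

```python
# Old style: the programmer supplies the algorithm explicitly.
def is_large_explicit(x):
    # Hand-written rule: the human encodes the decision boundary directly.
    return x >= 10

# ML style: the programmer supplies labeled examples; the boundary is learned.
def fit_threshold(examples):
    # examples: list of (value, label) pairs.
    # Learn a threshold as the midpoint between the largest value
    # labeled False and the smallest value labeled True.
    trues = [v for v, label in examples if label]
    falses = [v for v, label in examples if not label]
    return (max(falses) + min(trues)) / 2

threshold = fit_threshold([(2, False), (7, False), (12, True), (20, True)])

def is_large_learned(x):
    return x >= threshold

print(is_large_explicit(15), is_large_learned(15))  # True True
```

The learner here is trivial, but the point it illustrates is the one in the comment: the second classifier was never told its rule, only shown examples, and a different dataset would yield a different rule without any reprogramming.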
Having said that, I don’t think I need an insight or explanation here, because I mostly agree with you and jacob_cannel: it’s likely that intelligence and unconsciousness are logically incompatible. Yet as long as the problem of consciousness is not fully resolved, I can’t be certain, and therefore assign a non-zero probability to the conjunction being possible.
“can be programmed to” is not the same thing as intelligence. It requires external intelligence to program it. Using the same pattern, I could say that atoms are intelligent (and maybe sort-of conscious), because for almost any human task, they can be rebuilt into something that does it.
If you don’t know what you’re talking about when you say “consciousness”, your premise becomes incoherent.
I don’t know whether the statement (intelligence ⇒ consciousness) is true, so I assign a non-zero probability to it being false.
Suppose I said “Assume NP = P”, or the contrary, “Assume NP != P”. One of those statements is logically false (in the same way 1 = 2 is false). Still, while you can dismiss an argument which starts “Assume 1 = 2”, you probably shouldn’t do the same with the NP ones, even though one of them is, strictly speaking, logical nonsense.
Also a few words about concepts. You can explain a concept using other concepts, and then explain the concepts you have used to explain the first one, and so on, but the chain should end somewhere, right? So here it ends on consciousness.
1) I know that there is a phenomenon (that I call ‘consciousness’), because I observe it directly.
2) I don’t know of a decent theory that explains what it really is and what properties it has.
3) To my knowledge, nobody does. That is why the problem of consciousness is labeled ‘hard’.
Too many people, I’ve noticed, just pick the theory of consciousness they consider best and then become overconfident in it. Not a good idea, given how little data there is.
So even if the most plausible theory says (intelligence ⇒ consciousness) is true, you shouldn’t immediately dismiss everything based on the opposite assumption. The Bayesian way is to integrate over all possible theories, weighted by their probabilities.
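That “integrate over all theories” step is just the law of total probability. A minimal sketch, with entirely invented numbers (the theories and credences are placeholders, not claims about actual views):

```python
# Hypothetical credences in three rival theories of consciousness.
# They are mutually exclusive and exhaustive, so they must sum to 1.
theories = {
    "A": 0.6,  # P(theory A) — under A, intelligence implies consciousness
    "B": 0.3,  # P(theory B) — under B, the implication holds only sometimes
    "C": 0.1,  # P(theory C) — under C, the implication fails
}

# P(intelligence => consciousness | theory), again invented for illustration.
implication_given_theory = {"A": 1.0, "B": 0.5, "C": 0.0}

# Law of total probability:
# P(claim) = sum over i of P(theory_i) * P(claim | theory_i)
p_claim = sum(p * implication_given_theory[t] for t, p in theories.items())
print(p_claim)  # 0.6*1.0 + 0.3*0.5 + 0.1*0.0 = 0.75
```

Even though the most plausible theory (A) makes the implication certain, the mixture assigns it only 0.75, leaving a non-trivial probability to the “intelligent but not conscious” possibility — which is exactly the point being made above.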
Ok, fair enough.
So, what you’re really saying is that the aliens lack some indefinable trait that the humans consider “moral”, and the humans lack a definable trait that the aliens consider moral.
This is a common sci-fi scenario, explored elsewhere on the site; see e.g. Three Worlds Collide.
Your specific scenario seems highly improbable to me: humans are considered immoral, yet somehow miraculously they created something that is considered moral, and its response is to hide from the inferior, immoral civilization.