I feel that dfranke’s questions make all kinds of implicit assumptions about the reader’s worldview, which make them difficult for most computationalists to answer. I’ve prepared a different list—I’m not really interested in answers, just an opinion as to whether they’re reasonable questions to ask people or whether they only make sense to me.
But you can answer them if you like.
For probability estimates, I’m talking about subjective probability. If you believe it doesn’t make sense to give a probability, try answering it as a yes/no question and then guess the probability that your reasoning is flawed.
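For concreteness, here is one way to turn that fallback into a number (my own illustration; the 10% figure is made up, and I’m assuming a flawed argument leaves you at 50/50):

```latex
% If your reasoning says "yes", P(reasoning flawed) = 0.1,
% and a flawed argument is worth no more than a coin flip:
\[
  P(\text{yes}) \approx (1 - 0.1) \cdot 1 + 0.1 \cdot 0.5 = 0.95
\]
```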
1: Which of these concepts are at least somewhat meaningful?
a) consciousness
b) qualia
2: Do you believe that an agent is conscious if and only if it experiences qualia?
3: Are qualia epiphenomenal?
4: If yes:
a) Would you agree that there is no causal connection between the things we say about qualia and the actual qualia we experience?
b) Are there two kinds of qualia: the ones we talk about and the ones we actually experience?
5: Is it possible to build a computer simulation of a human to any required degree of accuracy?
a) If you did, what is the probability that simulation would be conscious/experience qualia?
b) Would this probability depend on how the simulation is constructed or implemented?
6: What is the probability that we are living in a simulation?
a) If you prefer to talk about how much “measure of our existence” comes from simulations, give that instead.
7: What is the probability that a Theory of Everything would explain consciousness?
8: Would you agree that it makes sense to describe a universe as “real” if and only if it contains conscious observers?
9: Suppose the universe that we see can be described completely by a particular initial state and evolution rule. Suppose also for the sake of simplicity that we’re not in a simulation.
a) What is the probability that our universe is the only “real” one?
b) What is the probability that all such describable universes are “real”?
c) If they are all “real”, are they all equally real or does each get a different “measure”? How is that measure determined?
d) Are simulated universes “real”? How much measure do they inherit from their parent universe?
10: Are fictional universes “real”? Do they contain conscious observers? (Or give a probability)
a) If you answered “no” here but answered “yes” for simulated universes, explain what makes the simulation special and the fiction not.
11: Is this entire survey nonsense?
I’ll save my defense of these answers for my next post, but here are my answers:
1: Both of them.
2: Yes. The way I understand these words, this is a tautology.
3: No. Actually, hell no.
4: N/A
5: Yes. a) I’m not quite sure how to make sense of “probability” here, but something strictly between 0 and 1. b) Yes.
6: Negligibly larger than 0.
7: 1, tautologically.
8: For the purposes of this discussion, “No”. In an unrelated discussion about epistemology, “No, with caveats.”
9: This question is nonsense.
10: No.
11: If I answered “yes” to this, it would imply that I did not think question 11 was nonsense, leading to contradiction.
I’ll try to clarify the questions that came out as nonsense merely because they were badly phrased (rather than due to philosophical disagreement).
5: I basically meant, “can you simulate a human brain on a computer?”. The “any degree of accuracy” thing was just to try to prevent arguments of the kind “well, you haven’t modelled every single atom in every single neuron”, while accepting that a crude chatbot isn’t good enough.
7: By “Theory of Everything” I mean a set of axioms that will in principle predict the result of any physics experiment. Would you expect to see equations such as “consciousness = f(x), qualia = g(x)”? Or would you instead say “these equations describe the physical world to any required level of detail, yet I still don’t see where the consciousness comes from”? (EDIT: I’m still not making sense here, so it may be best just to ignore this one.)
8: People seem more eager to taboo the word “real” than the word “conscious”. Not sure there’s much I can do to rephrase this one. I wrote it in order to frame q9, which was easier to phrase in terms of reality than consciousness.
9: Sorry for the inferential distance. I was basically referring to the concept some people here call “reality fluid”. A better question might be: how do you resolve Eliezer Yudkowsky’s little confusion here?
http://lesswrong.com/lw/19d/the_anthropic_trilemma/
11: This question refers to q2-10 only.
Oh, all right. I’m bored and suggestible.
1 - Both potentially meaningful.
2 - That’s a question about the meanings of words. I don’t object to those constraints on the meanings of those words, though I don’t feel strongly about them.
3 - If “qualia” is meaningful (see 1), then no.
4 - N/A
5 - Ugh. “Any required degree” is damningly vague. Labeling confidence levels as follows:
C1 that it’s in-principle-possible (ipp) to build as good a simulation of a particular human as any other human is.
C2 that it’s ipp to build a good enough simulation of a human that no currently existing test could reliably tell it apart from the original.
C3 that it’s ipp to build one that could pass an “interview test” (a la Turing) with the most knowledgeable currently available judges.
...I’d say C1 > C2 > C3 > 99%, though C2 would also require implementing the computer in neurons in a cloned body.
5a - Depends on the required level of accuracy: ~0% for a stone statue, for example. For any of the above examples, I’d expect it to be conscious/experience qualia as much as the original does.
5b - Not in the sense you mean.
6 - I am not sure that question makes sense. If it does, accurate priors are beyond me. For lack of anything better, I go with a universal prior of 50%.
7 - Mostly that’s a question about definitions… if it doesn’t explain consciousness, is it really a Theory of Everything? But given what I think you mean by ToE: 99+%.
8 - Question about definitions. I’m willing to constrain my definition of “real” that way, for the sake of discussion.
9 - I have no idea and am not convinced the questions make sense, x4.
10 - x5.
11 - Not entirely, though it is a regular student at a nonsensei-run dojo.
No, though parts of it were. Of course, people here who agree with me on that will likely disagree as to which parts those are.
The main virtue of this list, and of dfranke’s list that led to its production, is that it stimulates thinking. For example, your question 9c struck me as somewhat nonsensical, and I think I learned something by trying to read some sense into it. (A space can have many measures. One imposes a particular measure for some purpose. What are we trying to accomplish by imposing a measure here?)
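To make that parenthetical concrete, here is a minimal sketch (my own illustration, not something from the thread): the same countable set of describable universes supports many distinct measures, and nothing in the set itself singles one out.

```latex
% Two legitimate measures on the same countable set U = {u_1, u_2, ...}
% of describable universes. 9c's "how is that measure determined?" is
% exactly the question of which one, if either, to impose, and why.
\[
  \mu_{\mathrm{count}}(A) = |A|
  \quad \text{(counting measure: every universe counts equally)}
\]
\[
  \mu_{K}(A) = \sum_{u \in A} 2^{-K(u)}
  \quad \text{(complexity-weighted: $K(u)$ = length of $u$'s shortest description)}
\]
```

Both satisfy the measure axioms; the complexity-weighted one is the Solomonoff-flavored choice that “reality fluid” talk usually gestures at, but it is a choice made for a purpose (say, predicting observations), not something read off from the set of universes.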
Another thought stimulated by your list of questions was whether it might be interesting/useful/fun to produce a LessWrong version of the PhilPapers survey. My conclusion was that it would probably require more work than it would be worth. But YMMV, so I will put the idea “out there”.
I like your questionnaire much more than the OP’s. I didn’t understand question 7, could you rephrase it? Question 8 seems to be about words. Otherwise everything’s fine :-)