That calibration question actually raised a question of its own for me. I haven’t seen it discussed, though I suspect it has been and I just haven’t found the thread yet; if so, please point me in the right direction.
The point, in short, is that my gut reaction to honing my subjective probability estimates is “I’m not interested in trying to be a computer.” As far as I can tell, all the value I’ve gotten out of rationalist practice has been much more qualitative than probability calculation, such as becoming aware of, and learning to notice, the subjective feeling of protecting a treasured belief from facts. (Insert the standard caveats, please.) I’m not sure what I would gain by taking the time to train myself to generate better numerical estimates of how likely my guesses are to be correct.
In this particular case, I guessed 1650 based on my knowledge that Newton was doing his thing in the mid-to-late seventeenth century. I put 65% confidence on the 15-year interval, figuring the real date might be a decent bit later than 1650 but not terribly much. It turns out I was wrong about that interval, so I now know that, given that situation and the amount of confidence I felt, I should probably use a smaller confidence estimate next time. How much smaller I really don’t know; I suspect that’s what this “HonoreDB’s rule” others are talking about addresses.
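(So that it’s clear what I think I’d be signing up for: here’s my rough understanding of the exercise, sketched in Python. The guesses and confidence levels in the log are made up purely for illustration; the point is just comparing how confident I said I was against how often I was actually right.)

```python
# A rough sketch of what I understand the calibration exercise to be,
# assuming you keep a log of guesses, the confidence you stated for each,
# and whether each turned out to be right. The entries below are invented
# purely for illustration.
from collections import defaultdict

predictions = [
    (0.65, True),   # (stated confidence, was the guess correct?)
    (0.65, False),
    (0.65, True),
    (0.80, True),
    (0.80, True),
    (0.90, False),
]

# Group outcomes by stated confidence and compare each group to its actual hit rate.
buckets = defaultdict(list)
for confidence, correct in predictions:
    buckets[confidence].append(correct)

for confidence, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"Stated {confidence:.0%}: right {hit_rate:.0%} of the time "
          f"({len(outcomes)} guesses)")
```

If the hit rates track the stated confidences over a long enough log, you’re “well calibrated”; if not, the gap tells you which direction to adjust.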
But the thing is, I’m not sure what I would gain if it turned out that 65% was really the right guess to make. If my confidence estimates turn out to be really, really well-honed, so what?
On the flip side, if it turned out that 60% was the right guess to make and I spent some time calibrating my own probability estimates, what would I gain by doing that? That might be valuable for building an AI, but I’m a social scientist, not an AI researcher. I find it valuable to notice the planning fallacy in action, but I don’t find it terribly helpful to be skilled at, say, knowing that my guess of being 85% likely to arrive within five minutes of when I said I would is very likely to be accurate. What’s helpful is knowing that a given trip tends to take about 25 minutes, and that somewhat regularly but seemingly randomly traffic will stretch it to 40, so if I need to be on time I leave 40 minutes before I need to be there and bring something I can do for 15 minutes once I arrive in case traffic is smooth. No explicit probabilities needed.
But I see this business about learning to make better conscious probability estimates come up fairly often on Less Wrong. It was even a theme in the Boot Camps, if I understand correctly. So either quite a number of very smart people who spend their time thinking about rationality have all converged on valuing something that doesn’t actually matter much to rationality, or I’m missing something.
Would someone do me the courtesy of helping me see what I’m apparently blind to here?
ETA: I just realized that the rule everyone is talking about must be what HonoreDB outlined in the lead post of this discussion. Sorry for not noticing that. But my core question remains!
Mmm. Yes, I think you’re right. As I’ve chewed on this, I’ve come to wonder if that’s part of where I’ve been getting the impression that there’s a hard problem in the first place. When I try to reduce the question far enough to notice where the reduction fails, or at least gets a bit lost, my confusion confuses me. I don’t know if that’s progress, but at least it’s different!
I’m afraid I’m a bit slow on the uptake here. Why does this require solipsism? I agree that you can go there with a discussion of consciousness, but I’m not sure how it’s necessarily tied into the fact that consciousness is how you know there’s a question in the first place. Could you explain that a bit more?
Well… Yes, I think I agree in spirit. The term “behavior” is a bit fuzzy in an important way, because a lot of the impression I have that others are conscious comes from a perception that, as far as I can tell, is every bit as basic as my ability to identify a chair by sight. I don’t see a crying person and consciously deduce sadness; the sadness seems self-evident to me. Similarly, I sometimes just get a “feel” for what someone’s emotional state is without really being able to pinpoint why I get that impression. But as long as we’re talking about a generalized sense of “behavior” that includes cues that go unnoticed by the conscious mind, then sure!
It’s not a matter of what qualia buy you. The oddity is that they’re there at all, in anything. I think you’re pointing out that it would be very odd to have a quale-free but otherwise perfect simulation of a human mind. I agree; that would be odd. But what’s odder still is that even though we can be extremely confident there is some mechanism that gets from firing neurons to qualia, we have no clue what it could be. It’s not just that we don’t yet know what it is; as far as I know, we don’t even know what could possibly play the role of such a mechanism.
It’s almost as though we’re in the position of early nineteenth-century natural philosophers trying to make sense of magnetism: “Surely objects can’t act at a distance without a medium, so there must be some kind of stuff between the magnets pulling them toward one another.” Sure, that’s close enough, but if you focus on building more and more powerful microscopes to try to find that medium, you’ll be SOL. The problem in that context is that hidden assumptions brought to bear on the question of what magnetism is keep us from asking the right questions.
Mind you, I don’t know if understanding consciousness will actually turn out to yield that much of a shift in our understanding of the human mind. But it does seem to be slippery in much the same way that magnetism from a billiard-balls-colliding perspective was, as I understand it. I suspect in the end consciousness will turn out to be no more mysterious than magnetism, and we’ll be quite capable of building conscious machines someday.
In case this adds some clarity: my personal best proto-guess is that consciousness is a fuzzy term that applies both to (a) the coordination of various parts of the mind, including sensory input and our sense of social relationships, and (b) the internal narrative that accompanies (a). If this fuzzily stated guess is in the right ballpark, then the reason consciousness seems like such a hard problem is that we can’t ever pin down a part of the brain that is the “seat of consciousness”, nor can we ever say exactly when a signal from the optic nerve turns into vision. Similarly, we can’t just “remove consciousness”, although we can remove parts of it (e.g., cutting out the narrator or messing with the coordination, as with meditation or alcohol).
I wouldn’t be at all surprised if this guess were totally bollocks. But hopefully that gives you some idea of what I’m guessing the end result of solving the consciousness riddle might look like.