If I develop an algorithm A1 for solving certain problems that is more reliable than my own intuition, the fact that A1 is not perfectly reliable is a great reason to try and develop a superior algorithm A2.
Exactly. What I am arguing is 1) that we should be risk-averse about the actions suggested by algorithm A1 if they exceed a certain scope, and 2) that we should devote resources to 2.1) verifying the correctness of A1 against empirical evidence and/or 2.2) trying to improve A1 or develop a superior algorithm A2. What we shouldn’t do is simply accept the actions that are recommended by A1 and follow through on them.
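As a concrete (and entirely illustrative) rendering of that policy, here is a minimal Python sketch. Nothing in it comes from the discussion itself: the function name, the idea of scoring the “stakes” of a recommendation on a 0–1 scale, and the particular threshold are all assumptions made for the example. The point is only that A1’s recommendation is followed when its scope is small, and otherwise flagged for the kind of empirical verification mentioned in 2.1.

```python
# Minimal sketch of the stated policy, with made-up names and scales:
# follow A1 only when the stakes of acting on its recommendation stay
# below a chosen risk threshold; otherwise refuse to act on A1 alone
# and flag the case for empirical verification of A1.

def decide(situation, a1, stakes, risk_threshold=0.9):
    """Return (action, needs_verification) for a given situation.

    a1             -- callable implementing algorithm A1
    stakes         -- estimated scope/impact of following A1 here, in [0, 1]
    risk_threshold -- above this, do not simply follow through on A1
    """
    recommendation = a1(situation)
    if stakes > risk_threshold:
        # Scope too large: be risk-averse rather than acting on A1 directly.
        return ("defer_and_verify_A1", True)
    return (recommendation, False)
```

None of this settles the “compared to what?” question raised in the next reply; it only encodes where the line is being drawn.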
But you’re evading the “compared to what?” question.
I mean, OK, A1 suggests I do X. I perform a variety of empirical tests, on the basis of which I conclude that I should do Y (where Y implies NOT(X)). Fed the results of those empirical tests, A1 continues to suggest X.
Granted that I should be working on developing A2 throughout, I’m still faced with a choice: do I do X or Y?
Yes, I should be cognizant of the risks of trusting A1, so it’s not clear I should do X. I should also be cognizant of the risks of trusting my brain, so it’s not clear I should do Y.
But you’re evading the “compared to what?” question.
If I could answer that then I would probably be the smartest person around here. I only “know” that if people decide that we should walk into death camps because some of our ethical and game theoretic insights suggest that to be the favorable option, then I’d rather go with my intuition and hope for someone like General Thud.
I am sorry for giving such an unsatisfying answer. I simply have no clue. But I have this intuition that something is wrong here and that we should think about it.
if people decide that we should walk into death camps because some of our ethical and game theoretic insights suggest that to be the favorable option, then I’d rather go with my intuition
That’s an answer to my question, then: you consider your intuition more reliable than the predictions of ethical and game theory. Yes?
From a very early age I have suffered from various delusional ideas and feelings. I often have a strong feeling that food is poisoned, or that I have to walk the same way twice because otherwise really bad things will happen. I could mention countless other examples of how my intuition is nothing more than a wreck. So I am possibly better equipped to judge the shortcomings of human intuition than many other people. And yet there are situations in which I would rather trust my intuition.
OK. So a situation arises, and your intuition says you should do X, and the most reliable formal theory you’ve got says you should do Y, where Y implies NOT(X). For some situations you do X, for others you do Y, depending on how much you trust your intuition and how much you trust your formal theory.
As far as I can tell, in this respect you are exactly like everyone else on this site.
You see a difference, though, between yourself and the others on this site… a difference important enough that you continue to point out its implications.
I can’t quite tell what you think that difference is. Some possibilities:
You think they are trusting certain formal theories more than their own intuitions in situations where you would trust your intuition more.
You think they are trusting certain formal theories more than their own intuitions in situations where they ought to trust their intuitions more.
You think their intuitions are poor and they ought to intuit different things.
I can’t speak for XiXiDu, but for myself it’s a combination of all three. In particular, consciously held theories tend, over time, to pull one’s intuitions towards those theories. Thus I worry that by the time they actually wind up in such a conflict between theory and intuition, their intuitions will no longer be up to the task.
This sounds like a general argument in favor of acting on my intuitions rather than implementing theory. For example, if I intuit that turning left at this intersection will get me where I want to go, it seems that this argument suggests that I should turn left at this intersection rather than looking at a map. Am I misunderstanding you?
This sounds like a general argument in favor of acting on my intuitions rather than implementing theory.
Come to think of it, I don’t actually see how that follows from what I said. I said that intuitions can change as a result of consciously held theories, not that this is necessarily bad, depending on the theory (although it would be nice to keep a copy of an old intuition on ROM and do periodic sanity checks).
Sure. But if you start with intuition I1 and theory T at time T1, and subsequently end up with intuition I2 at time T2, what you seem to be endorsing is following I1 at T1 and I2 at T2. At no time are you endorsing following T if T conflicts with I at that time.
Which is what I meant by acting on my intuitions rather than implementing theory.
I’m at a complete loss for what a “sanity check” might look like. That is, OK, I have I2 in my brain, and I1 backed up on ROM, and I can compare them, and they make different judgments. Now what?
If I1 finds the judgement returned by I2 completely absurd even after looking at the argument, recognize that I should be confused and act accordingly.
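As an aside, the “I1 on ROM” sanity check sketched above is concrete enough to write down as toy code. Everything here is an assumption made for illustration: that an intuition can be treated as a callable, that the archived I1 can score how absurd it finds a judgment on a 0–1 scale, and the particular threshold used. The sketch only mirrors the rule just stated: if the archived intuition still finds the new judgment absurd after looking at the argument, flag that case as one to be confused about.

```python
# Toy sketch of the periodic sanity check: compare an archived intuition (I1)
# against the current one (I2) on the same cases, and collect the cases where
# I1 strongly rejects what I2 now concludes.

def sanity_check(i1_archived, i2_current, cases, absurdity_threshold=0.9):
    """Return the cases on which I1 finds I2's judgment absurd.

    i1_archived -- callable(case, judgment) -> absurdity score in [0, 1]
    i2_current  -- callable(case) -> judgment
    """
    confusing = []
    for case in cases:
        judgment = i2_current(case)
        if i1_archived(case, judgment) > absurdity_threshold:
            confusing.append(case)
    # A non-empty result is the cue to "recognize that I should be confused
    # and act accordingly".
    return confusing
```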
This sounds like a general argument in favor of acting on my intuitions rather than implementing theory. For example, if I intuit that turning left at this intersection will get me where I want to go, it seems that this argument suggests that I should turn left at this intersection rather than looking at a map.
No, because I intuitively find that conclusion absurd.
So… is it possible for me to understand what your stated argument actually suggests about X if I don’t know what your intuitive judgments on X are?
I don’t fully understand your question, so I’ll clarify my previous comment in the hope that it helps.
Like I said, I find the notion that I should always rely on my intuition at the expense of looking at a map intuitively absurd, and that intuition is “stronger than” (for lack of a better term) the intuition that I should turn left.
Yeah, I think that answers my question. If all you’ve got are intuitive judgments and a sense of their relative strength in various situations, then I need to know what your intuitive judgments about a situation are before I can apply any argument you make to that situation.
You should evaluate any argument I make on its merits, not on the basis of the intuitions I used to produce it.
Regardless of my evaluation of your argument, given what you’ve told me so far, I cannot apply it to real-world situations without knowing your intuitions.
Or, at the very least, if I do apply it, there’s no reason to expect that you will endorse the result, or that the result will be at all related to what you will do in that situation, since what you will in fact do (if I’ve understood your account correctly) is consult your intuitions in that situation and act accordingly, regardless of the conclusions of your argument.
Not true! The intuitions used constitute evidence! Evaluating only the arguments provided, and not the sampling process that produced them, will (sometimes) lead you to wrong conclusions.
So a situation arises, and your intuition says you should do X, and the most reliable formal theory you’ve got says you should do Y, where Y implies NOT(X).
Accept Y but adjust its associated utility downwards according to your intuition. If after doing so it is still the action with the highest expected utility, then follow through on it and ignore your intuition.
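That last rule is explicit enough to put into a short sketch. The numbers and names below are invented for illustration; the only point is that the theory’s expected utility for each action is discounted by a penalty reflecting how strongly intuition objects, and whatever still has the highest adjusted expected utility is the action to follow through on.

```python
# Sketch of the rule: discount each action's theoretical utility by an
# intuition penalty, then act on whichever adjusted value is highest.

def choose_action(theory_utility, intuition_penalty, actions):
    """theory_utility and intuition_penalty map each action to a number."""
    adjusted = {a: theory_utility[a] - intuition_penalty[a] for a in actions}
    return max(adjusted, key=adjusted.get)

# Theory prefers Y; whether Y survives depends on how strongly intuition objects.
print(choose_action({"X": 0.4, "Y": 1.0}, {"X": 0.0, "Y": 0.3}, ["X", "Y"]))  # -> Y
print(choose_action({"X": 0.4, "Y": 1.0}, {"X": 0.0, "Y": 0.8}, ["X", "Y"]))  # -> X
```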