And yet, I get the sentiment that Valentine seems to have been trying to communicate—it sure seems like there are epistemic rationality techniques that are incredibly valuable and neglected, that one could discover them in the course of doing something about as useless as paperwork, and that talking about how you became more efficient at paperwork would feel like a waste of time to everyone involved.
Is this a real example or one that you’ve made up? That is, do you actually have cases in mind where someone discovered valuable and neglected epistemic rationality techniques in the course of doing paperwork?
I apologize for not providing a good enough example—yes, it was made up. Here’s a more accurate explanation of what causes me to believe that Valentine’s sentiment has merit:
It seems to me that a lot of epistemic confusion can be traced to almost unrelated upstream misconceptions. Examples: thinking that people must be suspended upside down below the equator, once someone understands the notion of an approximately spherical Earth; the apparent puzzle of why mirrors seem to flip left and right but not up and down; the notion that an AGI will automatically be moral. Similarly, it seems plausible to me that while attempting to fix one issue (similar to attempting to fix a confusion of the sort just listed), one could find oneself making almost unrelated upstream epistemic discoveries that might just be significantly more valuable. I acknowledge that these epistemic discoveries also seem object-level and communicable, and I think that the sentiment Valentine expressed could make sense.
It also seems that a lot of rationality skill involves starting out with a bug one notices (“hey, I seem to be really bad at going to the gym”), and then making multiple attempts to fix the problem (ideally focusing on making an intervention as close to the ‘root’ of the issue as possible), and then discovering epistemic rationality techniques that may be applicable in many places. I agree that it seems like a really bad strategy to then not explain why the technique is useful by giving another example where it produces good object-level outcomes, and to instead (given my original example) mention paperwork for a sentence and then spend paragraphs talking about some rationality technique in the abstract.
thinking that people must be suspended upside down below the equator, once someone understands the notion of an approximately spherical Earth
That page seems to be talking about a four-year-old child, who has not yet learned about space, how gravity works, etc. It’s not clear to me that there’s anything to conclude from this about what sorts of epistemic rationality techniques might be useful to adults.
More importantly, it’s not clear to me how any of your examples are supposed to be examples of “epistemic confusion [that] can be traced to almost unrelated upstream misconceptions”. Could you perhaps make the connection more explicitly?
Similarly, it seems plausible to me that while attempting to fix one issue (similar to attempting to fix a confusion of the sort just listed), one could find oneself making almost unrelated upstream epistemic discoveries that might just be significantly more valuable.
And… do you have any examples of this?
It also seems that a lot of rationality skill involves starting out with a bug one notices (“hey, I seem to be really bad at going to the gym”), and then making multiple attempts to fix the problem (ideally focusing on making an intervention as close to the ‘root’ of the issue as possible), and then discovering epistemic rationality techniques that may be applicable in many places.
There’s a lot of “<whatever> seems like it could be true” in your comment. Are you really basing your views on this subject on nothing more than abstract intuition?
I agree that it seems like a really bad strategy to then not explain why the technique is useful by giving another example where it produces good object-level outcomes, and to instead (given my original example) mention paperwork for a sentence and then spend paragraphs talking about some rationality technique in the abstract.
If, hypothetically, you discovered some alleged epistemic rationality technique while doing paperwork, I would certainly want you to either explain how you applied this technique originally (with a worked example involving your paperwork), or explain how the reader might (or how you did) apply the technique to some other domain (with a worked example involving something else, not paperwork), or (even better!) both.
It would be very silly to just talk about the alleged technique, with no demonstration of its purported utility.
If, hypothetically, you discovered some alleged epistemic rationality technique while doing paperwork, I would certainly want you to either explain how you applied this technique originally (with a worked example involving your paperwork), or explain how the reader might (or how you did) apply the technique to some other domain (with a worked example involving something else, not paperwork), or (even better!) both.
This seems sensible, yes.
It would be very silly to just talk about the alleged technique, with no demonstration of its purported utility.
I agree that it seems silly not to demonstrate the utility of a technique when trying to discuss it! I try to give examples to support my reasoning when possible. What I attempted to do with the passage you seem to have taken offense at was to show that I could guess at one causal cognitive chain that would have led Valentine to feel the way they did, and therefore to act and communicate the way they did. It was not to say that I endorse the way Kensho was written, because I did not get anything out of the original post.
There’s a lot of “<whatever> seems like it could be true” in your comment.
Here’s a low-investment attempt to point at the cause of what seems to you to be a verbal tic:
I can tell you that when I put “it seems to me” at the front of so many of my sentences, it’s not false humility, or insecurity, or a verbal tic. (It’s a deliberate reflection on the distance between what exists in reality, and the constellations I’ve sketched on my map.)
If you need me to write up a concrete elaboration to help you get a better idea about this, please tell me.
Are you really basing your views on this subject on nothing more than abstract intuition?
My intuitions on my claim about rationality skill are informed by concrete personal experience, which I haven’t yet described at length, mainly because I expected that a simple, plausible made-up example would serve as well. I apologize for not adding a “(based on experience)” in the sentence you quoted, although I guess I assumed that was deducible.
That page seems to be talking about a four-year-old child, who has not yet learned about space, how gravity works, etc. It’s not clear to me that there’s anything to conclude from this about what sorts of epistemic rationality techniques might be useful to adults.
I’m specifically pointing at examples of deconfusion here, which I consider the main (and probably the only?) strand of epistemic rationality techniques. I concede that I haven’t provided you useful information about how to do it—but that isn’t something I’d like to get into right now, when I am still wrapping my mind around deconfusion.
More importantly, it’s not clear to me how any of your examples are supposed to be examples of “epistemic confusion [that] can be traced to almost unrelated upstream misconceptions”. Could you perhaps make the connection more explicitly?
For the gravity example, the ‘upstream misconception’ is that the kid does not realize that ‘up’ and ‘down’ are relative to the direction in which Earth’s gravity acts on a body. The kid therefore tries to fit the square peg of “humans have heads that point up and legs that point down” into the round hole of a single absolute ‘down’ shared by the whole Earth, and ends up concluding that people below the equator must hang suspended upside down, with their heads pointing toward the ground.
For the AI example, the ‘upstream misconception’ can be[1] conflating the notion of intelligence with ‘the human behaviors and tendencies that I recognize as intelligence’ (and this in turn can be due to other misconceptions, such as not understanding how alien the selection process that underlies evolution is; not understanding that intelligence is not the same as saying impressive things at a social party, but is rather the ability to squeeze the probability distribution of future outcomes into a smaller space; et cetera), then making a reasoning error that amounts to anthropomorphizing the AI, and concluding that the more intelligent a system is, the more it would care about the ‘right things’ that we humans care about.
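(A rough aside, in case it helps: one way to gesture at what ‘squeezing the probability distribution of future outcomes into a smaller space’ could mean more formally is the optimization-power measure from Yudkowsky’s “Measuring Optimization Power”; the symbols o, U, and the baseline distribution below are my own notation for illustration, not anything from this thread.)

\[
\mathrm{OP}(o) \;=\; -\log_2 \, \Pr_{x \sim \mathrm{baseline}}\!\big[\, U(x) \ge U(o) \,\big]
\]

Here o is the outcome the system actually achieves, U is its preference ordering, and the baseline distribution describes what happens when no optimizer is acting; more bits of optimization power mean the system reliably steers the world into a smaller, more improbable region of outcome-space.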
The mirror example is a bit expensive to elaborate on, so I will not do so right now. I apologize.
Anyway, I intended to write this stuff up once I felt that I understood deconfusion well enough to explain it to other people.
Similarly, it seems plausible to me that while attempting to fix one issue (similar to attempting to fix a confusion of the sort just listed), one could find oneself making almost unrelated upstream epistemic discoveries that might just be significantly more valuable.
And… do you have any examples of this?
I find this plausible based on my experience with deconfusion and my current state of understanding of the skill. I do not believe I understand deconfusion well enough to communicate it across an inferential distance as large as the one between you and me, so I do not intend to try.
[1]: There are a myriad of ways you can be confused, and only one way you can be deconfused.