There are situations where this isn’t so, namely educational ones, where having a pupil or student express their muddled understanding makes it possible to correct them. But I don’t think you have that sort of didactic context in mind.
I do, actually, which raises the question of why you think I didn’t have that in mind. Did you not realize that LessWrong, and pretty much our entire world civilization, is in such a didactic state? Moreover, if we weren’t in such a didactic state, why does LessWrong exist? Does the art of human rationality not have vast room to improve? This honestly seems like a highly contradictory stance, so I hope I’m not attacking a straw man.
This would appear to be false.
So it would. Thank you for taking the time to track down those articles. As always, it’s given me a few new ideas about how to work with LessWrong.
LWers aren’t (as far as I know) doing anything more about global warming or peak oil than they are about astronomical waste or the (insufficient speed of the) global demographic transition. So what makes the former more legit than the latter?
I was using a rough estimate for legitimacy; I really just want LessWrong to be more of an active force in the world. There are topics and discussions that further this process and there are topics and discussions that simply do not. Similarly, there are topics and discussions where you can pretend you’re disagreeing, but not really honing your rationality in any way by participating. For reference, this conversation isn’t honing our rationality very well; we’re already pretty finely tuned. What’s happening between us now is current-optimum information exchange. I’m providing you with tangible structural components, and you’re providing me with excellent calibration data.
If someone goes, “gee, I used to think I should devote my life to philosophy/writing/computer programming/medicine/social work/law, but now I read LW I just want to throw money at MIRI, or fly to California to help out with CFAR”, and then they actually follow through, one can hardly accuse them of not changing their deep beliefs!
Oh but that is very much exactly what I can do!
In each and every one of those cases you will find that the person had not spent sufficient time reflecting on the usefulness of thought and refined reasoning, or else uFAI and existential risks. The state in which these ideas existed in their mind was not a “deep belief” state, but rather a relatively blank slate primed to receive the first idea that came to mind. uFAI is not a high-class danger; EY is wrong, and the funding and effort is, in large part, illegitimate. I am personally content leaving that fear, effort and funding in place precisely because I can milk it for my own personal benefit. Does every such person who reads the sequences run off to donate or start having nightmares about FAI punishing them for not donating? Absolutely, positively: this is not the case.
Deep beliefs are an entirely different class of psychological construct. Imagine I am very much of the belief that AI cannot be created because there’s something fundamental in the organic brain that a machine cannot replicate. What will reading every AI-relevant article in the sequences get me? Will my deep (and irrational) beliefs be overridden and replaced with AI existential fear? It is very difficult for me to assume you’ll do anything but agree that such things do not happen, but I must leave open the possibility that you’ll see something that I missed. This is a relatively strong belief of mine, but unlike most others, I will never close myself off to new ideas. I am very much of the conviction that child-like plasticity can be maintained so long as I do not make the conscious decision to close myself off and pretend I know more than I actually do.
Eliezer did ask that person to elaborate, but got no response.
Ah. No harm, no foul, then.
You haven’t actually mounted an argument for your own managerial superiority yet.
I’ve been masking heavily. To be honest, my ideas were embedded many replies ago. I’m only responding now to see what all you have to offer, what level you’re at, and what levels and subjects you’re overtly receptive to. (And, on the off chance, to pick up an observer or two.)
I need you to be slightly more self-aware in order [...]
How about this: I need you to spell out what you mean with this “true face of LessWrong” stuff. (And ideally why you think I’m different & special.)
“Self-aware” is a non-trivial aspect here. It’s not something I can communicate simply by asserting it, because you can only trust the assertion so much, especially given that the assertion is about you. Among other things, I’m measuring the rate at which you come to realizations. “If you’ve properly taken the time to reflect on the opening question of this comment,” is more than enough of a clue. That you haven’t put the reflection in simply from my cluing gives me a very detailed picture of how much you currently trust my judgment. I actually thought it was pretty awesome that you responded to the opening question in an isolated reply and had to rush out right after answering it, which gave you much more time to reflect on it than serially reading and replying without expending too much mental effort. I’m really not here to convince you of my societal/managerial competence by direct demonstration; this is just gathering critical calibration data on my part.
I’ve already spelled it out pretty damn concisely. Recognizing the differences between yourself and the people you like to think are very much like you is uniquely up to you.
My own hunch:
Yeah, pretty much. LessWrong’s memetic moment in history isn’t necessarily a point in time at which it is active. That’s sort of the premise of the concern about LessWrong’s immediate memeplex going viral. As the population’s intelligence slowly increases, it’ll eventually hit a sweet spot where LessWrong’s content will resonate with it.
...But yeah, the ban on politics isn’t one of the dangerous LessWrong memes.
I do, actually, which raises the question of why you think I didn’t have that in mind. Did you not realize that LessWrong, and pretty much our entire world civilization, is in such a didactic state?
I did not. And do not, in fact. Those didactic states are states where there’s someone who’s clearly the teacher (primarily interested in passing on knowledge), and someone who’s clearly the pupil (or pupils plural — but however many, the pupil(s) are well aware they’re not the teacher). But on LW and most other places where grown-ups discuss things, conversation doesn’t run so much on a teacher-student model; it’s mostly peers arguing with each other on a roughly even footing, and in a lot of those arguments, nobody’s thinking of themselves as the pupil. Even though people are still learning from each other in such situations, they’re not what I had in mind as “didactic”.
In hindsight I should’ve used the word “pedagogical” rather than “didactic”.
Moreover, if we weren’t in such a didactic state, why does LessWrong exist? Does the art of human rationality not have vast room to improve?
I think these questions are driven by misunderstandings of what I meant by “didactic context”. What I wrote above might clarify.
This would appear to be false.
So it would. Thank you for taking the time to track down those articles.
Thank you for updating in the face of evidence.
I was using a rough estimate for legitimacy; I really just want LessWrong to be more of an active force in the world.
Fair enough.
In each and every one of those cases you will find that the person had not spent sufficient time reflecting on the usefulness of thought and refined reasoning, or else uFAI and existential risks. The state in which these ideas existed in their mind was not a “deep belief” state, but rather a relatively blank slate primed to receive the first idea that came to mind.
I interpreted “deep beliefs” as referring to beliefs that matter enough to affect the believer’s behaviour. Under that interpretation, any new belief that leads to a major, consistent change in someone’s behaviour (e.g. changing jobs to donate thousands to MIRI) would seem to imply a change in deep beliefs. You evidently have a different meaning of “deep belief” in mind, but I still don’t know what it is (even after reading that paragraph and the one after it).
“Self-aware” is a non-trivial aspect here. It’s not something I can communicate simply by asserting it, because you can only trust the assertion so much, especially given that the assertion is about you. [...] “If you’ve properly taken the time to reflect on the opening question of this comment,” is more than enough of a clue. [...] I’m really not here to convince you of my societal/managerial competence by direct demonstration; this is just gathering critical calibration data on my part.
I’ve already spelled it out pretty damn concisely. Recognizing the differences between yourself and the people you like to think are very much like you is uniquely up to you.
Hrmm. Well, that wraps up that branch of the conversation quite tidily.
LessWrong’s memetic moment in history isn’t necessarily a point in time at which it is active.
I suppose that’s true...
That’s sort of the premise of the concern about LessWrong’s immediate memeplex going viral. As the population’s intelligence slowly increases, it’ll eventually hit a sweet spot where LessWrong’s content will resonate with it.
...but I’d still soften that “will” to a “might, someday, conceivably”. Things don’t go viral in so predictable a fashion. (And even when they do, they often go viral as short-term fads.)
Another reason I’m not too worried: the downsides of LW memes invading everyone’s head would be relatively small. People believe all sorts of screamingly irrational and generally worse things already.