LW straw man: “OMG! You took advantage of a cheap syncretic symmetry between the perspectives of Thomism and computationalist singularitarianism in order to carve up reality using the words of the hated enemy, instead of sitting by while people who know basically nothing about philosophy assert that people who actually do know something about philosophy use the word ‘soul’ to designate something that’s easy to contemptuously throw aside as transparently ridiculous! Despite your initial strong emphasis that your effort was very hasty and largely an attempt at having fun, I am still very skeptical of your mental health, let alone your rationality!”
One-fourth-trolling variation on Will_Newsome: “Aside from the very real importance of not setting a precedent or encouraging a norm of being contemptuous of things you don’t understand, which we’ll get back to… First of all, I was mostly just having fun, and second of all, more importantly, the sort of thing I did there is necessary for people to do if they want to figure out what people are actually saying instead of systematically misguidedly attributing their own inaccurate maps to some contemptible (non-existent) enemy of Reason. Seriously, you are flinching away from things because they’re from the wrong literary genre, even though you’ve never actually tried to understand that literary genre. (By the way, I’ve actually looked at the ideas I’m talking about, and I don’t have the conceptual allergies that keep you from actually trying to understand them on grounds of “epistemic hygiene”, or in other words on grounds of assuming the conclusion of deserved contempt.) If someone took a few minutes to describe the same concepts in a language you had positive affect towards, then you probably wouldn’t even bother to be skeptical. But if I cipher-substitute the actually quite equivalent ideas thought up by the contemptible enemy, then those same ideas become unmotivated insanity, obviously originally dreamed up because of some dozens of cognitive biases. (By the way, “genetic fallacy”; by the way, “try not to criticize people when they’re right”.) And besides charity and curiosity being fundamental virtue-skills in themselves, they’re also necessary if one is to accurately model any complex phenomenon/concept/thing/perspective at all.”
LW straw man: “What is this nonsense? You are trying to tell us that, ‘it is virtuous to engage in lots of purposeful misinterpretation of lots of different models originally constructed by various people who you for some probably-motivatedly-misguided reason already suspect are generally unreasonable, even at the cost of building a primary maximally precise model, assuming for some probably-motivatedly-misguided reason that those two are necessarily at odds’. Or perhaps you are saying, ‘it is generally virtuous to naively pattern match concepts from unfamiliar models to the nearest concept that you can easily imagine from a model you already have’. Or maybe, ‘hasty piecemeal misinterpretations of mainstream Christianity and similar popular religions are a good source of useful ideas’, or ‘all you have to do is lower your epistemic standards and someday you might even become as clever as me’, or ‘just be stupid’. But that’s horrible advice. You are clearly wrong, and thus I am justified in condescendingly admonishing you and guessing that you are yet another sympathizer of the contemptible enemies of Reason. (By the way aren’t those hated enemies of Reason so contemptible? Haha! So contemptible! Om nom nom signalling nom contempt nom nom “rationality” nom.)”
One-third-trolling variation on Will_Newsome: “...So, ignoring the extended mutual epistemic back-patting session… I am seriously warning you: it is important that you become very skillful—fast, thorough, reflective, self-sharpening—at finding or building various decently-motivated-if-imperfect models of the same process/concept/thing so as to form a constellation of useful perspectives on different facets of it, and different ways of carving its joints, and why different facets/carvings might seem differentially important to various people or groups of people in different memetic or psychological contexts, et cetera. Once you have built this and a few other essential skills of sanity, that is when you can be contemptuous of any meme you happen upon that hasn’t already been stamped with your subculture’s approval. Until then you are simply reveling in your ignorance while sipping poison. Self-satisfied insanity is the default, for you or for any other human who doesn’t quite understand that real-life rationality is a set of skills, not just a few tricks or a game or a banner or a type of magic used by Harry James Potter-Evans-Verres. Like any other human, you use your cleverness to systematically ignore the territory rather than try to understand it. Like any other human, you cheer for your side rather than notice confusion. Like any other human, you self-righteously stand on a mountain of cached judgments rather than use curiosity to see anything anew. Have fun with that, humans. But don’t say I didn’t warn you.”
By the way aren’t those hated enemies of Reason so contemptible? Haha! So contemptible! Om nom nom signalling nom contempt nom nom “rationality” nom.
I am seriously warning you: it is important that you become very skillful—fast, thorough, reflective, self-sharpening—at finding or building various decently-motivated-if-imperfect models of the same process/concept/thing so as to form a constellation of useful perspectives on different facets of it, and different ways of carving its joints, and why different facets/carvings might seem differentially important to various people or groups of people in different memetic or psychological contexts, et cetera.
Why do you think this is so important? As far as I can tell, this is not how humanity made progress in the past. Or was it? Did our best scientists and philosophers find or build “various decently-motivated-if-imperfect models of the same process/concept/thing so as to form a constellation of useful perspectives on different facets of it”?
Or do you claim that humanity made progress in the past despite not doing what you suggest, and that we could make much faster progress if we did? If so, what do you base your claim on (besides your intuition)?
Why do you think this is so important? As far as I can tell, this is not how humanity made progress in the past.
This actually seems to me exactly how humanity has made progress—countless fields and paradigms clashing, bringing various perspectives to bear on problems, and making progress. This is a basic philosophy-of-science perspective, common to views as dissimilar as Kuhn’s and Feyerabend’s. There’s no one model that dominates in every field (most models don’t even dominate their own field; if we look at the ones considered most precise and successful, like particle physics or mathematics, we see that various groups don’t even agree on methodology, much less data or results).
But I think the individuals who contributed most to progress did so by concentrating on particular models that they found most promising or interesting. The proliferation of models only happens at the social level. Why think that we can improve upon this by consciously trying to “find or build various decently-motivated-if-imperfect models”?
None of that defends the assertion that humanity made progress by following one single model, which is what I was replying to, as shown by a highly specific quote from your post. Try again.
I didn’t mean to assert that humanity as a whole made progress by following one single model. As you point out, that is pretty absurd. What I was saying is that humanity made progress by (mostly) having each individual human pursue a single model. (I made a similar point before.)
I took Will’s suggestion to be that we, as individuals, should try to pursue many models, even ones that we don’t think are most promising, as long as they are “decently motivated”. (This is contrary to my intuitions, but not obviously absurd, which is why I wanted to ask Will for his reasons.)
I tried to make my point/question clearer in the rest of the paragraph after the sentence you quoted, but looking back I notice that the last sentence there was missing the phrase “as individuals” and therefore didn’t quite serve my purpose.
I think you’re looking at later stages of development than I am. By the time Turing came around, the thousands-of-years-long effort to formalize computation was mostly over; single models get way too much credit because they herald the triumph at the end of the war. It took many thousands of years to get to the point of Church/Goedel/Turing. I think that regarding justification we haven’t even had our Leibniz yet. If you look at Leibniz’s work, he combined philosophy (monadology), engineering (expanding on Pascal’s calculators), cognitive science (alphabet of thought), and symbolic logic, all centered around computation, though at that time there was no such thing as ‘computation’ as we know it (and now we know it so well that we can use it to listen to music or play chess). Archimedes is a much earlier example, but he was less focused. If you look at Darwin, he spent the majority of his time as a very good naturalist, paying close attention to lots of details. His model of evolution came later.
With morality we happen to be up quite a few levels of abstraction, where ‘looking at lots of details’ involves paying close attention to themes from evolutionary game theory, microeconomics, theoretical computer science &c. Look at CFAI to see Eliezer drawing on evolution and evolutionary psychology to establish an extremely straightforward view of ‘justification’, e.g. “Story of a Blob”. It’s easy to stumble around in a haze and fall off a cliff if you don’t have a ton of models like that and, more importantly, a very good sense of the ways in which they’re unsatisfactory.
Those reasons aren’t convincing by themselves, of course. It’d be nice to have a list of big abstract ideas whose formulation we can study on both the individual and memetic levels: e.g. natural selection and computation, and somewhat smaller, less-obviously-analogous ones like general relativity, temperature (there’s a book about its invention), or economics. Unfortunately there are a lot of success-story selection effects, and even looking closely might not be enough to get accurate info. People don’t really have introspective access to how they generate ideas.
Side question: how long do you think it would’ve taken the duo of Leibniz and Pascal to discover algorithmic probability theory if they’d been roommates for eternity?
If so, what do you base your claim on (besides your intuition)?
I think my previous paragraph answered this with representative reasons. This is sort of an odd way to ask the question ’cuz it’s mixing levels of abstraction. Intuition is something you get after looking at a lot of history or practicing a skill for a while or whatever. There are a lot of chess puzzles I can solve just using my intuition, but I wouldn’t have those intuitions unless I’d spent some time on the object level practicing my tactics. So “besides your intuition” means something like “and please give a fine-grained answer” and not literally “besides your intuition”. Anyway, yeah, personal experience plus history of science. I think you can see it in Nesov’s comments from back when, e.g. his looking at things like game semantics and abstract interpretation as sources of inspiration.
I think you’re looking at later stages of development than I am.
You’re right, and perhaps I should better familiarize myself with earlier intellectual history. Do you have any books you can recommend, on Leibniz for example?
This one, perhaps. I haven’t read it, but I feel pretty guilty about that fact. Two FAI-minded people have recommended it to me, though I sort of doubt that they’ve actually read it either. Ah, the joys and sorrows of hypothetical CliffsNotes.
ETA: I think Vassar is the guy to ask about history of science or really history of anything. It’s his fault I’m so interested in history.
I don’t recall attempting to make any (partial) jokes, no. I’m not sure what you’re referring to as “these reactions”. I’ll try to respond to what I think is your (not necessarily explicit) question. I’m sort of responding to everyone in this thread.
When I suspect that a negative judgment of me or some thing(s) associated with me might be objectively correct or well-motivated—when I suspect that I might be objectively unjustified in a way that I hadn’t already foreseen, even if it would be “objectively” unreasonable for me/others to expect me to have foreseen it—well, that causes me to, how should I put it, “freak out”. My omnipresent background fear of being objectively unjustified causes me to actually do things, like update my beliefs, or update my strategy (e.g. by flying to California to volunteer for SingInst), or help people I care about (e.g. by flying back to Tucson on a day’s notice if I fear that someone back home might be in danger). This strong fear of being objectively (e.g. reflectively) morally (thus epistemically) antijustified—contemptible, unvirtuous, not awesome, imperfect—has been part of me forever. You can see why I would put an abnormally large amount of effort into becoming a decent “rationalist”, and why I would have learned abnormally much, abnormally quickly from my year-long stint as a Visiting Fellow. (Side note: It saddens me that there are no longer any venues for such in-depth rationality training, though admittedly it’s hard/impossible for most aspiring rationalists to take advantage of that sort of structure.) You can see why I would take LW’s reactions very, very seriously—unless I had some heavyweight ultra-good reasons for laughing at them instead.
(It’s worth noting that I can make an incorrect epistemic argument and this doesn’t cause me to freak out as long as the moral-epistemic state I was in that caused me to make that argument wasn’t “particularly” unjustified. It’s possible that I should make myself more afraid of ever being literally wrong, but by default I try not to compound my aversions. Reality’s great at doing that without my help.)
“Luckily”, judgments of me or my ideas, as made by most humans, tend to be straightforwardly objectively wrong. Obviously this default of dismissal does not extend to judgments made by humans who know me or my ideas well, e.g. my close friends if the matter is moral in nature and/or some SingInst-related people if the matter is epistemic and/or moral in nature. If someone related to SingInst were to respond like Less Wrong did, then that would be serious cause for concern, “heavyweight ultra-good reasons” be damned; but such people aren’t often wrong, and thus they did not in fact respond in a manner similar to LW’s. Such people know me well enough to know that I am not prone to unreflective stupidity (e.g. prone to unreflective stupidity in the ways that Less Wrong unreflectively interpreted me as being).
If they were like, “The implicit or explicit strategy that motivates you to make comments like that on LW isn’t really helping you achieve your goals, you know that, right?”, then I’d be like, “Burning as much of my credibility as possible with as little splash damage as possible is one of my goals; but yes, I know that half-trolling LW doesn’t actually teach them what they need to learn.” But if they responded like LW did, I’d cock an eyebrow, test if they were trolling me, and if not, tell them to bring up Mage: The Ascension or chakras or something next time they were in earshot of Michael Vassar. And if that didn’t shake their faith in my stupidity, I’d shrug and start to explain my object-level research questions.
The problem of having to avoid the object-level problems when talking to LW is simple enough. My pedagogy is liable to excessive abstraction, lack of clear motivation, and general vagueness if I can’t point out object-level weird slippery ideas in order to demonstrate why it would be stupid not to load your procedural memory with lots and lots of different perspectives on the same thing, or in order to demonstrate the necessity and nature of many other probably-useful procedural skills. This causes people to assume that I’m suggesting certain policies only out of weird aesthetics or a sense of moral duty, when in reality, though aesthetic and moral reasons also count, I’m actually frustrated because I know of many object-level confusions that cannot be dealt with satisfactorily without certain knowledge and fundamental skills, and also can’t be dealt with without avoiding many, many, many different errors that even the best LW members are just not yet experienced enough to avoid. And that would be a problem even if my general audience weren’t already primed to interpret my messages as semi-sensical notes-to-self at best. (“General audience”, for, sadly, my intended audience mostly doesn’t exist yet.)
This cleared things up somewhat for me, but not completely. You might consider making a post that explains why your writing style differs from other writing and what you’re trying to accomplish (in a style that is more easily understood by other LWers) and then linking to it when people get confused (or just habitually).
Is this a (partial) joke? Do you have some particular reason for not taking these reactions seriously?
I use this strategy playing basketball with my younger cousin. If I win, I win. And if I lose, I wasn’t really trying.
This strategy is pretty transparent to Western males with insecurities revolving around zero-sum competitions.
His reason for not taking the reactions seriously is “because he can”.