I see that my conception of the “principle of charity” is either non-trivial to articulate or so inchoate as to be substantially altered by my attempts to do so. Bearing that in mind:
The principle of charity isn’t a propositional thesis; it’s a procedural rule, like the presumption of innocence. It exists because the cost of false positives is high relative to the cost of reducing false positives: the shortest route towards correctness in many cases is the instruction or argumentation of others, many of whom would appear, upon initial contact, to be stupid, mindkilled, dishonest, ignorant, or otherwise unreliable sources upon the subject in question. The behavior proposed by the principle of charity is intended to result in your being able to reliably distinguish between failures of communication and failures of reasoning.
My remark took the above as a basis and proposed behavior to execute in cases where the initial remark strongly suggests that the speaker is thinking irrationally (e.g. an assertion that the modern evolutionary synthesis is grossly incorrect) and your estimate of the time required to evaluate the actual state of the speaker’s reasoning processes is more than you are willing to spend. In such a case, the principle of charity implies two things:
You should consider the nuttiness of the speaker as being an open question with a large prior probability, akin to your belief prior to lifting a dice cup that you have not rolled double-sixes, rather than a closed question with a large posterior probability, akin to your belief that the modern evolutionary synthesis is largely correct.
You should withdraw from the conversation in such a fashion as to emphasize that you are in general willing to put forth the effort to understand what they are saying, but that the moment is not opportune.
the cost of false positives is high relative to the cost of reducing false positives
I don’t see it as self-evident. Or, more precisely, in some situations it is, and in other situations it is not.
The behavior proposed by the principle of charity is intended to result in your being able to reliably distinguish between failures of communication and failures of reasoning.
You are saying (a bit later in your post) that the principle of charity implies two things. The second one is a pure politeness rule and it doesn’t seem to me that the fashion of withdrawing from a conversation will help me “reliably distinguish” anything.
As to the first point, you are basically saying I should ignore evidence (or, rather, shift the evidence into the prior and refuse to estimate the posterior). That doesn’t help me reliably distinguish anything either.
In fact, I don’t see why there should be a particular exception here (“a procedural rule”) to the bog-standard practice of updating on evidence. If my updating process is incorrect, I should fix it and not paper it over with special rules for seemingly-stupid people. If it is reasonably OK, I should just go ahead and update. That will not necessarily result in either a “closed question” or a “large posterior”—it all depends on the particulars.
I’ll say it again: the POC doesn’t mean “believe everyone is sane and intelligent”, it means “treat everyone’s comments as though they were made by a sane, intelligent person”.
I.e., it’s a defeasible assumption. If you fail, you have evidence that it was a dumb comment. If you succeed, you have evidence it wasn’t. Either way, you have evidence, and you are not sitting in an echo chamber where your beliefs about people’s dumbness go forever untested, because you reject out of hand anything that sounds superficially dumb, or was made by someone you have labelled, however unjustly, as dumb.
The PoC tends to be advised in the context of philosophy, where there is a background assumption of infinite amounts of time to consider things. The resource-constrained version would be to interpret comments charitably once you have, for whatever reason, got into a discussion... with the corollary of reserving some space for “I might be wrong” where you haven’t had the resources to test the hypothesis.
background assumption of infinite amounts of time to consider things
LOL. While ars may be longa, vita is certainly brevis. This is a silly assumption, better suited for theology, perhaps—it, at least, promises infinite time. :-)
If I were living in the English countryside around the 18th century I might have a different opinion on the matter, but I do not.
interpret comments charitably once you have, for whatever reason, got into a discussion
It’s not a binary either-or situation. I am willing to interpret comments charitably according to my (updateable) prior of how knowledgeable, competent, and reasonable the writer is. In some situations I would stop and ponder, in others I would roll my eyes and move on.
As I operationalize it, that definition effectively waters down the POC to a degree I suspect most POC proponents would be unhappy with.
Sane, intelligent people occasionally say wrong things; in fact, because of selection effects, it might even be that most of the wrong things I see & hear in real life come from sane, intelligent people. So even if I were to decide that someone who’s just made a wrong-sounding assertion were sane & intelligent, that wouldn’t lead me to treat the assertion substantially more charitably than I otherwise would (and I suspect that the kind of person who likes the(ir conception of the) POC might well say I were being “uncharitable”).
Edit: I changed “To my mind” to “As I operationalize it”. Also, I guess a shorter form of this comment would be: operationalized like that, I think I effectively am applying the POC already, but it doesn’t feel like it from the inside, and I doubt it looks like it from the outside.
You have uncharitably interpreted my formulation to mean “treat everyone’s comments as though they were made by a sane, intelligent person who may or may not have been having an off day”. What kind of guideline is that?
The charitable version would have been “treat everyone’s comments as though they were made by someone sane and intelligent at the time”.
(I’m giving myself half a point for anticipating that someone might reckon I was being uncharitable.)
You have uncharitably interpreted my formulation to mean “treat everyone’s comments as though they were made by a sane, intelligent person who may or may not have been having an off day”. What kind of guideline is that?
A realistic one.
The charitable version would have been “treat everyone’s comments as though they were made by someone sane and intelligent at the time”.
The thing is, that version actually sounds less charitable to me than my interpretation. Why? Well, I see two reasonable ways to interpret your latest formulation.
The first is to interpret “sane and intelligent” as I normally would, as a property of the person, in which case I don’t understand how appending “at the time” makes a meaningful difference. My earlier point that sane, intelligent people say wrong things still applies. Whispering in my ear, “no, seriously, that person who just said the dumb-sounding thing is sane and intelligent right now” is just going to make me say, “right, I’m not denying that; as I said, sanity & intelligence aren’t inconsistent with saying something dumb”.
The second is to insist that “at the time” really is doing some semantic work here, indicating that I need to interpret “sane and intelligent” differently. But what alternative interpretation makes sense in this context? The obvious alternative is that “at the time” is drawing my attention to whatever wrong-sounding comment was just made. But then “sane and intelligent” is really just a camouflaged assertion of the comment’s worthiness, rather than the claimant’s, which reduces this formulation of the POC to “treat everyone’s comments as though the comments are cogent”.
The first interpretation is surely not your intended one because it’s equivalent to one you’ve ruled out. So presumably I have to go with the second interpretation, but it strikes me as transparently uncharitable, because it sounds like a straw version of the POC (“oh, so I’m supposed to treat all comments as cogent, even if they sound idiotic?”).
The third alternative, of course, is that I’m overlooking some third sensible interpretation of your latest formulation, but I don’t see what it is; your comment’s too pithy to point me in the right direction.
But then “sane and intelligent” is really just a camouflaged assertion of the comment’s worthiness, rather than the claimant’s, which reduces this formulation of the POC to “treat everyone’s comments as though the comments are cogent”. [...] (“oh, so I’m supposed to treat all comments as cogent, even if they sound idiotic?”)
Yep.
You have assumed that cannot be the correct interpretation of the PoC, without saying why. In light of your other comments, it could well be that you are assuming that the PoC can only be true by correspondence to reality, or false by lack of correspondence. But norms, guidelines, heuristics, and advice lie on an orthogonal axis to true/false: they are guides to action, not passive reflections. Their equivalent of the true/false axis is the Works/Does Not Work axis. So would adoption of the PoC work as a way of understanding people and calibrating your confidence levels? That is the question.
But norms, guidelines, heuristics, and advice lie on an orthogonal axis to true/false: they are guides to action, not passive reflections. Their equivalent of the true/false axis is the Works/Does Not Work axis. So would adoption of the PoC work as a way of understanding people and calibrating your confidence levels? That is the question.
OK, but that’s not an adequate basis for recommending a given norm/guideline/heuristic. One has to at least sketch an answer to the question, drawing on evidence and/or argument (as RobinZ sought to).
You have assumed that cannot be the correct interpretation of the PoC, without saying why.
Well, because it’s hard for me to believe you really believe that interpretation and understand it in the same way I would naturally operationalize it: namely, noticing and throwing away any initial suspicion I have that a comment’s wrong, and then forcing myself to pretend the comment must be correct in some obscure way.
As soon as I imagine applying that procedure to a concrete case, I cringe at how patently silly & unhelpful it seems. Here’s a recent-ish, specific example of me expressing disagreement with a statement I immediately suspected was incorrect.
What specifically would I have done if I’d treated the seemingly patently wrong comment as cogent instead? Read the comment, thought “that can’t be right”, then shaken my head and decided, “no, let’s say that is right”, and then...? Upvoted the comment? Trusted but verified (i.e. not actually treated the comment as cogent)? Replied with “I presume this comment is correct, great job”? Surely these are not courses of action you mean to recommend (the first & third because they actively support misinformation, the second because I expect you’d find it insufficiently charitable). Surely I am being uncharitable in operationalizing your recommendation this way...even though that does seem to me the most literal, straightforward operationalization open to me. Surely I misunderstand you. That’s why I assumed “that cannot be the correct interpretation” of your POC.
Well, because it’s hard for me to believe you really believe that interpretation and understand it in the same way I would naturally operationalize it: namely, noticing and throwing away any initial suspicion I have that a comment’s wrong, and then forcing myself to pretend the comment must be correct in some obscure way.
If I may step in at this point: “cogent” does not mean “true”. The principle of charity (as I understand it) merely recommends treating any commenter as reasonably sane and intelligent. This does not mean he can’t be wrong—he may be misinformed, he may have made a minor error in reasoning, he may simply not know as much about the subject as you do. Alternatively, you may be misinformed, or have made a minor error in reasoning, or not know as much about the subject as the other commenter...
So the correct course of action then, in my opinion, is to find the source of error and to be polite about it. The example post you linked to was a great example—you provided statistics, backed them up, and linked to your sources. You weren’t rude about it, you simply stated facts. As far as I could see, you treated RomeoStevens as sane, intelligent, and simply lacking in certain pieces of pertinent historical knowledge—which you have now provided.
(As to what RomeoStevens said—it was cogent. That is to say, it was pertinent and relevant to the conversation at the time. That it was wrong does not change the fact that it was cogent; if it had been right it would have been a worthwhile point to make.)
If I may step in at this point: “cogent” does not mean “true”.
Yes, and were I asked to give synonyms for “cogent”, I’d probably say “compelling” or “convincing” [edit: rather than “true”]. But an empirical claim is only compelling or convincing (and hence may only be cogent) if I have grounds for believing it very likely true. Hence “treat all comments as cogent, even if they sound idiotic” translates [edit: for empirical comments, at least] to “treat all comments as if very likely true, even if they sound idiotic”.
Now you mention the issue of relevance, I think that, yeah, I agree that relevance is part of the definition of “cogent”, but I also reckon that relevance is only a necessary condition for cogency, not a sufficient one. And so...
As to what RomeoStevens said—it was cogent. That is to say, it was pertinent and relevant to the conversation at the time.
...I have to push back here. While pertinent, the comment was not only wrong but (to me) obviously very likely wrong, and RomeoStevens gave no evidence for it. So I found it unreasonable, unconvincing, and unpersuasive — the opposite of dictionary definitions of “cogent”. Pertinence & relevance are only a subset of cogency.
The principle of charity (as I understand it) merely recommends treating any commenter as reasonably sane and intelligent. This does not mean he can’t be wrong—he may be misinformed, he may have made a minor error in reasoning, he may simply not know as much about the subject as you do.
That’s why I wrote that that version of the POC strikes me as watered down; someone being “reasonably sane and intelligent” is totally consistent with their just having made a trivial blunder, and is (in my experience) only weak evidence that they haven’t just made a trivial blunder, so “treat commenters as reasonably sane and intelligent” dissolves into “treat commenters pretty much as I’d treat anyone”.
Hence “treat all comments as cogent, even if they sound idiotic” translates [edit: for empirical comments, at least] to “treat all comments as if very likely true, even if they sound idiotic”.
Then “cogent” was probably the wrong word to use.
I’d need a word that means pertinent, relevant, and believed to have been most likely true (or at least useful to say) by the person who said it; but not necessarily actually true.
While pertinent, the comment was not only wrong but (to me) obviously very likely wrong, and RomeoStevens gave no evidence for it. So I found it unreasonable, unconvincing, and unpersuasive — the opposite of dictionary definitions of “cogent”. Pertinence & relevance are only a subset of cogency.
I think at this point, so as not to get stuck on semantics, we should probably taboo the word ‘cogent’.
(Having said that, I do agree anyone with access to the statistics you quoted would most likely find RomeoStevens’s comments unreasonable, unconvincing, and unpersuasive.)
so “treat commenters as reasonably sane and intelligent” dissolves into “treat commenters pretty much as I’d treat anyone”.
Then you may very well be effectively applying the principle already. Looking at your reply to RomeoStevens supports this assertion.
TheAncientGeek assented to that choice of word, so I stuck with it. His conception of the POC might well be different from yours and everyone else’s (which is a reason I’m trying to pin down precisely what TheAncientGeek means).
Fair enough, I was checking different dictionaries (and I’ve hitherto never noticed other people using “cogent” for “pertinent”).
Then you may very well be effectively applying the principle already. Looking at your reply to RomeoStevens supports this assertion.
Maybe, though I’m confused here by TheAncientGeek saying in one breath that I applied the POC to RomeoStevens, but then agreeing (“That’s exactly what I mean.”) in the next breath with a definition of the POC that implies I didn’t apply the POC to RomeoStevens.
I think that you and I are almost entirely in agreement, then. (Not sure about TheAncientGeek).
Maybe, though I’m confused here by TheAncientGeek saying in one breath that I applied the POC to RomeoStevens, but then agreeing (“That’s exactly what I mean.”) in the next breath with a definition of the POC that implies I didn’t apply the POC to RomeoStevens.
I think you’re dealing with double-illusion-of-transparency issues here. He gave you a definition (“treat everyone’s comments as though they were made by someone sane and intelligent at the time”) by which he meant some very specific concept which he best approximated by that phrase (call this Concept A). You then considered this phrase, and mapped it to a similar-but-not-the-same concept (Concept B) which you defined and tried to point out a shortcoming in (“namely, noticing and throwing away any initial suspicion I have that a comment’s wrong, and then forcing myself to pretend the comment must be correct in some obscure way.”).
Now, TheAncientGeek is looking at your words (describing Concept B) and reading into them the very similar Concept A; where your post in response to RomeoStevens satisfies Concept A but not Concept B.
Nailing down the difference between A and B will be extremely tricky and will probably require both of you to describe your concepts in different words several times. (The English language is a remarkably lossy means of communication).
Your diagnosis sounds all too likely. I’d hoped to minimize the risk of this kind of thing by concretizing and focusing on a specific, publicly-observable example, but that might not have helped.
Yes, that was an example of PoC, because satt assumed RomeoStevens had failed to look up the figures, rather than insanely believing that 120,000ish < 500ish.
But norms, guidelines, heuristics, and advice lie on an orthogonal axis to true/false: they are guides to action, not passive reflections. Their equivalent of the true/false axis is the Works/Does Not Work axis. So would adoption of the PoC work as a way of understanding people and calibrating your confidence levels? That is the question.
OK, but that’s not an adequate basis for recommending a given norm/guideline/heuristic. One has to at least sketch an answer to the question, drawing on evidence and/or argument
Yes, but that’s beside the original point. What you call a realistic guideline doesn’t work as a guideline at all, and therefore isn’t a charitable interpretation of the PoC.
Justifying the PoC as something that works at what it is supposed to do is a question that can be answered, but it is a separate question.
namely, noticing and throwing away any initial suspicion I have that a comment’s wrong, and then forcing myself to pretend the comment must be correct in some obscure way.
That’s exactly what I mean.
What specifically would I have done if I’d treated the seemingly patently wrong comment as cogent instead?
Cogent doesn’t mean right. You actually succeeded in treating it as wrong for sane reasons, i.e. failure to check data.
But norms, guidelines, heuristics, and advice lie on an orthogonal axis to true/false: they are guides to action, not passive reflections. [...]
OK, but [...]
Yes, but that’s beside the original point.
You brought it up!
What you call a realistic guideline doesnt work as a guideline at all, and therefore isnt a a charitable interpretation of the PoC.
I continue to think that the version I called realistic is no less workable than your version.
Justifying the PoC as something that works at what it is supposed to do is a question that can be answered, but it is a separate question.
Again, it’s a question you introduced. (And labelled “the question”.) But I’m content to put it aside.
noticing and throwing away any initial suspicion I have that a comment’s wrong, and then forcing myself to pretend the comment must be correct in some obscure way.
That’s exactly what I mean.
But surely it isn’t. Just 8 minutes earlier you wrote that a case where I did the opposite was an “example of PoC”.
But not one that tells you unambiguously what to do, i.e. not a usable guideline at all.
There’s a lot of complaint about this heuristic along the lines that it doesn’t guarantee perfect results... i.e., it’s a heuristic.
And now there is the complaint that it’s not realistic, that it doesn’t reflect reality.
Ideal rationalists can stop reading now.
Everybody else: you’re biased. Specifically, overconfident. Overconfidence makes people overestimate their ability to understand what people are saying, and underestimate the rationality of others. The PoC is a heuristic which corrects those. As a heuristic, an approximate method, it is based on the principle that overshooting the amount of sense people are making is better than undershooting. Overshooting would be a problem if there were some goldilocks alternative, some way of getting things exactly right. There isn’t. The voice in your head that tells you you are doing just fine is the voice of your bias.
But not one that tells you unambiguously what to do, i.e. not a usable guideline at all.
I don’t see how this applies any more to the “may or may not have been having an off day” version than it does to your original. They’re about as vague as each other.
Overconfidence makes people overestimate their ability to understand what people are saying, and underestimate the rationality of others. The PoC is a heuristic which corrects those. As a heuristic, an approximate method, it is based on the principle that overshooting the amount of sense people are making is better than undershooting.
Understood. But it’s not obvious to me that “the principle” is correct, nor is it obvious that a sufficiently strong POC is better than my more usual approach of expressing disagreement and/or asking sceptical questions (if I care enough to respond in the first place).
But not one that tells you unambiguously what to do, i.e. not a usable guideline at all.
I don’t see how this applies any more to the “may or may not have been having an off day” version than it does to your original. They’re about as vague as each other.
Mine implies a heuristic of “make repeated attempts at re-interpreting the comment using different background assumptions”. What does yours imply?
Understood. But it’s not obvious to me that “the principle” is correct,
As I have explained, it provides its own evidence.
nor is it obvious that a sufficiently strong POC is better than my more usual approach of expressing disagreement and/or asking sceptical questions (if I care enough to respond in the first place).
Neither of those is much good if interpreting someone who died 100 years ago.
Mine implies a heuristic of “make repeated attempts at re-intepreting the comment using different background assumptions”.
I don’t see how “treat everyone’s comments as though they were made by a sane, intelligent person” entails that without extra background assumptions. And I expect that once those extra assumptions are spelled out, the “may or may not have been having an off day” version will imply the same action(s) as your original version.
As I have explained, it provides its own evidence.
Well, when I’ve disagreed with people in discussions, my own experience has been that behaving according to my baseline impression of how much sense they’re making gets me closer to understanding than consciously inflating my impression of how much sense they’re making.
Neither of those is much good if interpreting someone who died 100 years ago.
A fair point, but one of minimal practical import. Almost all of the disagreements which confront me in my life are disagreements with live people.
it means “treat everyone’s comments as though they were made by a sane, intelligent person”.
I don’t like this rule. My approach is simpler: attempt to understand what the person means. This does not require me to treat him as sane or intelligent.
The PoC is a way of breaking down “understand what the other person says” into smaller steps, not something entirely different. Treating your own mental processes as a black box that always delivers the right answer is a great way to stay in the grip of bias.
The prior comment leads directly into this one: upon what grounds do I assert that an inexpensive test exists to change my beliefs about the rationality of an unfamiliar discussant? I realize that it is not true in the general case that the plural of anecdote is data, and much of the following lacks citations, but:
Many people raised to believe that evolution is false because it contradicts their religion change their minds in their first college biology class. (I can’t attest to this from personal experience—this is something I’ve seen frequently reported or alluded to via blogs like Slacktivist.)
An intelligent, well-meaning, LessWrongian fellow was (hopefully-)almost driven out of my local Less Wrong meetup in no small part because a number of prominent members accused him of (essentially) being a troll. In the course of a few hours conversation between myself and a couple others focused on figuring out what he actually meant, I was able to determine that (a) he misunderstood the subject of conversation he had entered, (b) he was unskilled at elaborating in a way that clarified his meaning when confusion occurred, and (c) he was an intelligent, well-meaning, LessWrongian fellow whose participation in future meetups I would value.
I am unable to provide the details of this particular example (it was relayed to me in confidence), but an acquaintance of mine was a member of a group which was attempting to resolve an elementary technical challenge—roughly the equivalent of setting up a target-shooting range with a safe backstop in terms of training required. A proposal was made that was obviously unsatisfactory—the equivalent of proposing that the targets be laid on the ground and everyone shoot straight down from a second-story window—and my acquaintance’s objection to it on common-sense grounds was treated with a response equivalent to, “You’re Japanese, what would you know about firearms?” (In point of fact, while no metaphorical gunsmith, my acquaintance’s knowledge was easily sufficient to teach a Boy Scout merit badge class.)
In my first experience on what was then known as the Internet Infidels Discussion Board, my propensity to ask “what do you mean by x” sufficed to transform a frustrated, impatient discussant into a cheerful, enthusiastic one—and simultaneously demonstrate that said discussant’s arguments were worthless in a way which made it easy to close the argument.
In other words, I do not often see the case in which performing the tests implied by the principle of charity—e.g. “are you saying [paraphrase]?”—is wasteful, and I frequently see cases where failing to do so has been.
What you are talking about doesn’t fall under the principle of charity (in my interpretation of it). It falls under the very general rubric of “don’t be stupid yourself”.
In particular, considering that the speaker expresses his view within a framework which is different from your default framework is not an application of the principle of charity—it’s an application of the principle “don’t be stupid, of course people talk within their frameworks, not within your framework”.
I might be arguing for something different than your principle of charity. What I am arguing for—and I realize now that I haven’t actually explained a procedure, just motivations for one—is along the following lines:
When somebody says something prima facie wrong, there are several possibilities, regarding both their intended meaning:
They may have meant exactly what you heard.
They may have meant something else, but worded it poorly.
They may have been engaging in some rhetorical maneuver or joke.
They may have been deceiving themselves.
They may have been intentionally trolling.
They may have been lying.
...and your ability to infer such:
Their remark may resemble some reasonable assertion, worded badly.
Their remark may be explicable as ironic or joking in some sense.
Their remark may conform to some plausible bias of reasoning.
Their remark may seem like a lie they would find useful.*
Their remark may represent an attempt to irritate you for their own pleasure.*
Their remark may simply be stupid.
Their remark may allow more than one of the above interpretations.
What my interpretation of the principle of charity suggests as an elementary course of action in this situation is, with an appropriate degree of polite confusion, to ask for clarification or elaboration, and to accompany this request with paraphrases of the most likely interpretations you can identify of their remarks excluding the ones I marked with asterisks.
Depending on their actual intent, this has a good chance of making them:
Elucidate their reasoning behind the unbelievable remark (or admit to being unable to do so);
Correct their misstatement (or your misinterpretation—the difference is irrelevant);
Admit to their failed humor;
Admit to their being unable to support their assertion, back off from it, or sputter incoherently;
Grow impatient at your failure to rise to their goading and give up; or
Back off from (or admit to, or be proven guilty of) their now-unsupportable deception.
In the first three or four cases, you have managed to advance the conversation with a well-meaning discussant without insult; in the latter two or three, you have thwarted the goals of an ill-intentioned one—especially, in the last case, because you haven’t allowed them the option of distracting everyone from your refutations by claiming you insulted them. (Even if they do so claim, it will be obvious that they have no just cause to be.)
I say this falls under the principle of charity because it involves (a) granting them, at least rhetorically, the best possible motives, and (b) giving them enough of your time and attention to seek engagement with their meaning, not just a lazy gloss of their words.
A small addendum, that I realized I omitted from my prior arguments in favor of the principle of charity:
Because I make a habit of asking for clarification when I don’t understand, offering clarification when not understood, and preferring “I don’t agree with your assertion” to “you are being stupid”, people are happier to talk to me. Among the costs of always responding to what people say instead of your best understanding of what they mean—especially if you are quick to dismiss people when their statements are flawed—is that talking to you becomes costly: I have to word my statements precisely to ensure that I have not said something I do not mean, meant something I did not say, or made claims you will demand support for without support. If, on the other hand, I am confident that you will gladly allow me to correct my errors of presentation, I can simply speak, and fix anything I say wrong as it comes up.
Which, in turn, means that I can learn from a lot of people who would not want to speak to me otherwise.
responding to what people say instead of your best understanding of what they mean
Again: I completely agree that you should make your best effort to understand what other people actually mean. I do not call this charity—it sounds like SOP and “just don’t be an idiot yourself” to me.
I don’t see it as self-evident. Or, more precisely, in some situations it is, and in other situations it is not.
You’re right: it’s not self-evident. I’ll go ahead and post a followup comment discussing what sort of evidential support the assertion has.
As to the first point, you are basically saying I should ignore evidence (or, rather, shift the evidence into the prior and refuse to estimate the posterior). That doesn’t help me reliably distinguish anything either.
My usage of the terms “prior” and “posterior” was obviously mistaken. What I wanted to communicate with those terms was communicated by the analogies to the dice cup and to the scientific theory: it’s perfectly possible for two hypotheses to have the same present probability but different expectations of future change to that probability. I have high confidence that an inexpensive test—lifting the dice cup—will change my beliefs about the value of the die roll by many orders of magnitude, and low confidence that any comparable test exists to affect my confidence regarding the scientific theory.
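The dice-cup asymmetry can be made concrete with a toy calculation (my own illustration, not part of the original exchange): for a cheap test that settles a question outright, the expected movement of your belief is large, whereas for a hypothesis with no comparable test available, the expected movement from anything you can actually do is near zero, even at the same current probability.

```python
from fractions import Fraction

def expected_shift(p):
    """Expected |posterior - prior| when a cheap test settles the question outright:
    with probability p you see the confirming outcome (posterior -> 1, shift 1 - p),
    otherwise you don't (posterior -> 0, shift p)."""
    return p * (1 - p) + (1 - p) * p   # = 2 * p * (1 - p)

p_dice = Fraction(1, 36)      # prior P(double sixes) under the cup
print(expected_shift(p_dice)) # 35/648, roughly 0.054: lifting the cup moves the belief a lot
```

For the scientific theory, by contrast, no available observation has an expected shift anywhere near this, which is the sense in which the two beliefs differ despite both being well-defined probabilities.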
What I wanted to communicate with those terms was communicated by the analogies to the dice cup and to the scientific theory: it’s perfectly possible for two hypotheses to have the same present probability but different expectations of future change to that probability.
I think you are talking about what in local parlance is called a “weak prior” vs. a “strong prior”. Bayesian updating involves assigning relative importance to the prior and to the evidence. A weak prior is easily changed even by fairly insignificant evidence. On the other hand, it takes a lot of solid evidence to move a strong prior.
In this terminology, your pre-roll estimation of the probability of double sixes is a weak prior—the evidence of an actual roll will totally overwhelm it. But your estimation of the correctness of the modern evolutionary theory is a strong prior—it will take much convincing evidence to persuade you that the theory is not correct after all.
Of course, the posterior of a previous update becomes the prior of the next update.
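The weak/strong distinction can be made concrete with a toy Beta-Binomial model (my own sketch; the priors and evidence counts are invented for illustration). Two priors with the same mean react very differently to identical evidence, and chaining two updates gives the same result as one combined update:

```python
def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution: a / (a + b)."""
    return a / (a + b)

def update(a, b, heads, flips):
    """Conjugate Bayesian update of a Beta prior on a coin's bias,
    given `heads` successes in `flips` trials."""
    return a + heads, b + (flips - heads)

# Two priors with the same mean (0.5) but different strengths.
weak = (1, 1)        # equivalent to 2 pseudo-observations
strong = (500, 500)  # equivalent to 1000 pseudo-observations

evidence = (8, 10)   # 8 heads in 10 flips

for name, prior in [("weak", weak), ("strong", strong)]:
    posterior = update(*prior, *evidence)
    print(f"{name}: {beta_mean(*prior):.3f} -> {beta_mean(*posterior):.3f}")
# weak: 0.500 -> 0.750
# strong: 0.500 -> 0.503

# And the posterior of one update is the prior of the next:
assert update(*update(1, 1, 4, 5), 4, 5) == update(1, 1, 8, 10)
```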
Using this language, then, you are saying that prima facie evidence of someone’s stupidity should be a minor update to the strong prior that she is actually a smart, reasonable, and coherent human being.
Using this language, then, you are saying that prima facie evidence of someone’s stupidity should be a minor update to the strong prior that she is actually a smart, reasonable, and coherent human being.
Oh, dear—that’s not what I meant at all. I meant that—absent a strong prior—the utterance of a prima facie absurdity should not create a strong prior that the speaker is stupid, unreasonable, or incoherent. It’s entirely possible that ten minutes of conversation will suffice to make a strong prior out of this weaker one—there’s someone arguing for dualism on a webcomic forum I (in)frequent along the same lines as Chalmers’ “hard problem of consciousness”, and it took less than ten posts to establish pretty confidently that the same refutations would apply—but as the history of DIPS (defense-independent pitching statistics) shows, it’s entirely possible for an idea to be as correct as “the earth is a sphere, not a plane” and nevertheless be taken as prima facie absurd.
(As the metaphor implies, DIPS is not quite correct, but it would be more accurate to describe its successors as “fixing DIPS” than as “showing that DIPS was completely wrongheaded”.)
I meant that—absent a strong prior—the utterance of a prima facie absurdity should not create a strong prior that the speaker is stupid, unreasonable, or incoherent.
Oh, I agree with that.
What I am saying is that evidence of stupidity should lead you to raise your estimates of the probability that the speaker is stupid. The principle of charity should not prevent that from happening. Of course evidence of stupidity should not make you close the case, declare someone irretrievably stupid, and stop considering any further evidence.
As an aside, I treat how a person argues as a much better indicator of stupidity than what he argues. YMMV, of course.
What I am saying is that evidence of stupidity should lead you to raise your estimates of the probability that the speaker is stupid.
...in the context during which they exhibited the behavior which generated said evidence, of course. In broader contexts, or other contexts? To a much lesser extent, and not (usually) strongly in the strong-prior sense, but again, yes. That you should always be capable of considering further evidence is—I am glad to say—so universally accepted a proposition in this forum that I do not bother to enunciate it, but I take no issue with drawing conclusions from a sufficient body of evidence.
Come to think, you might be amused by this fictional dialogue about a mendacious former politician, illustrating the ridiculousness of conflating “never assume that someone is arguing in bad faith” and “never assert that someone is arguing in bad faith”. (The author also posted a sequel, if you enjoy the first.)
As an aside, I treat how a person argues as a much better indicator of stupidity than what he argues. YMMV, of course.
I’m afraid that I would have about as much luck barking like a duck as enunciating how I evaluate the intelligence (or reasonableness, or honesty, or...) of those I converse with. YMMV, indeed.
Using this language, then, you are saying that prima facie evidence of someone’s stupidity should be a minor update to the strong prior that she is actually a smart, reasonable, and coherent human being. And I don’t see why this should be so.
The fundamental attribution error is about underestimating the importance of external drivers (the particular situation, random chance, etc.) and overestimating the importance of internal factors (personality, beliefs, etc.) as an explanation for observed actions.
If a person in a discussion is spewing nonsense, it is rare that external factors are making her do it (other than a variety of mind-altering chemicals). The indicators of stupidity are NOT what position a person argues or how much knowledge about the subject she has—it’s how she does it. And an inability, e.g., to follow basic logic is hard to attribute to external factors.
This discussion has got badly derailed. You are taking it that there is some robust fact about someone’s lack of rationality or intelligence which may or may not be explained by internal or external factors.
The point is that you cannot make a reliable judgement about someone’s rationality or intelligence unless you have understood what they are saying, and you cannot reliably understand what they are saying unless you treat it as if it were the product of a rational and intelligent person. You can go to “stupid” when all attempts have failed, but not before.
I think it’s true, on roughly these grounds: taking yourself to understand what someone is saying entails thinking that almost all of their beliefs (I mean ‘belief’ in the broad sense, so as to include my beliefs about the colors of objects in the room) are true. The reason is that unless you assume almost all of a person’s (relevant) beliefs are true, the possibility space for judgements about what they mean gets very big, very fast. So if ‘generally understanding what someone is telling you’ means having a fairly limited possibility space, you only get this on the assumption that the person talking to you has mostly true beliefs. This, of course, doesn’t mean they have to be rational in the LW sense, or even very intelligent. The most stupid and irrational (in the LW sense) of us still have mostly true beliefs.
I guess the trick is to imagine what it would be to talk to someone who you thought had on the whole false beliefs. Suppose they said ‘pass me the hammer’. What do you think they meant by that? Assuming they have mostly or all false beliefs relevant to the utterance, they don’t know what a hammer is or what ‘passing’ involves. They don’t know anything about what’s in the room, or who you are, or what you are, or even if they took themselves to be talking to you, or talking at all. The possibility space for what they took themselves to be saying is too large to manage, much larger than, for example, the possibility space including all and only every utterance and thought that’s ever been had by anyone. We can say things like ‘they may have thought they were talking about cats or black holes or triangles’ but even that assumes vastly more truth and reason in the person that we’ve assumed we can anticipate.
Generally speaking, understanding what a person means implies reconstructing their framework of meaning and reference that exists in their mind as the context to what they said.
Reconstructing such a framework does NOT require that you consider it (or the whole person) sane or rational.
Reconstructing such a framework does NOT require that you consider it (or the whole person) sane or rational.
Well, there are two questions here: 1) is it in principle necessary to assume your interlocutors are sane and rational, and 2) is it as a matter of practical necessity a fact that we always do assume our interlocutors are sane and rational. I’m not sure about the first one, but I am pretty sure about the second: the possibility space for reconstructing the meaning of someone speaking to you is only manageable if you assume that they’re broadly sane, rational, and have mostly true beliefs. I’d be interested to know which of these you’re arguing about.
Also, we should probably taboo ‘sane’ and ‘rational’. People around here have a tendency to use these words in an exaggerated way to mean that someone has a kind of specific training in probability theory, statistics, biases, etc. Obviously people who have none of these things, like people living thousands of years ago, were sane and rational in the conventional sense of these terms, and they had mostly true beliefs even by any standard we would apply today.
I am pretty sure about the second: the possibility space for reconstructing the meaning of someone speaking to you is only manageable if you assume that they’re broadly sane, rational, and have mostly true beliefs.
I don’t think so. Two counter-examples:
I can discuss fine points of theology with someone without believing in God. For example, I can understand the meaning of the phrase “Jesus’ self-sacrifice washes away the original sin” without accepting that Christianity is “mostly true” or “rational”.
Consider a psychotherapist talking to a patient, let’s say a delusional one. Understanding the delusion does not require the psychotherapist to believe that the patient is sane.
I can discuss fine points of theology with someone without believing in God. For example, I can understand the meaning of the phrase “Jesus’ self-sacrifice washes away the original sin” without accepting that Christianity is “mostly true” or “rational”.
You’re not being imaginative enough: you’re thinking about someone with almost all true beliefs (including true beliefs about what Christians tend to say), and a couple of stand-out false beliefs about how the universe works as a whole. I want you to imagine talking to someone with mostly false beliefs about the subject at hand. So you can’t assume by ‘Jesus’ self-sacrifice washes away the original sin’ that they’re talking about anything you know anything about, because you can’t assume they are connecting with any theology you’ve ever heard of. Or even that they’re talking about theology. Or even objects or events in any sense you’re familiar with.
Consider a psychotherapist talking to a patient, let’s say a delusional one. Understanding the delusion does not require the psychotherapist to believe that the patient is sane.
I think, again, delusional people are remarkable for having some unaccountably false beliefs, not for having mostly false beliefs. People with mostly false beliefs, I think, wouldn’t be recognizable even as being conscious or aware of their surroundings (because they’re not!).
Well, my point is that as a matter of course, you assume everyone you talk to has mostly true beliefs, and for the most part thinks rationally. We’re talking about ‘people’ with mostly or all false beliefs just to show that we don’t have any experience with such creatures.
Bigger picture: the principle of charity, that is the assumption that whoever you are talking to is mostly right and mostly rational, isn’t something you ought to hold, it’s something you have no choice but to hold. The principle of charity is a precondition on understanding anyone at all, even recognizing that they have a mind.
People will have mostly true beliefs, but they might not have true beliefs in the areas under concern. For obvious reasons, people’s irrationality is likely to be disproportionately present in the beliefs on which they disagree with others. So the fact that you need to be charitable in assuming people have mostly true beliefs may not be practically useful—I’m sure a creationist rationally thinks water is wet, but if I’m arguing with him, that subject probably won’t come up as much as creationism.
That’s true, but I feel like a classic LW point can be made here: suppose it turns out some people can do magic. That might seem like a big change, but in fact magic will then just be subject to the same empirical investigation as everything else, and ultimately the same integration into physical theory as everything else.
So while I agree with you that when we specify a topic, we can have broader disagreement, that disagreement is built on and made possible by very general agreement about everything else. Beliefs are holistic, not atomic, and we can’t partition them off while making any sense of them. We’re never just talking about some specific subject matter, but rather emphasizing some subject matter on the background of all our other beliefs (most of which must be true).
The thought, in short, is that beliefs are of a nature to be true, in the way dogs naturally have four legs. Some don’t, because something went wrong, but we can only understand the defect in these by having the basic nature of beliefs, namely truth, in the background.
That might seem like a big change, but in fact magic will then just be subject to the same empirical investigation as everything else, and ultimately the same integration into physical theory as everything else.
That could be true but doesn’t have to be true. Our ontological assumptions might also turn out to be mistaken.
That could be true but doesn’t have to be true. Our ontological assumptions might also turn out to be mistaken.
True, and a discovery like that might require us to make some pretty fundamental changes. But I don’t think Morpheus could be right about the universe’s relation to math. No universe, I take it, ‘runs’ on math in anything but the loosest figurative sense. The universe we live in is subject to mathematical analysis, and what reason could we have for thinking any universe could fail to be so? I can’t say for certain, of course, that every possible universe must run on math, but I feel safe in claiming that we’ve never imagined a universe, in fiction or through something like religion, which would fail to run on math.
More broadly speaking, anything that is going to be knowable at all is going to be rational and subject to rational understanding. Even if someone has some very false beliefs, their beliefs are false, not just jibber-jabber (and if they are just jibber-jabber then you’re not talking to a person). Even false beliefs are going to have a rational structure.
I can’t say for certain, of course, that every possible universe must run on math, but I feel safe in claiming that we’ve never imagined a universe, in fiction or through something like religion, which would fail to run on math.
That is a fact about you, not a fact about the universe. Nobody could imagine light being both a particle and a wave, for example, until their study of nature forced them to.
People could imagine such a thing before studying nature showed they needed to; they just didn’t. I think there’s a difference between a concept that people only don’t imagine, and a concept that people can’t imagine. The latter may mean that the concept is incoherent or has an intrinsic flaw, which the former doesn’t.
People could imagine such a thing before studying nature showed they needed to; they just didn’t. I think there’s a difference between a concept that people only don’t imagine, and a concept that people can’t imagine.
In the interest of not having this discussion degenerate into an argument about what “could” means, I would like to point out that your and hen’s only evidence that you couldn’t imagine a world that doesn’t run on math is that you haven’t.
For one thing, “math” trivially happens to run on the world, and corresponds to what happens when you have a chain of interactions. Specifically, to how one chain of physical interactions (apples being eaten, for example) combined with another that looks dissimilar (a binary adder) ends up with the conclusion that the apples were counted correctly, or how the difference in count between the two processes of counting (none) corresponds to another dissimilar process (the reasoning behind binary arithmetic).
As long as there are any correspondences at all between different physical processes, you’ll be able to kind of imagine that the world runs on the world arranged differently, and so it would appear that the world “runs on math”.
If we were to discover some new laws of physics that were producing incalculable outcomes, we would just utilize those laws in some sort of computer and co-opt them as part of “math”, substituting processes for equivalent processes. That’s how we came up with math in the first place.
edit: to summarize, I think “the world runs on math” is a really confused way to look at how the world relates to the practice of mathematics inside it. I can perfectly well say that the world doesn’t run on math any more than radio waves are transmitted by a mechanical aether made of gears, springs, and weights, and have the exact same expectations about everything.
It seems to me that as long as there’s anything that is describable in the loosest sense, “the world runs on math” would be taken to be true.
I mean, look at this: some people believe literally that our universe is a “mathematical object”, whatever that means (Tegmarkery), and we haven’t even got a candidate TOE that works.
edit: I think the issue is that Morpheus confuses “made of gears” with “predictable by gears”. Time is not made of gears, and neither are astronomical objects, but a clock is very useful nonetheless.
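One way to read the correspondence claim above: two physically dissimilar processes, merging piles of apples and running a ripple-carry adder on bit strings, agree on a count, and it is this agreement that gets glossed as “the world runs on math”. A throwaway sketch of my own (not from the original comment):

```python
def binary_add(x, y):
    """Ripple-carry addition on binary strings, bit by bit."""
    width = max(len(x), len(y))
    a, b = x.zfill(width), y.zfill(width)
    carry, out = 0, []
    for i, j in zip(reversed(a), reversed(b)):
        s = int(i) + int(j) + carry
        out.append(str(s % 2))
        carry = s // 2
    if carry:
        out.append("1")
    return "".join(reversed(out))

pile_a, pile_b = ["apple"] * 3, ["apple"] * 5
count_by_merging = len(pile_a + pile_b)           # merge the piles and count them
count_by_adder = int(binary_add("11", "101"), 2)  # 3 + 5 via a dissimilar process
assert count_by_merging == count_by_adder == 8
```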
I don’t see why “describable” would necessarily imply “describable mathematically”. I can imagine a qualia-only universe, and I can imagine the ability to describe qualia. As things stand, there are a number of things that can’t be described mathematically.
What’s your evidence for this? Keep in mind that the history of science is full of people asserting that X has to be the case because they couldn’t imagine the world being otherwise, only for subsequent discoveries to show that X is not in fact the case.
Well, the most famous (or infamous) is Kant’s argument that space must be flat (in the Euclidean sense) because the human mind is incapable of imagining it to be otherwise.
Another example was Lucretius’s argument against the theory that the earth is round: if the earth were round and things fell towards its center, then in which direction would an object at the center fall?
Not to mention the standard argument against the universe having a beginning “what happened before it?”
I don’t intend to bicker; I think your point is a good one independently of these examples. In any case, I don’t think at least the first two of these are examples of the phenomenon you’re talking about.
Well, the most famous (or infamous) is Kant’s argument the space must be flat (in the Euclidean sense) because the human mind is incapable of imagining it to be otherwise.
I think this comes up in the sequences as an example of the mind-projection fallacy, but that’s not right. Kant did not take himself to be saying anything about the world outside the mind when he said that space was flat. He only took himself to be talking about the world as it appears to us. Space, so far as Kant was concerned, was part of the structure of perception, not the universe. So in the Critique of Pure Reason, he says:
...if we remove our own subject or even only the subjective constitution of the senses in general, then all constitution, all relations of objects in space and time, indeed space and time themselves would disappear, and as appearances they cannot exist in themselves, but only in us. What may be the case with objects in themselves and abstracted from all this receptivity of our sensibility remains entirely unknown to us. (A42/B59–60)
So Kant is pretty explicit that he’s not making a claim about the world, but about the way we perceive it. Kant would very likely poke you in the chest and say “No, you’re committing the mind-projection fallacy by thinking that space is even in the world, rather than just a form of perception. And don’t tell me about the mind-projection fallacy anyway, I invented that whole move.”
Another example was Lucretius’s argument against the theory that the earth is round: if the earth were round and things fell towards its center than in which direction would an object at the center fall?
This also isn’t an example, because the idea of a spherical world had in fact been imagined in detail by Plato (with whom Lucretius seems to be arguing), Aristotle, and many of Lucretius’ contemporaries and predecessors. Lucretius’ point couldn’t have been that a round earth is unimaginable, but that it was inconsistent with an analysis of the motions of simple bodies in terms of up and down: you can’t say that fire is of a nature to go up if up is entirely relative. Or I suppose, you can say that but you’d have to come up with a more complicated account of natures.
Kant did not take himself to be saying anything about the world outside the mind when he said that space was flat. He only took himself to be talking about the world as it appears to us. Space, so far as Kant was concerned, was part of the structure of perception, not the universe.
And in particular he claimed that this showed it had to be Euclidean because humans couldn’t imagine it otherwise. Well, we now know it’s not Euclidean and people can imagine it that way (I suppose you could dispute this, but that gets into exactly what we mean by “imagine” and attempting to argue about other people’s qualia).
And in particular he claimed that this showed it had to be Euclidean because humans couldn’t imagine it otherwise.
No, he never says that. Feel free to cite something from Kant’s writing, or the SEP or something. I may be wrong, but I just read though the Aesthetic again, and I couldn’t find anything that would support your claim.
EDIT: I did find one passage that mentions imagination:
Space then is a necessary representation a priori, which serves for the foundation of all external intuitions. We never can imagine or make representation to ourselves of the non-existence of space, though we may easily enough think that no objects are found in it.
I’ve edited my post accordingly, but my point remains the same. Notice that Kant does not mention the flatness of space, nor is it at all obvious that he’s inferring anything from our inability to imagine the non-existence of space. END EDIT.
You gave Kant’s views about space as an example of someone saying ‘because we can’t imagine it otherwise, the world must be such and such’. Kant never says this. What he says is that the principles of geometry are not derived simply from the analysis of terms, nor are they empirical. Kant is very, very explicit—almost annoyingly repetitive—that he is not talking about the world, but about our perceptive faculties. And if indeed we cannot imagine x, that does seem to me to be a good basis from which to draw some conclusions about our perceptive faculties.
I have no idea what Kant would say about whether or not we can imagine non-Euclidean space (I have no idea myself if we can), but the matter is complicated because ‘imagination’ is a technical term in his philosophy. He thought space was an infinite Euclidean magnitude, but Euclidean geometry was the only game in town at the time.
Anyway he’s not a good example. As I said before, I don’t mean to dispute the point the example was meant to illustrate. I just wanted to point out that this is an incorrect view of Kant’s claims about space. It’s not really very important what he thought about space though.
There’s a difference between “can’t imagine” in a colloquial sense, and actual inability to imagine. There’s also a difference between not being able to think of how something fits into our knowledge about the universe (for instance, not being able to come up with a mechanism or not being able to see how the evidence supports it) and not being able to imagine the thing itself.
There also aren’t as many examples of this in the history of science as you probably think. Most of the examples that come to people’s minds involve scientists versus nonscientists.
I can’t say for certain, of course, that every possible universe must run on math, but I feel safe in claiming that we’ve never imagined a universe, in fiction or through something like religion, which would fail to run on math.
To which you replied that this is a fact about me, not the universe. But I explicitly say that its not a fact about the universe! My evidence for this is the only evidence that could be relevant: my experience with literature, science fiction, talking to people, etc.
Nor is it relevant that science is full of people that say that something has to be true because they can’t imagine the world otherwise. Again, I’m not making a claim about the world, I’m making a claim about the way we have imagined, or now imagine the world to be. I would be very happy to be pointed toward a hypothetical universe that isn’t subject to mathematical analysis and which contains thinking animals.
So before we go on, please tell me what you think I’m claiming? I don’t wish to defend any opinions but my own.
Hen, I told you how I imagine such a universe, and you told me I couldn’t be imagining it! Maybe you could undertake not to gainsay further hypotheses.
I found your suggestion to be implausible for two reasons: first, I don’t think the idea of epistemically significant qualia is defensible, and second, even on the condition that it is, I don’t think the idea of a universe of nothing but a single quale (one having epistemic significance) is defensible. Both of these points would take some time to work out, and it struck me in our last exchange that you had neither the patience nor the good will to do so, at least not with me. But I’d be happy to discuss the matter if you’re interested in hearing what I have to say.
So before we go on, please tell me what you think I’m claiming?
You said:
I just also think it’s a necessary fact.
I’m not sure what you mean by “necessary”, but the most obvious interpretation is that you think it’s necessarily impossible for the world to not be run by math or at least for humans to understand a world that doesn’t.
it’s [probably] impossible for humans to understand a world that [isn’t subject to mathematical analysis].
This is my claim, and here’s the thought: thinking things are natural, physical objects and they necessarily have some internal complexity. Further, thoughts have some basic complexity: I can’t engage in an inference with a single term.
Any universe which would not in principle be subject to mathematical analysis is a universe in which there is no quantity of anything. So it can’t, for example, involve any space or time, no energy or mass, no plurality of bodies, no forces, nothing like that. It admits of no analysis in terms of propositional logic, so Bayes is right out, as is any understanding of causality. This, it seems to me, would preclude the possibility of thought altogether. It may be that the world we live in is actually like that, and all its multiplicity is merely the contribution of our minds, so I won’t venture a claim about the world as such. So far as I know, the fact that worlds admit of mathematical analysis is a fact about thinking things, not worlds.
thinking things are natural, physical objects and they necessarily have some internal complexity. Further, thoughts have some basic complexity: I can’t engage in an inference with a single term.
What do you mean by “complexity”? I realize you have an intuitive idea, but it could very well be that your idea doesn’t make sense when applied to whatever the real universe is.
Any universe which would not in principle be subject to mathematical analysis is a universe in which there is no quantity of anything.
Um, that seems like a stretch. Just because some aspects of the universe are subject to mathematical analysis doesn’t necessarily mean the whole universe is.
What do you mean by “complexity”? I realize you have an intuitive idea, but it could very well be that your idea doesn’t make sense when applied to whatever the real universe is.
For my purposes, complexity is: involving (in the broadest sense of that word) more than one (in the broadest sense of that word) thing (in the broadest sense of that word). And remember, I’m not talking about the real universe, but about the universe as it appears to creatures capable of thinking.
Um, that seems like a stretch. Just because some aspects of the universe are subject to mathematical analysis doesn’t necessarily mean the whole universe is.
I think it does, if you’re granting me that such a world could be distinguished into parts. It doesn’t mean we could have the rich mathematical understanding of laws we do now, but that’s a higher bar than I’m talking about.
You can always “use” analysis; the issue is whether it gives you correct answers. It only gives you correct answers if the universe obeys certain axioms.
Well, this gets us back to the topic that spawned this whole discussion: I’m not sure we can separate the question ‘can we use it’ from ‘does it give us true results’ with something like math. If I’m right that people always have mostly true beliefs, then when we’re talking about the more basic ways of thinking (not Aristotelian dynamics, but counting, arithmetic, etc.) the fact that we can use them is very good evidence that they mostly return true results. So if you’re right that you can always use, say, arithmetic, then I think we should conclude that a universe is always subject to analysis by arithmetic.
You may be totally wrong that you can always use these things, of course. But I think you’re probably right and I can’t make sense of any suggestion to the contrary that I’ve heard yet.
More broadly speaking, anything that is going to be knowable at all is going to be rational and subject to rational understanding.
The idea of rational understanding rests on the fact that you are separated from the object that you are trying to understand and the object itself doesn’t change if you change your understanding of it.
Then there’s the halting problem. And there are a variety of problems that are NP. Those problems can’t be understood by doing a few experiments and then extrapolating general rules from them.
I’m not quite firm with the mathematical terminology, but I think NP problems are not subject to things like the calculus that’s covered in what Wikipedia describes as mathematical analysis.
Heinz von Förster makes the point that children have to be taught that “green” is no valid answer for the question: “What’s 2+2?”. I personally like his German book titled: “Truth is the invention of a liar”. Heinz von Förster headed the started the Biological Computer Laboratory in 1958 and came up with concepts like second-order cybernetics.
As far as fictional worlds go, Terry Pratchett’s Discworld runs on narrativium instead of math.
More broadly speaking, anything that is going to be knowable at all is going to be rational and subject to rational understanding.
That’s true as long as there are no revelations of truth by Gods or other magical processes. In a universe where you can get the truth through magical tarot reading, that assumption is false.
The idea of rational understanding rests on the fact that you are separated from the object that you are trying to understand and the object itself doesn’t change if you change your understanding of it.
That’s not obvious to me. Why do you think this?
That’s true as long as there are no revelations of truth by Gods or other magical processes. In a universe where you can get the truth through magical tarot reading, that assumption is false.
I also don’t understand this inference. Why do you think revelations of truth by Gods or other magical processes, or tarot readings, mean that such a universe would a) be knowable, and b) not be subject to rational analysis?
It might depend a bit on what you mean by rationality. You lose objectivity.
Let’s say I hypnotize someone. I’m in a deep state of rapport. That means my emotional state matters a great deal. If I label something that the person I’m talking to does as unsuccessful, anxiety rises in myself. That anxiety will screw with the result I want to achieve. I’m better off if I blank my mind instead of engaging in rational analysis of what I’m doing.
I also don’t understand this inference. Why do you think revelations of truth by Gods or other magical processes, or tarot readings, mean that such a universe would a) be knowable, and b) not be subject to rational analysis?
Logically A → B is not the same thing as B → A.
I said that it’s possible for there to be knowledge that you can only get through a process besides rational analysis if you allow “magic”.
If I label something that the person I’m talking to does as unsuccessful, anxiety rises in myself. That anxiety will screw with the result I want to achieve.
I’m a little lost. So do you think these observations challenge the idea that in order to understand anyone, we need to assume they’ve got mostly true beliefs, and make mostly rational inferences?
It’s not my phrase, and I don’t particularly like it myself. If you’re asking whether or not qualia are quanta, then I guess the answer is no, but in the sense that the measured is not the measure. It’s a triviality that I can ask you how much pain you feel on a scale of 1-10, and get back a useful answer. I can’t get at what the experience of pain itself is with a number or whatever, but then, I can’t get at what the reality of a block of wood is with a ruler either.
I rather think I do. If you told me you could imagine a euclidian triangle with more or less than 180 internal degrees, I would rightly say ‘No you can’t’. It’s simply not true that we can imagine or conceive of anything we can put into (or appear to put into) words. And I don’t think it’s possible to imagine away things like space and time and keep hold of the idea that you’re imagining a universe, or an experience, or anything like that. Time especially, and so long as I have time, I have quantity.
I don’t know where you are getting your facts from, but it is well known that people’s abilities at visualization vary considerably, so where’s the “we”?
Having studied non-Euclidean geometry, I can easily imagine a triangle whose angles sum to more than 180 (hint: it’s inscribed on the surface of a sphere).
Saying that non-spatial or non-temporal universes aren’t really universes is a No True Scotsman fallacy.
Non-spatial and non-temporal models have been seriously proposed by physicists; perhaps you should talk to them.
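The spherical triangle mentioned above can be made concrete. A quick sketch (my own illustration, not from the thread): by Girard’s theorem, a spherical triangle’s angle sum exceeds π radians by its area divided by R², and the “octant” triangle bounded by the equator and two meridians 90 degrees apart is the classic example.

```python
import math

# Girard's theorem for a spherical triangle on a sphere of radius R:
#   angle sum (radians) = pi + area / R**2
# Illustration: the "octant" triangle covers one eighth of the
# sphere's surface and has three right angles at its corners.

R = 1.0
octant_area = (4 * math.pi * R**2) / 8      # one eighth of the surface
angle_sum = math.degrees(math.pi + octant_area / R**2)

print(angle_sum)  # approximately 270 degrees, well over 180
```

The excess over 180 degrees shrinks as the triangle gets smaller relative to the sphere, which is why small triangles drawn on the Earth look Euclidean.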
It depends on what you mean by “imagine”. I can’t imagine a Euclidian triangle with less than 180 degrees in the sense of having a visual representation in my mind that I could then reproduce on a piece of paper. On the other hand, I can certainly imagine someone holding up a measuring device to a vague figure on a piece of paper and saying “hey, I don’t get 180 degrees when I measure this”.
Of course, you could say that the second one doesn’t count since you’re not “really” imagining a triangle unless you imagine a visual representation, but if you’re going to say that you need to remember that all nontrivial attempts to imagine things don’t include as much detail as the real thing. How are you going to define it so that eliminating some details is okay and eliminating other details isn’t?
(And if you try that, then explain why you can’t imagine a triangle whose angles add up to 180.05 degrees or some other amount that is not 180 but is close enough that you wouldn’t be able to tell the difference in a mental image. And then ask yourself “can I imagine someone writing a proof that a Euclidian triangle’s angles don’t add up to 180 degrees?” without denying that you can imagine people writing proofs at all.)
These are good questions, and I think my general answer is this: in the context of this and similar arguments, being able to imagine something is sometimes taken as evidence that it’s at least a logical possibility. I’m fine with that, but it needs to be imagined in enough detail to capture the logical structure of the relevant possibility. If someone is going to argue, for example, that one can imagine a euclidian triangle with more or less than 180 internal degrees, the imagined state of affairs must have at least as much logical detail as does a euclidian triangle with 180 internal degrees. Will that exclude your ‘vague shape’ example, and probably your ‘proof’ example?
Will that exclude your ‘vague shape’ example, and probably your ‘proof’ example?
It would exclude the vague shape example but I think it fails for the proof example.
Your reasoning suggests that if X is false, it would be impossible for me to imagine someone proving X. I think that is contrary to what most people mean when they say they can imagine something.
It’s not clear what your reasoning implies when X is true. Either:
1) I cannot imagine someone proving X unless I can imagine all the steps in the proof, or
2) I can imagine someone proving X as long as X is true, since having a proof would be a logical possibility as long as X is true.
1) is also contrary to what most people think of as imagining. 2) would mean that it is possible for me to not know whether or not I am imagining something. (I imagine someone proving X and I don’t know if X is true. 2) means that if X is true I’m “really imagining” it and that if X is false, I am not.)
Your reasoning suggests that if X is false, it would be impossible for me to imagine someone proving X. I think that is contrary to what most people mean when they say they can imagine something.
Well, say I argue that it’s impossible to write a story about a bat. It seems like it should be unconvincing for you to say ‘But I can imagine someone writing a story about a bat...see, I’m imagining Tom, who’s just written a story about a bat.’ Instead, you’d need to imagine the story itself. I don’t intend to talk about the nature of the imagination here, only to say that as a rule, showing that something is logically possible by way of imagining it requires that it have enough logical granularity to answer the challenge.
So I don’t doubt that you could imagine someone proving that E-triangles have more than 180 internal degrees, but I am saying that not all imaginings are contenders in an argument about logical possibility. Only those ones which have sufficient logical granularity do.
I would understand “I can imagine...” in such a context to mean that it doesn’t contain flaws that are basic enough to prevent me from coming up with a mental picture or short description. Not that it doesn’t contain any flaws at all. It wouldn’t make sense to have “I can imagine X” mean “there are no flaws in X”—that would make “I can imagine X” equivalent to just asserting X.
The issue isn’t flaws or flawlessness. In my bat example, you could perfectly imagine Tom sitting in an easy chair with a glass of scotch saying to himself, ‘I’m glad I wrote that story about the bat’. But that wouldn’t help. I never said it’s impossible for Tom to sit in a chair and say that, I said that it was impossible to write a story about a bat.
The issue isn’t logical detail simpliciter, but logical detail relative to the purported impossibility. In the triangle case, you have to imagine, not Tom sitting in his chair thinking ‘I’m glad I proved that E-triangles have more than 180 internal degrees’ (no one could deny that that is possible) but rather the figure itself. It can be otherwise as vague and flawed as you like, so long as the relevant bits are there. Very likely, imagining the proof in the relevant way would require producing it.
And you are asserting something, you’re asserting the possibility of something in virtue of the fact that it is in some sense actual. To say that something is logically impossible is to say that it can’t exist anywhere, ever, not even in a fantasy. To imagine up that possibility is to make it sufficiently real to refute the claim of impossibility, but only if you imagine, and thus make real, the precise thing being claimed to be impossible.
Are you sure it is logically impossible to have [spaceless] and timeless universes?
Dear me no! I have no idea if such a universe is impossible. I’m not even terribly confident that this universe has space or time.
I am pretty sure that space and time (or something like them) are a necessary condition on experience, however. Maybe they’re just in our heads, but it’s nevertheless necessary that they, or something like them, be in our heads. Maybe some other kind of creature thinks in terms of space, time, and fleegle, or just fleegle, time, and blop, or just blop and nizz. But I’m confident that such things will all have some common features, namely being something like a context for a multiplicity. I mean in the way time is a context for seeing this, followed by that, and space is a context for seeing this and that in some relation, etc.
Without something like this, it seems to me experience would always (except there’s no time) only be of one (except an idea of number would never come up) thing, in which case it wouldn’t be rich enough to be an experience. Or experience would be of nothing, but that’s the same problem.
So there might be universes of nothing but qualia (or, really, quale) but it wouldn’t be a universe in which there are any experiencing or thinking things. And if that’s so, the whole business is a bit incoherent, since we need an experiencer to have a quale.
we’ve never imagined a universe, in fiction or through something like religion, which would fail to run on math.
That depends on your definition of “math”.
For example, consider a simulated world where you control the code. Can you make it so that 2+2 in that simulation is sometimes 4, sometimes 15, and sometimes green? I don’t see why not.
For example, consider a simulated world where you control the code. Can you make it so that 2+2 in that simulation is sometimes 4, sometimes 15, and sometimes green? I don’t see why not.
I think you’re conflating the physical operation that we correlate with addition and the mathematical structure. ‘Green’ I’m not seeing, but I could write a computer program modeling a universe in which placing a pair of stones in a container that previously held a pair of stones does not always lead to that container holding a quadruplet of stones. In such a universe, the mathematical structure we call ‘addition’ would not be useful, but that doesn’t say that the formalized reasoning structure we call ‘math’ would not exist, or could not be employed.
(In fact, if it’s a computer program, it is obvious that its nature is susceptible to mathematical analysis.)
For example, consider a simulated world where you control the code. Can you make it so that 2+2 in that simulation is sometimes 4, sometimes 15, and sometimes green?
I guess I could make it appear that way, sure, though I don’t know if I could then recognize anything in my simulation as thinking or doing math. But in any case, that’s not a universe in which 2+2=green, it’s a universe in which it appears to. Maybe I’m just not being imaginative enough, and so you may need to help me flesh out the hypothetical.
But it sounds to me like you’re talking about the manipulation of signs, not about numbers themselves. We could make the set of signs ‘2+2=’ end any way we like, but that doesn’t mean we’re talking about numbers. I donno, I think you’re being too cryptic or technical or something for me, I don’t really understand the point you’re trying to make.
Math is what happens when you take your original working predictive toolkit (like counting sheep) and let it run on human wetware disconnected from its original goal of having to predict observables. Thus some form of math would arise in any somewhat-predictable universe evolving a calculational substrate.
Math is what happens when you take your original working predictive toolkit (like counting sheep) and let it run on human wetware disconnected from its original goal of having to predict observables.
That’s an interesting problem. Do we have math because we make abstractions about the multitude of things around us, or must we already have some idea of math in the abstract just to recognize the multitude as a multitude? But I think I agree with the gist of what you’re saying.
Just like I think of language as meta-grunting, I think of math as meta-counting. Some animals can count, and possibly add and subtract a bit, but abstracting it away from the application for the fun of it is what humans do.
Mixing truth and rationality is a failure mode. To know whether someone’s statement is true, you have to understand it, and to understand it, you have to assume the speaker’s rationality.
It’s also a failure mode to attach “irrational” directly to beliefs. A belief is rational if it can be supported by an argument, and you don’t carry the space of all possible arguments around in your head.
(1) a belief is rational if it can be supported by a sound argument
(2) a belief is rational if it can be supported by a valid argument with probable premises
(3) a belief is rational if it can be supported by an inductively strong argument with plausible premises
(4) a belief is rational if it can be supported by an argument that is better than any counterarguments the agent knows of
etc...
Although personally, I think it is more helpful to think of rationality as having to do with how beliefs cohere with other beliefs and about how beliefs change when new information comes in than about any particular belief taken in isolation.
I can’t but note that the word “reality” is conspicuously absent here...
Arguments of type (1) necessarily track reality (it is pretty much defined this way), (2) may or may not depending on the quality of the premises, (3) often does, and sometimes you just can’t do any better than (4) with available information and corrupted hardware.
Just because I didn’t use the word “reality” doesn’t really mean much.
A definition of “rational argument” that explicitly referred to “reality” would be a lot less useful, since checking which arguments are rational is one of the steps in figuring out what’s real.
checking which arguments are rational is one of the steps in figuring out what’s real
I am not sure this is (necessarily) the case, can you unroll?
Generally speaking, arguments live in the map and, in particular, in high-level maps which involve abstract concepts and reasoning. If I check the reality of the stone by kicking it and seeing if my toe hurts, no arguments are involved. And from the other side, classical logic is very much part of “rational arguments” and yet needs not correspond to reality.
If I check the reality of the stone by kicking it and seeing if my toe hurts, no arguments are involved.
That tends to work less well for things that one can’t directly observe, e.g., how old is the universe, or things where there is confounding noise, e.g., does this drug help.
If you would more reliably understand what people mean by specifically treating it as the product of a rational and intelligent person, then executing that hack should lead to your observing a much higher rate of rationality and intelligence in discussions than you would previously have predicted. If the thesis is true, many remarks which, using your earlier methodology, you would have dismissed as the product of diseased reasoning will prove to be sound upon further inquiry.
If, however, you execute the hack for a few months and discover no change in the rate at which you discover apparently-wrong remarks to admit to sound interpretations, then TheAncientGeek’s thesis would fail the test.
True, although being told less often that you are missing the point isn’t, in and of itself, all that valuable; the value is in getting the point of those who otherwise would have given up on you with a remark along those lines.
(Note that I say “less often”; I was recently told that this criticism of Tom Godwin’s “The Cold Equations”, which I had invoked in a discussion of “The Ones Who Walk Away From Omelas”, missed the point of the story—to which I replied along the lines of, “I get the point, but I don’t agree with it.”)
That looks like a test of my personal ability to form correct first-impression estimates.
Also “will prove to be sound upon further inquiry” is an iffy part. In practice what usually happens is that statement X turns out to be technically true only under conditions A, B, and C, however in practice there is the effect Y which counterbalances X and the implementation of X is impractical for a variety of reasons, anyway. So, um, was statement X “sound”? X-/
That looks like a test of my personal ability to form correct first-impression estimates.
Precisely.
Also “will prove to be sound upon further inquiry” is an iffy part. In practice what usually happens is that statement X turns out to be technically true only under conditions A, B, and C, however in practice there is the effect Y which counterbalances X and the implementation of X is impractical for a variety of reasons, anyway. So, um, was statement X “sound”? X-/
Ah, I see. “Sound” is not the right word for what I mean; what I would expect to occur if the thesis is correct is that statements will prove to be apposite or relevant or useful—that is to say, valuable contributions in the context within which they were uttered. In the case of X, this would hold if the person proposing X believed that those conditions applied in the case described.
A concrete example would be someone who said, “you can divide by zero here” in reaction to someone being confused by a definition of the derivative of a function in terms of the limit of a ratio.
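For context, the definition presumably at issue (my reconstruction; the thread does not spell it out) is the standard difference-quotient limit, where h tends to 0 but never equals it, which is what makes the divide-by-zero remark apposite:

```latex
f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
```

The quotient is only ever evaluated at nonzero h; the limit describes its behavior as h approaches zero, so no division by zero actually occurs.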
Because you are not engaged in establishing facts about how smart someone is, you are instead trying to establish facts about what they mean by what they say.
I see that my conception of the “principle of charity” is either non-trivial to articulate or so inchoate as to be substantially altered by my attempts to do so. Bearing that in mind:
The principle of charity isn’t a propositional thesis, it’s a procedural rule, like the presumption of innocence. It exists because the cost of false positives is high relative to the cost of reducing false positives: the shortest route towards correctness in many cases is the instruction or argumentation of others, many of whom would appear, upon initial contact, to be stupid, mindkilled, dishonest, ignorant, or otherwise unreliable sources upon the subject in question. The behavior proposed by the principle of charity is intended to result in your being able to reliably distinguish between failures of communication and failures of reasoning.
My remark took the above as a basis and proposed behavior to execute in cases where the initial remark strongly suggests that the speaker is thinking irrationally (e.g. an assertion that the modern evolutionary synthesis is grossly incorrect) and your estimate of the time required to evaluate the actual state of the speaker’s reasoning processes was more than you are willing to spend. In such a case, what the principle of charity implies are two things:
You should consider the nuttiness of the speaker as being an open question with a large prior probability, akin to your belief prior to lifting a dice cup that you have not rolled double-sixes, rather than a closed question with a large posterior probability, akin to your belief that the modern evolutionary synthesis is largely correct.
You should withdraw from the conversation in such a fashion as to emphasize that you are in general willing to put forth the effort to understand what they are saying, but that the moment is not opportune.
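The open-versus-closed distinction above can be sketched with a single Bayes update (all numbers here are illustrative assumptions of mine, not from the thread): a large prior, unlike a settled posterior, is still meant to move readily under evidence.

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """One Bayes update for a binary hypothesis."""
    num = p_evidence_if_true * prior
    return num / (num + p_evidence_if_false * (1 - prior))

# Hypothesis: "the speaker is reasoning irrationally." Dice-cup-style
# prior: 35/36 (about 0.972). Evidence: one clearly cogent remark,
# which (illustrative numbers) an irrational speaker produces with
# probability 0.1 and a clear thinker with probability 1.0.
p = posterior(prior=35 / 36, p_evidence_if_true=0.1, p_evidence_if_false=1.0)
print(round(p, 3))  # 0.778 -- a large drop after a single remark
```

The point of the analogy is that a prior of 35/36 is held loosely, the way you hold “I have not rolled double-sixes” before lifting the cup, whereas a posterior of the same magnitude that has already absorbed the evidence would barely budge.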
Minor tyop fix T1503-4.
I don’t see it as self-evident. Or, more precisely, in some situations it is, and in other situations it is not.
You are saying (a bit later in your post) that the principle of charity implies two things. The second one is a pure politeness rule and it doesn’t seem to me that the fashion of withdrawing from a conversation will help me “reliably distinguish” anything.
As to the first point, you are basically saying I should ignore evidence (or, rather, shift the evidence into the prior and refuse to estimate the posterior). That doesn’t help me reliably distinguish anything either.
In fact, I don’t see why there should be a particular exception here (“a procedural rule”) to the bog-standard practice of updating on evidence. If my updating process is incorrect, I should fix it and not paper it over with special rules for seemingly-stupid people. If it is reasonably OK, I should just go ahead and update. That will not necessarily result in either a “closed question” or a “large posterior”—it all depends on the particulars.
I’ll say it again: POC doesn’t mean “believe everyone is sane and intelligent”, it means “treat everyone’s comments as though they were made by a sane, intelligent person”.
I.e., it’s a defeasible assumption. If you fail, you have evidence that it was a dumb comment. If you succeed, you have evidence it wasn’t. Either way, you have evidence, and you are not sitting in an echo chamber where your beliefs about people’s dumbness go forever untested, because you reject out of hand anything that sounds superficially dumb, or was made by someone you have labelled, however unjustly, as dumb.
That’s fine. I have limited information processing capacity—my opportunity costs for testing other people’s dumbness are fairly high.
In the information age I don’t see how anyone can operate without the “this is too stupid to waste time on” pre-filter.
The PoC tends to be advised in the context of philosophy, where there is a background assumption of infinite amounts of time to consider things. The resource-constrained version would be to interpret comments charitably once you have, for whatever reason, got into a discussion... with the corollary of reserving some space for “I might be wrong” where you haven’t had the resources to test the hypothesis.
LOL. While ars may be longa, vita is certainly brevis. This is a silly assumption, better suited for theology, perhaps—it, at least, promises infinite time. :-)
If I were living in the English countryside around the XVIII century I might have had a different opinion on the matter, but I do not.
It’s not a binary either-or situation. I am willing to interpret comments charitably according to my (updateable) prior of how knowledgeable, competent, and reasonable the writer is. In some situations I would stop and ponder, in others I would roll my eyes and move on.
Users report that charitable interpretation gives you more evidence for updating than you would have otherwise.
Are you already optimal? How do you know?
As I operationalize it, that definition effectively waters down the POC to a degree I suspect most POC proponents would be unhappy with.
Sane, intelligent people occasionally say wrong things; in fact, because of selection effects, it might even be that most of the wrong things I see & hear in real life come from sane, intelligent people. So even if I were to decide that someone who’s just made a wrong-sounding assertion were sane & intelligent, that wouldn’t lead me to treat the assertion substantially more charitably than I otherwise would (and I suspect that the kind of person who likes the(ir conception of the) POC might well say I were being “uncharitable”).
Edit: I changed “To my mind” to “As I operationalize it”. Also, I guess a shorter form of this comment would be: operationalized like that, I think I effectively am applying the POC already, but it doesn’t feel like it from the inside, and I doubt it looks like it from the outside.
You have uncharitably interpreted my formulation to mean “treat everyone’s comments as though they were made by a sane intelligent person who may or may not have been having an off day”. What kind of guideline is that?
The charitable version would have been “treat everyone’s comments as though they were made by someone sane and intelligent at the time”.
(I’m giving myself half a point for anticipating that someone might reckon I was being uncharitable.)
A realistic one.
The thing is, that version actually sounds less charitable to me than my interpretation. Why? Well, I see two reasonable ways to interpret your latest formulation.
The first is to interpret “sane and intelligent” as I normally would, as a property of the person, in which case I don’t understand how appending “at the time” makes a meaningful difference. My earlier point that sane, intelligent people say wrong things still applies. Whispering in my ear, “no, seriously, that person who just said the dumb-sounding thing is sane and intelligent right now” is just going to make me say, “right, I’m not denying that; as I said, sanity & intelligence aren’t inconsistent with saying something dumb”.
The second is to insist that “at the time” really is doing some semantic work here, indicating that I need to interpret “sane and intelligent” differently. But what alternative interpretation makes sense in this context? The obvious alternative is that “at the time” is drawing my attention to whatever wrong-sounding comment was just made. But then “sane and intelligent” is really just a camouflaged assertion of the comment’s worthiness, rather than the claimant’s, which reduces this formulation of the POC to “treat everyone’s comments as though the comments are cogent”.
The first interpretation is surely not your intended one because it’s equivalent to one you’ve ruled out. So presumably I have to go with the second interpretation, but it strikes me as transparently uncharitable, because it sounds like a straw version of the POC (“oh, so I’m supposed to treat all comments as cogent, even if they sound idiotic?”).
The third alternative, of course, is that I’m overlooking some third sensible interpretation of your latest formulation, but I don’t see what it is; your comment’s too pithy to point me in the right direction.
Yep.
You have assumed that cannot be the correct interpretation of the PoC, without saying why. In light of your other comments, it could well be that you are assuming that the PoC can only be true by correspondence to reality, or false by lack of correspondence. But norms, guidelines, heuristics, and advice lie on an orthogonal axis to true/false: they are guides to action, not passive reflections. Their equivalent of the true/false axis is the Works/Does Not Work axis. So would adoption of the PoC work as a way of understanding people, and calibrating your confidence levels? That is the question.
OK, but that’s not an adequate basis for recommending a given norm/guideline/heuristic. One has to at least sketch an answer to the question, drawing on evidence and/or argument (as RobinZ sought to).
Well, because it’s hard for me to believe you really believe that interpretation and understand it in the same way I would naturally operationalize it: namely, noticing and throwing away any initial suspicion I have that a comment’s wrong, and then forcing myself to pretend the comment must be correct in some obscure way.
As soon as I imagine applying that procedure to a concrete case, I cringe at how patently silly & unhelpful it seems. Here’s a recent-ish, specific example of me expressing disagreement with a statement I immediately suspected was incorrect.
What specifically would I have done if I’d treated the seemingly patently wrong comment as cogent instead? Read the comment, thought “that can’t be right”, then shaken my head and decided, “no, let’s say that is right”, and then...? Upvoted the comment? Trusted but verified (i.e. not actually treated the comment as cogent)? Replied with “I presume this comment is correct, great job”? Surely these are not courses of action you mean to recommend (the first & third because they actively support misinformation, the second because I expect you’d find it insufficiently charitable). Surely I am being uncharitable in operationalizing your recommendation this way...even though that does seem to me the most literal, straightforward operationalization open to me. Surely I misunderstand you. That’s why I assumed “that cannot be the correct interpretation” of your POC.
If I may step in at this point; “cogent” does not mean “true”. The principle of charity (as I understand it) merely recommends treating any commenter as reasonably sane and intelligent. This does not mean he can’t be wrong—he may be misinformed, he may have made a minor error in reasoning, he may simply not know as much about the subject as you do. Alternatively, you may be misinformed, or have made a minor error in reasoning, or not know as much about the subject as the other commenter...
So the correct course of action then, in my opinion, is to find the source of error and to be polite about it. The example post you linked to was a great example—you provided statistics, backed them up, and linked to your sources. You weren’t rude about it, you simply stated facts. As far as I could see, you treated RomeoStevens as sane, intelligent, and simply lacking in certain pieces of pertinent historical knowledge—which you have now provided.
(As to what RomeoStevens said—it was cogent. That is to say, it was pertinent and relevant to the conversation at the time. That it was wrong does not change the fact that it was cogent; if it had been right it would have been a worthwhile point to make.)
Yes, and were I asked to give synonyms for “cogent”, I’d probably say “compelling” or “convincing” [edit: rather than “true”]. But an empirical claim is only compelling or convincing (and hence may only be cogent) if I have grounds for believing it very likely true. Hence “treat all comments as cogent, even if they sound idiotic” translates [edit: for empirical comments, at least] to “treat all comments as if very likely true, even if they sound idiotic”.
Now you mention the issue of relevance, I think that, yeah, I agree that relevance is part of the definition of “cogent”, but I also reckon that relevance is only a necessary condition for cogency, not a sufficient one. And so...
...I have to push back here. While pertinent, the comment was not only wrong but (to me) obviously very likely wrong, and RomeoStevens gave no evidence for it. So I found it unreasonable, unconvincing, and unpersuasive — the opposite of dictionary definitions of “cogent”. Pertinence & relevance are only a subset of cogency.
That’s why I wrote that that version of the POC strikes me as watered down; someone being “reasonably sane and intelligent” is totally consistent with their just having made a trivial blunder, and is (in my experience) only weak evidence that they haven’t just made a trivial blunder, so “treat commenters as reasonably sane and intelligent” dissolves into “treat commenters pretty much as I’d treat anyone”.
Then “cogent” was probably the wrong word to use.
I’d need a word that means pertinent, relevant, and believed to have been most likely true (or at least useful to say) by the person who said it; but not necessarily actually true.
Okay, I appear to have been using a different definition (see definition two).
I think at this point, so as not to get stuck on semantics, we should probably taboo the word ‘cogent’.
(Having said that, I do agree anyone with access to the statistics you quoted would most likely find RomeoStevens’s comments unreasonable, unconvincing and unpersuasive.)
Then you may very well be effectively applying the principle already. Looking at your reply to RomeoStevens supports this assertion.
TheAncientGeek assented to that choice of word, so I stuck with it. His conception of the POC might well be different from yours and everyone else’s (which is a reason I’m trying to pin down precisely what TheAncientGeek means).
Fair enough, I was checking different dictionaries (and I’ve hitherto never noticed other people using “cogent” for “pertinent”).
Maybe, though I’m confused here by TheAncientGeek saying in one breath that I applied the POC to RomeoStevens, but then agreeing (“That’s exactly what I mean.”) in the next breath with a definition of the POC that implies I didn’t apply the POC to RomeoStevens.
I think that you and I are almost entirely in agreement, then. (Not sure about TheAncientGeek).
I think you’re dealing with double-illusion-of-transparency issues here. He gave you a definition (“treat everyone’s comments as though they were made by someone sane and intelligent at the time”) by which he meant some very specific concept which he best approximated by that phrase (call this Concept A). You then considered this phrase, and mapped it to a similar-but-not-the-same concept (Concept B) which you defined and tried to point out a shortcoming in (“namely, noticing and throwing away any initial suspicion I have that a comment’s wrong, and then forcing myself to pretend the comment must be correct in some obscure way.”).
Now, TheAncientGeek is looking at your words (describing Concept B) and reading into them the very similar Concept A; where your post in response to RomeoStevens satisfies Concept A but not Concept B.
Nailing down the difference between A and B will be extremely tricky and will probably require both of you to describe your concepts in different words several times. (The English language is a remarkably lossy means of communication).
Your diagnosis sounds all too likely. I’d hoped to minimize the risk of this kind of thing by concretizing and focusing on a specific, publicly-observable example, but that might not have helped.
Yes, that was an example of PoC, because satt assumed RomeoStevens had failed to look up the figures, rather than insanely believing that 120,000ish < 500ish.
Yes, but that’s beside the original point. What you call a realistic guideline doesn’t work as a guideline at all, and therefore isn’t a charitable interpretation of the PoC.
Whether the PoC works at what it is supposed to do is a question that can be answered, but it is a separate question.
That’s exactly what I mean.
Cogent doesn’t mean right. You actually succeeded in treating it as wrong for sane reasons, i.e. a failure to check data.
You brought it up!
I continue to think that the version I called realistic is no less workable than your version.
Again, it’s a question you introduced. (And labelled “the question”.) But I’m content to put it aside.
But surely it isn’t. Just 8 minutes earlier you wrote that a case where I did the opposite was an “example of PoC”.
See my response to CCC.
But not one that tells you unambiguously what to do, i.e. not a usable guideline at all.
There’s a lot of complaint about this heuristic along the lines that it doesn’t guarantee perfect results... i.e., it’s a heuristic.
And now there is the complaint that it’s not realistic, that it doesn’t reflect reality.
Ideal rationalists can stop reading now.
Everybody else: you’re biased. Specifically, overconfident. Overconfidence makes people overestimate their ability to understand what others are saying, and underestimate the rationality of others. The PoC is a heuristic which corrects for those biases. As a heuristic, an approximate method, it is based on the principle that overshooting the amount of sense people are making is better than undershooting. Overshooting would be a problem if there were some goldilocks alternative, some way of getting things exactly right. There isn’t. The voice in your head that tells you you are doing just fine is the voice of your bias.
I don’t see how this applies any more to the “may or may not have been having an off day” version than it does to your original. They’re about as vague as each other.
Understood. But it’s not obvious to me that “the principle” is correct, nor is it obvious that a sufficiently strong POC is better than my more usual approach of expressing disagreement and/or asking sceptical questions (if I care enough to respond in the first place).
Mine implies a heuristic of “make repeated attempts at re-interpreting the comment using different background assumptions”. What does yours imply?
As I have explained, it provides its own evidence.
Neither of those is much good if interpreting someone who died 100 years ago.
I don’t see how “treat everyone’s comments as though they were made by a sane, intelligent person” entails that without extra background assumptions. And I expect that once those extra assumptions are spelled out, the “may or may not have been having an off day” version will imply the same action(s) as your original version.
Well, when I’ve disagreed with people in discussions, my own experience has been that behaving according to my baseline impression of how much sense they’re making gets me closer to understanding than consciously inflating my impression of how much sense they’re making.
A fair point, but one of minimal practical import. Almost all of the disagreements which confront me in my life are disagreements with live people.
I don’t like this rule. My approach is simpler: attempt to understand what the person means. This does not require me to treat him as sane or intelligent.
How do you know how many mistakes you are or aren’t making?
The PoC is a way of breaking down “understand what the other person says” into smaller steps, not something entirely different. Treating your own mental processes as a black box that always delivers the right answer is a great way to stay in the grip of bias.
The prior comment leads directly into this one: upon what grounds do I assert that an inexpensive test exists to change my beliefs about the rationality of an unfamiliar discussant? I realize that it is not true in the general case that the plural of anecdote is data, and much of the following lacks citations, but:
Many people raised to believe that evolution is false because it contradicts their religion change their minds in their first college biology class. (I can’t attest to this from personal experience—this is something I’ve seen frequently reported or alluded to via blogs like Slacktivist.)
An intelligent, well-meaning, LessWrongian fellow was (hopefully-)almost driven out of my local Less Wrong meetup in no small part because a number of prominent members accused him of (essentially) being a troll. In the course of a few hours conversation between myself and a couple others focused on figuring out what he actually meant, I was able to determine that (a) he misunderstood the subject of conversation he had entered, (b) he was unskilled at elaborating in a way that clarified his meaning when confusion occurred, and (c) he was an intelligent, well-meaning, LessWrongian fellow whose participation in future meetups I would value.
I am unable to provide the details of this particular example (it was relayed to me in confidence), but an acquaintance of mine was a member of a group which was attempting to resolve an elementary technical challenge—roughly the equivalent of setting up a target-shooting range with a safe backstop in terms of training required. A proposal was made that was obviously unsatisfactory—the equivalent of proposing that the targets be laid on the ground and everyone shoot straight down from a second-story window—and my acquaintance’s objection to it on common-sense grounds was treated with a response equivalent to, “You’re Japanese, what would you know about firearms?” (In point of fact, while no metaphorical gunsmith, my acquaintance’s knowledge was easily sufficient to teach a Boy Scout merit badge class.)
In my first experience on what was then known as the Internet Infidels Discussion Board, my propensity to ask “what do you mean by x” sufficed to transform a frustrated, impatient discussant into a cheerful, enthusiastic one—and simultaneously demonstrate that said discussant’s arguments were worthless in a way which made it easy to close the argument.
In other words, I do not often see cases in which performing the tests implied by the principle of charity—e.g. “are you saying [paraphrase]?”—is wasteful, and I frequently see cases where failing to do so has been.
What you are talking about doesn’t fall under the principle of charity (in my interpretation of it). It falls under the very general rubric of “don’t be stupid yourself”.
In particular, considering that the speaker expresses his view within a framework which is different from your default framework is not an application of the principle of charity—it’s an application of the principle “don’t be stupid, of course people talk within their frameworks, not within your framework”.
I might be arguing for something different than your principle of charity. What I am arguing for—and I realize now that I haven’t actually explained a procedure, just motivations for one—is along the following lines:
When somebody says something prima facie wrong, there are several possibilities, regarding both their intended meaning:
They may have meant exactly what you heard.
They may have meant something else, but worded it poorly.
They may have been engaging in some rhetorical maneuver or joke.
They may have been deceiving themselves.
They may have been intentionally trolling.
They may have been lying.
...and your ability to infer such:
Their remark may resemble some reasonable assertion, worded badly.
Their remark may be explicable as ironic or joking in some sense.
Their remark may conform to some plausible bias of reasoning.
Their remark may seem like a lie they would find useful.*
Their remark may represent an attempt to irritate you for their own pleasure.*
Their remark may simply be stupid.
Their remark may allow more than one of the above interpretations.
What my interpretation of the principle of charity suggests as an elementary course of action in this situation is, with an appropriate degree of polite confusion, to ask for clarification or elaboration, and to accompany this request with paraphrases of the most likely interpretations you can identify of their remarks excluding the ones I marked with asterisks.
Depending on their actual intent, this has a good chance of making them:
Elucidate their reasoning behind the unbelievable remark (or admit to being unable to do so);
Correct their misstatement (or your misinterpretation—the difference is irrelevant);
Admit to their failed humor;
Admit to their being unable to support their assertion, back off from it, or sputter incoherently;
Grow impatient at your failure to rise to their goading and give up; or
Back off from (or admit to, or be proven guilty of) their now-unsupportable deception.
In the first three or four cases, you have managed to advance the conversation with a well-meaning discussant without insult; in the latter two or three, you have thwarted the goals of an ill-intentioned one—especially, in the last case, because you haven’t allowed them the option of distracting everyone from your refutations by claiming you insulted them. (Even if they do so claim, it will be obvious that they have no just cause to be offended.)
I say this falls under the principle of charity because it involves (a) granting them, at least rhetorically, the best possible motives, and (b) giving them enough of your time and attention to seek engagement with their meaning, not just a lazy gloss of their words.
Minor formatting edit.
Belatedly: I recently discovered that in 2011 I posted a link to an essay on debating charitably by pdf23ds a.k.a. Chris Capel—this is MichaelBishop’s summary and this is a repost of the text (the original site went down some time ago). I recall endorsing Capel’s essay unreservedly last time I read it; I would be glad to discuss the essay, my prior comments, or any differences that exist between the two if you wish.
A small addendum, that I realized I omitted from my prior arguments in favor of the principle of charity:
Because I make a habit of asking for clarification when I don’t understand, offering clarification when not understood, and preferring “I don’t agree with your assertion” to “you are being stupid”, people are happier to talk to me. Among the costs of always responding to what people say instead of your best understanding of what they mean—especially if you are quick to dismiss people when their statements are flawed—is that talking to you becomes costly: I have to word my statements precisely to ensure that I have not said something I do not mean, meant something I did not say, or made claims you will demand support for without support. If, on the other hand, I am confident that you will gladly allow me to correct my errors of presentation, I can simply speak, and fix anything I say wrong as it comes up.
Which, in turn, means that I can learn from a lot of people who would not want to speak to me otherwise.
Again: I completely agree that you should make your best effort to understand what other people actually mean. I do not call this charity—it sounds like SOP and “just don’t be an idiot yourself” to me.
You’re right: it’s not self-evident. I’ll go ahead and post a followup comment discussing what sort of evidential support the assertion has.
My usage of the terms “prior” and “posterior” was obviously mistaken. What I wanted to communicate with those terms was communicated by the analogies to the dice cup and to the scientific theory: it’s perfectly possible for two hypotheses to have the same present probability but different expectations of future change to that probability. I have high confidence that an inexpensive test—lifting the dice cup—will change my beliefs about the value of the die roll by many orders of magnitude, and low confidence that any comparable test exists to affect my confidence regarding the scientific theory.
I think you are talking about what in local parlance is called a “weak prior” vs a “strong prior”. Bayesian updating involves assigning relative importance to the prior and to the evidence. A weak prior is easily changed by even not very significant evidence. On the other hand, it takes a lot of solid evidence to move a strong prior.
In this terminology, your pre-roll estimation of the probability of double sixes is a weak prior—the evidence of an actual roll will totally overwhelm it. But your estimation of the correctness of the modern evolutionary theory is a strong prior—it will take much convincing evidence to persuade you that the theory is not correct after all.
Of course, the posterior of a previous update becomes the prior of the next update.
Using this language, then, you are saying that prima facie evidence of someone’s stupidity should be a minor update to the strong prior that she is actually a smart, reasonable, and coherent human being.
And I don’t see why this should be so.
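The weak-vs-strong prior distinction can be made concrete with a Beta-Binomial model. This is purely my own illustration, not anything from the thread: two priors with the same mean but very different strengths (pseudo-count totals), updated on identical evidence.

```python
from fractions import Fraction

def posterior_mean(a, b, successes, failures):
    # Beta(a, b) prior on a binary proposition's rate; after observing
    # the evidence, the posterior is Beta(a + successes, b + failures),
    # whose mean is (a + successes) / (a + b + successes + failures).
    return Fraction(a + successes, a + b + successes + failures)

# Same prior mean (1/2), very different strengths:
weak = posterior_mean(1, 1, 8, 2)        # weak prior: pseudo-count of 2
strong = posterior_mean(500, 500, 8, 2)  # strong prior: pseudo-count of 1000

print(float(weak))    # 0.75   -- the same evidence moved it a long way
print(float(strong))  # ~0.503 -- barely moved at all
```

This mirrors the dice-cup analogy: both beliefs can sit at the same probability right now, while differing enormously in how much a cheap test is expected to move them.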
Oh, dear—that’s not what I meant at all. I meant that—absent a strong prior—the utterance of a prima facie absurdity should not create a strong prior that the speaker is stupid, unreasonable, or incoherent. It’s entirely possible that ten minutes of conversation will suffice to make a strong prior out of this weaker one—there’s someone arguing for dualism on a webcomic forum I (in)frequent along the same lines as Chalmers’ “hard problem of consciousness”, and it took less than ten posts to establish pretty confidently that the same refutations would apply—but as the history of DIPS (defense-independent pitching statistics) shows, it’s entirely possible for an idea to be as correct as “the earth is a sphere, not a plane” and nevertheless be taken as prima facie absurd.
(As the metaphor implies, DIPS is not quite correct, but it would be more accurate to describe its successors as “fixing DIPS” than as “showing that DIPS was completely wrongheaded”.)
Oh, I agree with that.
What I am saying is that evidence of stupidity should lead you to raise your estimates of the probability that the speaker is stupid. The principle of charity should not prevent that from happening. Of course evidence of stupidity should not make you close the case, declare someone irretrievably stupid, and stop considering any further evidence.
As an aside, I treat how a person argues as a much better indicator of stupidity than what he argues. YMMV, of course.
...in the context during which they exhibited the behavior which generated said evidence, of course. In broader contexts, or other contexts? To a much lesser extent, and not (usually) strongly in the strong-prior sense, but again, yes. That you should always be capable of considering further evidence is—I am glad to say—so universally accepted a proposition in this forum that I do not bother to enunciate it, but I take no issue with drawing conclusions from a sufficient body of evidence.
Come to think, you might be amused by this fictional dialogue about a mendacious former politician, illustrating the ridiculousness of conflating “never assume that someone is arguing in bad faith” and “never assert that someone is arguing in bad faith”. (The author also posted a sequel, if you enjoy the first.)
I’m afraid that I would have about as much luck barking like a duck as enunciating how I evaluate the intelligence (or reasonableness, or honesty, or...) of those I converse with. YMMV, indeed.
People tend to update too much in these circumstances: Fundamental attribution error
The fundamental attribution error is about underestimating the importance of external drivers (the particular situation, random chance, etc.) and overestimating the importance of internal factors (personality, beliefs, etc.) as an explanation for observed actions.
If a person in a discussion is spewing nonsense, it is rare that external factors are making her do it (other than a variety of mind-altering chemicals). The indicators of stupidity are NOT what position a person argues or how much knowledge about the subject she has—it’s how she argues. And an inability, e.g., to follow basic logic is hard to attribute to external factors.
This discussion has got badly derailed. You are taking it that there is some robust fact about someone’s lack of rationality or intelligence which may or may not be explained by internal or external factors.
The point is that you cannot make a reliable judgement about someone’s rationality or intelligence unless you have understood what they are saying, and you cannot reliably understand what they are saying unless you treat it as if it were the product of a rational and intelligent person. You can go to “stupid” when all attempts have failed, but not before.
I disagree, I don’t think this is true.
I think it’s true, on roughly these grounds: taking yourself to understand what someone is saying entails thinking that almost all of their beliefs (I mean ‘belief’ in the broad sense, so as to include my beliefs about the colors of objects in the room) are true. The reason is that unless you assume almost all of a person’s (relevant) beliefs are true, the possibility space for judgements about what they mean gets very big, very fast. So if ‘generally understanding what someone is telling you’ means having a fairly limited possibility space, you only get this on the assumption that the person talking to you has mostly true beliefs. This, of course, doesn’t mean they have to be rational in the LW sense, or even very intelligent. The most stupid and irrational (in the LW sense) of us still have mostly true beliefs.
I guess the trick is to imagine what it would be to talk to someone who you thought had on the whole false beliefs. Suppose they said ‘pass me the hammer’. What do you think they meant by that? Assuming they have mostly or all false beliefs relevant to the utterance, they don’t know what a hammer is or what ‘passing’ involves. They don’t know anything about what’s in the room, or who you are, or what you are, or even if they took themselves to be talking to you, or talking at all. The possibility space for what they took themselves to be saying is too large to manage, much larger than, for example, the possibility space including all and only every utterance and thought that’s ever been had by anyone. We can say things like ‘they may have thought they were talking about cats or black holes or triangles’ but even that assumes vastly more truth and reason in the person that we’ve assumed we can anticipate.
Generally speaking, understanding what a person means implies reconstructing their framework of meaning and reference that exists in their mind as the context to what they said.
Reconstructing such a framework does NOT require that you consider it (or the whole person) sane or rational.
Well, there are two questions here: 1) is it in principle necessary to assume your interlocutors are sane and rational, and 2) is it as a matter of practical necessity a fact that we always do assume our interlocutors are sane and rational. I’m not sure about the first one, but I am pretty sure about the second: the possibility space for reconstructing the meaning of someone speaking to you is only manageable if you assume that they’re broadly sane, rational, and have mostly true beliefs. I’d be interested to know which of these you’re arguing about.
Also, we should probably taboo ‘sane’ and ‘rational’. People around here have a tendency to use these words in an exaggerated way to mean that someone has a kind of specific training in probability theory, statistics, biases, etc. Obviously people who have none of these things, like people living thousands of years ago, were sane and rational in the conventional sense of these terms, and they had mostly true beliefs even by any standard we would apply today.
The answers to your questions are no and no.
I don’t think so. Two counter-examples:
I can discuss fine points of theology with someone without believing in God. For example, I can understand the meaning of the phrase “Jesus’ self-sacrifice washes away the original sin” without accepting that Christianity is “mostly true” or “rational”.
Consider a psychotherapist talking to a patient, let’s say a delusional one. Understanding the delusion does not require the psychotherapist to believe that the patient is sane.
You’re not being imaginative enough: you’re thinking about someone with almost all true beliefs (including true beliefs about what Christians tend to say), and a couple of sort of stand out false beliefs about how the universe works as a whole. I want you to imagine talking to someone with mostly false beliefs about the subject at hand. So you can’t assume that by ‘Jesus’ self-sacrifice washes away the original sin’ that they’re talking about anything you know anything about because you can’t assume they are connecting with any theology you’ve ever heard of. Or even that they’re talking about theology. Or even objects or events in any sense you’re familiar with.
I think, again, delusional people are remarkable for having some unaccountably false beliefs, not for having mostly false beliefs. People with mostly false beliefs, I think, wouldn’t be recognizable even as being conscious or aware of their surroundings (because they’re not!).
So why are we talking about them, then?
Well, my point is that as a matter of course, you assume everyone you talk to has mostly true beliefs, and for the most part thinks rationally. We’re talking about ‘people’ with mostly or all false beliefs just to show that we don’t have any experience with such creatures.
Bigger picture: the principle of charity, that is the assumption that whoever you are talking to is mostly right and mostly rational, isn’t something you ought to hold, it’s something you have no choice but to hold. The principle of charity is a precondition on understanding anyone at all, even recognizing that they have a mind.
People will have mostly true beliefs, but they might not have true beliefs in the areas under concern. For obvious reasons, people’s irrationality is likely to be disproportionately present in the beliefs on which they disagree with others. So the fact that you need to be charitable in assuming people have mostly true beliefs may not be practically useful—I’m sure a creationist rationally thinks water is wet, but if I’m arguing with him, that subject probably won’t come up as much as creationism.
That’s true, but I feel like a classic LW point can be made here: suppose it turns out some people can do magic. That might seem like a big change, but in fact magic will then just be subject to the same empirical investigation as everything else, and ultimately the same integration into physical theory the same as everything else.
So while I agree with you that when we specify a topic, we can have broader disagreement, that disagreement is built on and made possible by very general agreement about everything else. Beliefs are holistic, not atomic, and we can’t partition them off while making any sense of them. We’re never just talking about some specific subject matter, but rather emphasizing some subject matter on the background of all our other beliefs (most of which must be true).
The thought, in short, is that beliefs are of a nature to be true, in the way dogs naturally have four legs. Some don’t, because something went wrong, but we can only understand the defect in these by having the basic nature of beliefs, namely truth, in the background.
That could be true but doesn’t have to be true. Our ontological assumptions might also turn out to be mistaken.
To quote Eliezer:
True, and a discovery like that might require us to make some pretty fundamental changes. But I don’t think Morpheus could be right about the universe’s relation to math. No universe, I take it, ‘runs’ on math in anything but the loosest figurative sense. The universe we live in is subject to mathematical analysis, and what reason could we have for thinking any universe could fail to be so? I can’t say for certain, of course, that every possible universe must run on math, but I feel safe in claiming that we’ve never imagined a universe, in fiction or through something like religion, which would fail to run on math.
More broadly speaking, anything that is going to be knowable at all is going to be rational and subject to rational understanding. Even if someone has some very false beliefs, their beliefs are false not just jibber-jabber (and if they are just jibber-jabber then you’re not talking to a person). Even false beliefs are going to have a rational structure.
That is a fact about you, not a fact about the universe. Nobody could imagine light being both a particle and a wave, for example, until their study of nature forced them to.
People could imagine such a thing before studying nature showed they needed to; they just didn’t. I think there’s a difference between a concept that people only don’t imagine, and a concept that people can’t imagine. The latter may mean that the concept is incoherent or has an intrinsic flaw, which the former doesn’t.
In the interest of not having this discussion degenerate into an argument about what “could” means, I would like to point out that your and hen’s only evidence that you couldn’t imagine a world that doesn’t run on math is that you haven’t.
For one thing, “math” trivially happens to run on world, and corresponds to what happens when you have a chain of interactions. Specifically, to how one chain of physical interactions (apples being eaten, for example) combined with another that looks dissimilar (a binary adder) ends up with the conclusion that apples were counted correctly, or how the difference in count between the two processes of counting (none) corresponds to another dissimilar process (the reasoning behind binary arithmetic).
As long as there’s any correspondences at all between different physical processes, you’ll be able to kind of imagine that world runs on world arranged differently, and so it would appear that world “runs on math”.
If we were to discover some new laws of physics that were producing incalculable outcomes, we would just utilize those laws in some sort of computer and co-opt them as part of “math”, substituting processes for equivalent processes. That’s how we came up with math in the first place.
edit: to summarize, I think “the world runs on math” is a really confused way to look at how the world relates to the practice of mathematics inside of it. I can perfectly well say that the world doesn’t run on math any more than radio waves are transmitted by a mechanical aether made of gears, springs, and weights, and have the exact same expectations about everything.
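The point above about dissimilar processes that nonetheless agree can be made concrete. Here is a sketch of my own (purely illustrative, not from the thread): a binary adder simulated gate by gate, whose output always matches ordinary counting—exactly the kind of correspondence between unlike processes being described.

```python
def full_adder(a, b, carry):
    # One-bit full adder built from boolean "gates" (XOR, AND, OR).
    total = a ^ b ^ carry
    carry_out = (a & b) | (carry & (a ^ b))
    return total, carry_out

def ripple_add(x, y, width=16):
    # Add two non-negative integers by chaining full adders, bit by bit,
    # the way a physical ripple-carry circuit would.
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(ripple_add(13, 29))  # 42, agreeing with ordinary counting
```

The gate chain and ordinary arithmetic are physically dissimilar processes; that they always agree is what tempts people to say the world “runs on math”.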
“There is a non-trivial subset of maths which describes physical law” might be a better way of stating it.
It seems to me that as long as there’s anything that is describable in the loosest sense, that would be taken to be true.
I mean, look at this: some people believe literally that our universe is a “mathematical object”, whatever that means (tegmarkery), and we haven’t even got a candidate TOE that works.
edit: I think the issue is that Morpheus confuses “made of gears” with “predictable by gears”. Time is not made of gears, and neither are astronomical objects, but a clock is very useful nonetheless.
I don’t see why “describable” would necessarily imply “describable mathematically”. I can imagine a qualia-only universe, and I can imagine the ability to describe qualia. As things stand, there are a number of things that can’t be described mathematically.
Example?
Qualia, the passage of time, symbol grounding...
Absolutely, it’s a fact about me, that’s my point. I just also think it’s a necessary fact.
What’s your evidence for this? Keep in mind that the history of science is full of people asserting that X has to be the case because they couldn’t imagine the world being otherwise, only for subsequent discoveries to show that X is not in fact the case.
Name three (as people often say around here).
Well, the most famous (or infamous) is Kant’s argument that space must be flat (in the Euclidean sense) because the human mind is incapable of imagining it to be otherwise.
Another example was Lucretius’s argument against the theory that the earth is round: if the earth were round and things fell towards its center, then in which direction would an object at the center fall?
Not to mention the standard argument against the universe having a beginning: “what happened before it?”
I don’t intend to bicker; I think your point is a good one independently of these examples. In any case, I don’t think at least the first two of these are examples of the phenomenon you’re talking about.
I think this comes up in the sequences as an example of the mind-projection fallacy, but that’s not right. Kant did not take himself to be saying anything about the world outside the mind when he said that space was flat. He only took himself to be talking about the world as it appears to us. Space, so far as Kant was concerned, was part of the structure of perception, not the universe. So in the Critique of Pure Reason, he says:
So Kant is pretty explicit that he’s not making a claim about the world, but about the way we perceive it. Kant would very likely poke you in the chest and say, “No, you’re committing the mind-projection fallacy by thinking that space is even in the world, rather than just a form of perception. And don’t tell me about the mind-projection fallacy anyway—I invented that whole move.”
This also isn’t an example, because the idea of a spherical world had in fact been imagined in detail by Plato (with whom Lucretius seems to be arguing), Aristotle, and many of Lucretius’ contemporaries and predecessors. Lucretius’ point couldn’t have been that a round earth is unimaginable, but that it was inconsistent with an analysis of the motions of simple bodies in terms of up and down: you can’t say that fire is of a nature to go up if up is entirely relative. Or I suppose, you can say that but you’d have to come up with a more complicated account of natures.
And in particular he claimed that this showed it had to be Euclidean because humans couldn’t imagine it otherwise. Well, we now know it’s not Euclidean and people can imagine it that way (I suppose you could dispute this, but that gets into exactly what we mean by “imagine” and attempting to argue about other people’s qualia).
No, he never says that. Feel free to cite something from Kant’s writing, or the SEP or something. I may be wrong, but I just read though the Aesthetic again, and I couldn’t find anything that would support your claim.
EDIT: I did find one passage that mentions imagination:
I’ve edited my post accordingly, but my point remains the same. Notice that Kant does not mention the flatness of space, nor is it at all obvious that he’s inferring anything from our inability to imagine the non-existence of space. END EDIT.
You gave Kant’s views about space as an example of someone saying ‘because we can’t imagine it otherwise, the world must be such and such’. Kant never says this. What he says is that the principles of geometry are not derived simply from the analysis of terms, nor are they empirical. Kant is very, very, explicit...almost annoyingly repetitive, that he is not talking about the world, but about our perceptive faculties. And if indeed we cannot imagine x, that does seem to me to be a good basis from which to draw some conclusions about our perceptive faculties.
I have no idea what Kant would say about whether or not we can imagine non-Euclidean space (I have no idea myself if we can), but the matter is complicated because ‘imagination’ is a technical term in his philosophy. He thought space was an infinite Euclidean magnitude, but Euclidean geometry was the only game in town at the time.
Anyway he’s not a good example. As I said before, I don’t mean to dispute the point the example was meant to illustrate. I just wanted to point out that this is an incorrect view of Kant’s claims about space. It’s not really very important what he thought about space though.
There’s a difference between “can’t imagine” in a colloquial sense, and actual inability to imagine. There’s also a difference between not being able to think of how something fits into our knowledge about the universe (for instance, not being able to come up with a mechanism or not being able to see how the evidence supports it) and not being able to imagine the thing itself.
There also aren’t as many examples of this in the history of science as you probably think. Most of the examples that come to people’s minds involve scientists versus nonscientists.
See my reply to army above.
Hold on now, you’re pattern matching me. I said:
To which you replied that this is a fact about me, not the universe. But I explicitly said that it’s not a fact about the universe! My evidence for this is the only evidence that could be relevant: my experience with literature, science fiction, talking to people, etc.
Nor is it relevant that science is full of people who say that something has to be true because they can’t imagine the world otherwise. Again, I’m not making a claim about the world, I’m making a claim about the way we have imagined, or now imagine, the world to be. I would be very happy to be pointed toward a hypothetical universe that isn’t subject to mathematical analysis and which contains thinking animals.
So before we go on, please tell me what you think I’m claiming? I don’t wish to defend any opinions but my own.
Hen, I told you how I imagine such a universe, and you told me I couldn’t be imagining it! Maybe you could undertake not to gainsay further hypotheses.
I found your suggestion to be implausible for two reasons: first, I don’t think the idea of epistemically significant qualia is defensible, and second, even on the condition that it is, I don’t think the idea of a universe of nothing but a single quale (one having epistemic significance) is defensible. Both of these points would take some time to work out, and it struck me in our last exchange that you had neither the patience nor the good will to do so, at least not with me. But I’d be happy to discuss the matter if you’re interested in hearing what I have to say.
You said:
I’m not sure what you mean by “necessary”, but the most obvious interpretation is that you think it’s necessarily impossible for the world to not be run by math or at least for humans to understand a world that doesn’t.
This is my claim, and here’s the thought: thinking things are natural, physical objects and they necessarily have some internal complexity. Further, thoughts have some basic complexity: I can’t engage in an inference with a single term.
Any universe which would not in principle be subject to mathematical analysis is a universe in which there is no quantity of anything. So it can’t, for example, involve any space or time, no energy or mass, no plurality of bodies, no forces, nothing like that. It admits of no analysis in terms of propositional logic, so Bayes is right out, as is any understanding of causality. This, it seems to me, would preclude the possibility of thought altogether. It may be that the world we live in is actually like that, and all its multiplicity is merely the contribution of our minds, so I won’t venture a claim about the world as such. So far as I know, the fact that worlds admit of mathematical analysis is a fact about thinking things, not worlds.
What do you mean by “complexity”? I realize you have an intuitive idea, but it could very well be that your idea doesn’t make sense when applied to whatever the real universe is.
Um, that seems like a stretch. Just because some aspects of the universe are subject to mathematical analysis doesn’t necessarily mean the whole universe is.
For my purposes, complexity is: involving (in the broadest sense of that word) more than one (in the broadest sense of that word) thing (in the broadest sense of that word). And remember, I’m not talking about the real universe, but about the universe as it appears to creatures capable of thinking.
I think it does, if you’re granting me that such a world could be distinguished into parts. It doesn’t mean we could have the rich mathematical understanding of laws we do now, but that’s a higher bar than I’m talking about.
You can always “use” analysis; the issue is whether it gives you correct answers. It only gives you the correct answer if the universe obeys certain axioms.
Well, this gets us back to the topic that spawned this whole discussion: I’m not sure we can separate the question ‘can we use it’ from ‘does it give us true results’ with something like math. If I’m right that people always have mostly true beliefs, then when we’re talking about the more basic ways of thinking (not Aristotelian dynamics, but counting, arithmetic, etc.) the fact that we can use them is very good evidence that they mostly return true results. So if you’re right that you can always use, say, arithmetic, then I think we should conclude that a universe is always subject to analysis by arithmetic.
You may be totally wrong that you can always use these things, of course. But I think you’re probably right and I can’t make sense of any suggestion to the contrary that I’ve heard yet.
One could mathematically describe things not analysable by arithmetic, though...
Fair point, arithmetic’s not a good example of a minimum for mathematical description.
The idea of rational understanding rests on the fact that you are separated from the object that you are trying to understand and the object itself doesn’t change if you change your understanding of it.
Then there’s the halting problem. There are a variety of problems that are NP. Those problems can’t be understood by doing a few experiments and then extrapolating general rules from your experiments. I’m not quite firm with the mathematical terminology, but I think NP problems are not subject to things like calculus that are covered in what Wikipedia describes as mathematical analysis.
Heinz von Förster makes the point that children have to be taught that “green” is not a valid answer to the question “What’s 2+2?”. I personally like his German book titled “Truth is the invention of a liar”. Heinz von Förster founded the Biological Computer Laboratory in 1958 and came up with concepts like second-order cybernetics.
As far as fictional worlds go, Terry Pratchett’s Discworld runs on narrativium instead of math.
That’s true as long as there are no revelations of truth by Gods or other magical processes. In a universe where you can get the truth through magical tarot reading, that assumption is false.
That’s not obvious to me. Why do you think this?
I also don’t understand this inference. Why do you think revelations of truth by Gods or other magical processes, or tarot readings, mean that such a universe would a) be knowable, and b) not be subject to rational analysis?
It might depend a bit on what you mean by rationality. You lose objectivity.
Let’s say I hypnotize someone. I’m in a deep state of rapport. That means my emotional state matters a great deal. If I label something that the person I’m talking to does as unsuccessful, anxiety rises in myself. That anxiety will screw with the result I want to achieve. I’m better off if I blank my mind instead of engaging in rational analysis of what I’m doing.
Logically A → B is not the same thing as B → A.
I said that it’s possible for there to be knowledge that you can only get through a process besides rational analysis if you allow “magic”.
I’m a little lost. So do you think these observations challenge the idea that in order to understand anyone, we need to assume they’ve got mostly true beliefs, and make mostly rational inferences?
I don’t know what you mean by “run on math”. Do qualia run in math?
It’s not my phrase, and I don’t particularly like it myself. If you’re asking whether or not qualia are quanta, then I guess the answer is no, but in the sense that the measured is not the measure. It’s a triviality that I can ask you how much pain you feel on a scale of 1-10, and get back a useful answer. I can’t get at what the experience of pain itself is with a number or whatever, but then, I can’t get at what the reality of a block of wood is with a ruler either.
Then by imagining an all-qualia universe, I can easily imagine a universe that doesn’t run on math, for some values of “runs on math”.
I don’t think you can imagine, or conceive of, an all qualia universe though.
You don’t get to tell me what I can imagine, though. All I have to do is imagine away the quantitative and structural aspects of my experience.
I rather think I do. If you told me you could imagine a Euclidean triangle with more or less than 180 internal degrees, I would rightly say ‘No, you can’t’. It’s simply not true that we can imagine or conceive of anything we can put into (or appear to put into) words. And I don’t think it’s possible to imagine away things like space and time and keep hold of the idea that you’re imagining a universe, or an experience, or anything like that. Time especially, and so long as I have time, I have quantity.
That looks like the typical mind fallacy.
I don’t know where you are getting your facts from, but it is well known that people’s abilities at visualization vary considerably, so where’s the “we”?
Having studied non-Euclidean geometry, I can easily imagine a triangle whose angles sum to more than 180 (hint: it’s inscribed on the surface of a sphere).
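The spherical case is easy to check numerically. Here is a small Python sketch of the idea (the octant triangle below is my own illustrative choice, not something from the discussion): it computes a spherical triangle’s angle sum from tangent vectors along its great-circle sides.

```python
import math

def tangent(a, b):
    # Tangent vector at vertex a pointing along the great circle toward b:
    # project b onto the plane perpendicular to a, then normalize.
    dot = sum(x * y for x, y in zip(a, b))
    t = [y - dot * x for x, y in zip(a, b)]
    norm = math.sqrt(sum(x * x for x in t))
    return [x / norm for x in t]

def angle_sum(a, b, c):
    # The interior angle at each vertex is the angle between the tangents
    # to the two great-circle sides meeting there.
    total = 0.0
    for p, q, r in ((a, b, c), (b, c, a), (c, a, b)):
        t1, t2 = tangent(p, q), tangent(p, r)
        total += math.acos(sum(x * y for x, y in zip(t1, t2)))
    return math.degrees(total)

# Octant triangle: one vertex where each coordinate axis meets the unit sphere.
# Its angle sum is 270 degrees (up to floating-point rounding), not 180.
print(angle_sum((1, 0, 0), (0, 1, 0), (0, 0, 1)))
```

The excess over 180 degrees is proportional to the triangle’s area, which is why small triangles on a sphere look nearly Euclidean.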
Saying that non-spatial or non-temporal universes aren’t really universes is a No True Scotsman fallacy.
Non-spatial and non-temporal models have been seriously proposed by physicists; perhaps you should talk to them.
It depends on what you mean by “imagine”. I can’t imagine a Euclidean triangle with less than 180 degrees in the sense of having a visual representation in my mind that I could then reproduce on a piece of paper. On the other hand, I can certainly imagine someone holding up a measuring device to a vague figure on a piece of paper and saying “hey, I don’t get 180 degrees when I measure this”.
Of course, you could say that the second one doesn’t count since you’re not “really” imagining a triangle unless you imagine a visual representation, but if you’re going to say that you need to remember that all nontrivial attempts to imagine things don’t include as much detail as the real thing. How are you going to define it so that eliminating some details is okay and eliminating other details isn’t?
(And if you try that, then explain why you can’t imagine a triangle whose angles add up to 180.05 degrees or some other amount that is not 180 but is close enough that you wouldn’t be able to tell the difference in a mental image. And then ask yourself “can I imagine someone writing a proof that a Euclidean triangle’s angles don’t add up to 180 degrees?” without denying that you can imagine people writing proofs at all.)
These are good questions, and I think my general answer is this: in the context of this and similar arguments, being able to imagine something is sometimes taken as evidence that it’s at least a logical possibility. I’m fine with that, but it needs to be imagined in enough detail to capture the logical structure of the relevant possibility. If someone is going to argue, for example, that one can imagine a Euclidean triangle with more or less than 180 internal degrees, the imagined state of affairs must have at least as much logical detail as does a Euclidean triangle with 180 internal degrees. Will that exclude your ‘vague shape’ example, and probably your ‘proof’ example?
It would exclude the vague shape example but I think it fails for the proof example.
Your reasoning suggests that if X is false, it would be impossible for me to imagine someone proving X. I think that is contrary to what most people mean when they say they can imagine something.
It’s not clear what your reasoning implies when X is true. Either
1) I cannot imagine someone proving X unless I can imagine all the steps in the proof, or
2) I can imagine someone proving X as long as X is true, since having a proof would be a logical possibility as long as X is true.
1) is also contrary to what most people think of as imagining. 2) would mean that it is possible me to not know whether or not I am imagining something. (I imagine someone proving X and I don’t know if X is true. 2) means that if X is true I’m “really imagining” it and that if X is false, I am not.)
Well, say I argue that it’s impossible to write a story about a bat. It seems like it should be unconvincing for you to say ‘But I can imagine someone writing a story about a bat...see, I’m imagining Tom, who’s just written a story about a bat.’ Instead, you’d need to imagine the story itself. I don’t intend to talk about the nature of the imagination here, only to say that as a rule, showing that something is logically possible by way of imagining it requires that it have enough logical granularity to answer the challenge.
So I don’t doubt that you could imagine someone proving that E-triangles have more than 180 internal degrees, but I am saying that not all imaginings are contenders in an argument about logical possibility. Only those ones which have sufficient logical granularity do.
I would understand “I can imagine...” in such a context to mean that it doesn’t contain flaws that are basic enough to prevent me from coming up with a mental picture or short description. Not that it doesn’t contain any flaws at all. It wouldn’t make sense to have “I can imagine X” mean “there are no flaws in X”—that would make “I can imagine X” equivalent to just asserting X.
The issue isn’t flaws or flawlessness. In my bat example, you could perfectly imagine Tom sitting in an easy chair with a glass of scotch saying to himself, ‘I’m glad I wrote that story about the bat’. But that wouldn’t help. I never said it’s impossible for Tom to sit in a chair and say that, I said that it was impossible to write a story about a bat.
The issue isn’t logical detail simpliciter, but logical detail relative to the purported impossibility. In the triangle case, you have to imagine, not Tom sitting in his chair thinking ‘I’m glad I proved that E-triangles have more than 180 internal degrees’ (no one could deny that that is possible) but rather the figure itself. It can be otherwise as vague and flawed as you like, so long as the relevant bits are there. Very likely, imagining the proof in the relevant way would require producing it.
And you are asserting something, you’re asserting the possibility of something in virtue of the fact that it is in some sense actual. To say that something is logically impossible is to say that it can’t exist anywhere, ever, not even in a fantasy. To imagine up that possibility is to make it sufficiently real to refute the claim of impossibility, but only if you imagine, and thus make real, the precise thing being claimed to be impossible.
Are you sure it is logically impossible to have spaceless and timeless universes? Who has put forward the necessity of space and time?
Dear me no! I have no idea if such a universe is impossible. I’m not even terribly confident that this universe has space or time.
I am pretty sure that space and time (or something like them) are a necessary condition on experience, however. Maybe they’re just in our heads, but it’s nevertheless necessary that they, or something like them, be in our heads. Maybe some other kind of creature thinks in terms of space, time, and fleegle, or just fleegle, time, and blop, or just blop and nizz. But I’m confident that such things will all have some common features, namely being something like a context for a multiplicity. I mean in the way time is a context for seeing this, followed by that, and space is a context for seeing this in that in some relation, etc.
Without something like this, it seems to me experience would always (except there’s no time) only be of one (except an idea of number would never come up) thing, in which case it wouldn’t be rich enough to be an experience. Or experience would be of nothing, but that’s the same problem.
So there might be universes of nothing but qualia (or, really, quale) but it wouldn’t be a universe in which there are any experiencing or thinking things. And if that’s so, the whole business is a bit incoherent, since we need an experiencer to have a quale.
Are you using experience to mean visual experience by any chance? How much spatial information are you getting from hearing?
PS your dogmatic Kantianism is now taken as read.
Tapping out.
That depends on your definition of “math”.
For example, consider a simulated world where you control the code. Can you make it so that 2+2 in that simulation is sometimes 4, sometimes 15, and sometimes green? I don’t see why not.
I think you’re conflating the physical operation that we correlate with addition and the mathematical structure. ‘Green’ I’m not seeing, but I could write a computer program modeling a universe in which placing a pair of stones in a container that previously held a pair of stones does not always lead to that container holding a quadruplet of stones. In such a universe, the mathematical structure we call ‘addition’ would not be useful, but that doesn’t say that the formalized reasoning structure we call ‘math’ would not exist, or could not be employed.
(In fact, if it’s a computer program, it is obvious that its nature is susceptible to mathematical analysis.)
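That distinction can be made concrete. Below is a minimal Python sketch of such a toy universe (the merging rule is invented for illustration): combining containers of stones doesn’t conserve count, yet the program describing this behavior is itself a straightforwardly mathematical object.

```python
import random

def merge_stones(a, b, rng):
    # Toy "physics": combining a stones with b stones yields the ordinary
    # sum perturbed by this universe's random rule.
    # (The perturbation values are invented for illustration.)
    return a + b + rng.choice([-1, 0, 0, 2])

rng = random.Random(0)
outcomes = sorted({merge_stones(2, 2, rng) for _ in range(100)})
print(outcomes)  # a subset of [3, 4, 6]: two stones plus two isn't reliably four
```

Note that while the mathematical structure we call ‘addition’ fails to track stone-merging in this universe, the universe as a whole hasn’t escaped mathematical description: the rule above is exactly such a description.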
I guess I could make it appear that way, sure, though I don’t know if I could then recognize anything in my simulation as thinking or doing math. But in any case, that’s not a universe in which 2+2=green, it’s a universe in which it appears to. Maybe I’m just not being imaginative enough, and so you may need to help me flesh out the hypothetical.
If I write the simulation in Python I can simply define my function for addition:
Unfortunately I don’t know how to format the indentation perfectly for this forum.
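For what it’s worth, the kind of definition being gestured at can be written out. Here is one guess at it in Python (the particular branches and probabilities are my invention, since the original code didn’t survive the forum’s formatting):

```python
import random

def add(a, b, rng=random):
    # Addition as this simulated world defines it: usually the familiar
    # sum, sometimes 15, sometimes the non-number "green".
    # (Branch probabilities are invented for illustration.)
    roll = rng.random()
    if roll < 0.8:
        return a + b
    elif roll < 0.9:
        return 15
    return "green"

results = {add(2, 2) for _ in range(1000)}
print(results)  # typically {4, 15, 'green'}
```

Inside such a simulation, inhabitants querying “2+2” really would get 4, 15, or green on different occasions; whether that makes their world one where 2+2=green, or merely one where it appears to, is the question the thread goes on to dispute.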
We don’t need to go to the trouble of defining anything in Python. We can get the same result just by saying
If I use Python to simulate a world, then it matters how things are defined in Python.
It doesn’t only appear that 2+2=green; it’s that way at the level of the source code, which determines how the world runs.
But it sounds to me like you’re talking about the manipulation of signs, not about numbers themselves. We could make the set of signs ‘2+2=’ end any way we like, but that doesn’t mean we’re talking about numbers. I donno, I think you’re being too cryptic or technical or something for me, I don’t really understand the point you’re trying to make.
What do you mean by “the numbers themselves”? Peano axioms? I could imagine that n → n+1 just doesn’t apply.
Math is what happens when you take your original working predictive toolkit (like counting sheep) and let it run on human wetware disconnected from its original goal of having to predict observables. Thus some form of math would arise in any somewhat-predictable universe evolving a calculational substrate.
That’s an interesting problem. Do we have math because we make abstractions about the multitude of things around us, or must we already have some idea of math in the abstract just to recognize the multitude as a multitude? But I think I agree with the gist of what you’re saying.
Just like I think of language as meta-grunting, I think of math as meta-counting. Some animals can count, and possibly add and subtract a bit, but abstracting it away from the application for the fun of it is what humans do.
Is “containing mathematical truth” the same as “running on math”?
Mixing truth and rationality is a failure mode. To know whether someone’s statement is true, you have to understand it, and to understand it, you have to assume the speaker’s rationality.
It’s also a failure mode to attach “irrational” directly to beliefs. A belief is rational if it can be supported by an argument, and you don’t carry the space of all possible arguments around in your head.
That’s an… interesting definition of “rational”.
Puts on Principle of Charity hat...
Maybe TheAncientGreek means:
(1) a belief is rational if it can be supported by a sound argument
(2) a belief is rational if it can be supported by a valid argument with probable premises
(3) a belief is rational if it can be supported by an inductively strong argument with plausible premises
(4) a belief is rational if it can be supported by an argument that is better than any counterarguments the agent knows of
etc...
Although personally, I think it is more helpful to think of rationality as having to do with how beliefs cohere with other beliefs and about how beliefs change when new information comes in than about any particular belief taken in isolation.
I can’t but note that the word “reality” is conspicuously absent here...
That there is empirical evidence for something is a good argument for it.
Arguments of type (1) necessarily track reality (it is pretty much defined this way), (2) may or may not depending on the quality of the premises, (3) often does, and sometimes you just can’t do any better than (4) with available information and corrupted hardware.
Just because I didn’t use the word “reality” doesn’t really mean much.
A definition of “rational argument” that explicitly referred to “reality” would be a lot less useful, since checking which arguments are rational is one of the steps in figuring out what’s real.
I am not sure this is (necessarily) the case, can you unroll?
Generally speaking, arguments live in the map and, in particular, in high-level maps which involve abstract concepts and reasoning. If I check the reality of the stone by kicking it and seeing if my toe hurts, no arguments are involved. And from the other side, classical logic is very much part of “rational arguments” and yet needs not correspond to reality.
That tends to work less well for things that one can’t directly observe, e.g., how old is the universe, or things where there is confounding noise, e.g., does this drug help.
That was a counterexample, not a general theory of cognition...
There isn’t a finite list of rational beliefs, because someone could think of an argument for a belief that you haven’t thought of.
There isn’t a finite list of correct arguments either. People can invent new ones.
Well, it’s not too compatible with self-congratulatory “rationality”.
I believe this disagreement is testable by experiment.
Do elaborate.
If you would more reliably understand what people mean by specifically treating it as the product of a rational and intelligent person, then executing that hack should lead to your observing a much higher rate of rationality and intelligence in discussions than you would previously have predicted. If the thesis is true, many remarks which, using your earlier methodology, you would have dismissed as the product of diseased reasoning will prove to be sound upon further inquiry.
If, however, you execute the hack for a few months and discover no change in the rate at which you discover apparently-wrong remarks to admit to sound interpretations, then TheAncientGeek’s thesis would fail the test.
You will also get less feedback along the lines of “you just don’t get it”.
True, although being told less often that you are missing the point isn’t, in and of itself, all that valuable; the value is in getting the point of those who otherwise would have given up on you with a remark along those lines.
(Note that I say “less often”; I was recently told that this criticism of Tom Godwin’s “The Cold Equations”, which I had invoked in a discussion of “The Ones Who Walk Away From Omelas”, missed the point of the story—to which I replied along the lines of, “I get the point, but I don’t agree with it.”)
That looks like a test of my personal ability to form correct first-impression estimates.
Also “will prove to be sound upon further inquiry” is an iffy part. In practice what usually happens is that statement X turns out to be technically true only under conditions A, B, and C, however in practice there is the effect Y which counterbalances X and the implementation of X is impractical for a variety of reasons, anyway. So, um, was statement X “sound”? X-/
Precisely.
Ah, I see. “Sound” is not the right word for what I mean; what I would expect to occur if the thesis is correct is that statements will prove to be apposite or relevant or useful—that is to say, valuable contributions in the context within which they were uttered. In the case of X, this would hold if the person proposing X believed that those conditions applied in the case described.
A concrete example would be someone who said, “you can divide by zero here” in reaction to someone being confused by a definition of the derivative of a function in terms of the limit of a ratio.
Because you are not engaged in establishing facts about how smart someone is, you are instead trying to establish facts about what they mean by what they say.
I do not see what you are describing as being the standard PoC at all. May I suggest you call it something else.
How does the thing I am vaguely waving my arms at differ from the “standard PoC”?