What I wanted to communicate with those terms was communicated by the analogies to the dice cup and to the scientific theory: it’s perfectly possible for two hypotheses to have the same present probability but different expectations of future change to that probability.
I think you are talking about what in local parlance is called a “weak prior” vs a “strong prior”. Bayesian updating involves assigning relative importance to the prior and to the evidence. A weak prior is easily changed by even not very significant evidence. On the other hand, it takes a lot of solid evidence to move a strong prior.
In this terminology, your pre-roll estimation of the probability of double sixes is a weak prior—the evidence of an actual roll will totally overwhelm it. But your estimation of the correctness of the modern evolutionary theory is a strong prior—it will take much convincing evidence to persuade you that the theory is not correct after all.
Of course, the posterior of a previous update becomes the prior of the next update.
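For concreteness, here is a toy sketch of the weak-vs-strong distinction using Beta-Bernoulli updating; the pseudo-counts and the evidence below are invented purely for illustration:

```python
# Toy sketch of "weak prior" vs "strong prior" with Beta-Bernoulli updating.
# The pseudo-counts below are invented purely for illustration.

def posterior_mean(prior_a, prior_b, heads, tails):
    """Posterior mean of a Beta(prior_a, prior_b) prior after observing the data."""
    return (prior_a + heads) / (prior_a + prior_b + heads + tails)

weak_prior = (1, 1)        # as if we had seen only 2 flips; point estimate 0.5
strong_prior = (500, 500)  # as if we had seen 1000 flips; point estimate also 0.5

evidence = (8, 2)          # the same modest evidence for both: 8 heads, 2 tails

print(posterior_mean(*weak_prior, *evidence))    # 0.75  -- the weak prior is overwhelmed
print(posterior_mean(*strong_prior, *evidence))  # ~0.503 -- the strong prior barely moves
```

Both priors start from the same point estimate of 0.5; they differ only in how far the same evidence moves them.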
Using this language, then, you are saying that prima facie evidence of someone’s stupidity should be a minor update to the strong prior that she is actually a smart, reasonable, and coherent human being.
Using this language, then, you are saying that prima facie evidence of someone’s stupidity should be a minor update to the strong prior that she is actually a smart, reasonable, and coherent human being.
Oh, dear—that’s not what I meant at all. I meant that—absent a strong prior—the utterance of a prima facie absurdity should not create a strong prior that the speaker is stupid, unreasonable, or incoherent. It’s entirely possible that ten minutes of conversation will suffice to make a strong prior out of this weaker one—there’s someone arguing for dualism on a webcomic forum I (in)frequent along the same lines as Chalmers’ “hard problem of consciousness”, and it took less than ten posts to establish pretty confidently that the same refutations would apply—but as the history of DIPS (defense-independent pitching statistics) shows, it’s entirely possible for an idea to be as correct as “the earth is a sphere, not a plane” and nevertheless be taken as prima facie absurd.
(As the metaphor implies, DIPS is not quite correct, but it would be more accurate to describe its successors as “fixing DIPS” than as “showing that DIPS was completely wrongheaded”.)
I meant that—absent a strong prior—the utterance of a prima facie absurdity should not create a strong prior that the speaker is stupid, unreasonable, or incoherent.
Oh, I agree with that.
What I am saying is that evidence of stupidity should lead you to raise your estimates of the probability that the speaker is stupid. The principle of charity should not prevent that from happening. Of course evidence of stupidity should not make you close the case, declare someone irretrievably stupid, and stop considering any further evidence.
As an aside, I treat how a person argues as a much better indicator of stupidity than what he argues. YMMV, of course.
What I am saying is that evidence of stupidity should lead you to raise your estimates of the probability that the speaker is stupid.
...in the context during which they exhibited the behavior which generated said evidence, of course. In broader contexts, or other contexts? To a much lesser extent, and not (usually) strongly in the strong-prior sense, but again, yes. That you should always be capable of considering further evidence is—I am glad to say—so universally accepted a proposition in this forum that I do not bother to enunciate it, but I take no issue with drawing conclusions from a sufficient body of evidence.
Come to think, you might be amused by this fictional dialogue about a mendacious former politician, illustrating the ridiculousness of conflating “never assume that someone is arguing in bad faith” and “never assert that someone is arguing in bad faith”. (The author also posted a sequel, if you enjoy the first.)
As an aside, I treat how a person argues as a much better indicator of stupidity than what he argues. YMMV, of course.
I’m afraid that I would have about as much luck barking like a duck as enunciating how I evaluate the intelligence (or reasonableness, or honesty, or...) of those I converse with. YMMV, indeed.
Using this language, then, you are saying that prima facie evidence of someone’s stupidity should be a minor update to the strong prior that she is actually a smart, reasonable, and coherent human being. And I don’t see why this should be so.
People tend to update too much in these circumstances: the fundamental attribution error.

The fundamental attribution error is about underestimating the importance of external drivers (the particular situation, random chance, etc.) and overestimating the importance of internal factors (personality, beliefs, etc.) as an explanation for observed actions.
If a person in a discussion is spewing nonsense, it is rare that external factors are making her do it (other than a variety of mind-altering chemicals). The indicators of stupidity are NOT what position a person argues or how much knowledge about the subject she has—it’s how she does it. And an inability to, e.g., follow basic logic is hard to attribute to external factors.
This discussion has got badly derailed. You are taking it that there is some robust fact about someone’s lack of rationality or intelligence which may or may not be explained by internal or external factors.
The point is that you cannot make a reliable judgement about someone’s rationality or intelligence unless you have understood what they are saying, and you cannot reliably understand what they are saying unless you treat it as if it were the product of a rational and intelligent person. You can go to “stupid” when all attempts have failed, but not before.

I disagree, I don’t think this is true.
I think it’s true, on roughly these grounds: taking yourself to understand what someone is saying entails thinking that almost all of their beliefs (I mean ‘belief’ in the broad sense, so as to include my beliefs about the colors of objects in the room) are true. The reason is that unless you assume almost all of a person’s (relevant) beliefs are true, the possibility space for judgements about what they mean gets very big, very fast. So if ‘generally understanding what someone is telling you’ means having a fairly limited possibility space, you only get this on the assumption that the person talking to you has mostly true beliefs. This, of course, doesn’t mean they have to be rational in the LW sense, or even very intelligent. The most stupid and irrational (in the LW sense) of us still have mostly true beliefs.
I guess the trick is to imagine what it would be to talk to someone who you thought had on the whole false beliefs. Suppose they said ‘pass me the hammer’. What do you think they meant by that? Assuming they have mostly or all false beliefs relevant to the utterance, they don’t know what a hammer is or what ‘passing’ involves. They don’t know anything about what’s in the room, or who you are, or what you are, or even if they took themselves to be talking to you, or talking at all. The possibility space for what they took themselves to be saying is too large to manage, much larger than, for example, the possibility space including all and only every utterance and thought that’s ever been had by anyone. We can say things like ‘they may have thought they were talking about cats or black holes or triangles’ but even that assumes vastly more truth and reason in the person than we’ve assumed we can anticipate.
Generally speaking, understanding what a person means implies reconstructing the framework of meaning and reference that exists in their mind as the context for what they said.
Reconstructing such a framework does NOT require that you consider it (or the whole person) sane or rational.
Reconstructing such a framework does NOT require that you consider it (or the whole person) sane or rational.
Well, there are two questions here: 1) is it in principle necessary to assume your interlocutors are sane and rational, and 2) is it as a matter of practical necessity a fact that we always do assume our interlocutors are sane and rational. I’m not sure about the first one, but I am pretty sure about the second: the possibility space for reconstructing the meaning of someone speaking to you is only manageable if you assume that they’re broadly sane, rational, and have mostly true beliefs. I’d be interested to know which of these you’re arguing about.
Also, we should probably taboo ‘sane’ and ‘rational’. People around here have a tendency to use these words in an exaggerated way to mean that someone has a kind of specific training in probability theory, statistics, biases, etc. Obviously people who have none of these things, like people living thousands of years ago, were sane and rational in the conventional sense of these terms, and they had mostly true beliefs even by any standard we would apply today.
The answers to your questions are no and no.

I am pretty sure about the second: the possibility space for reconstructing the meaning of someone speaking to you is only manageable if you assume that they’re broadly sane, rational, and have mostly true beliefs.
I don’t think so. Two counter-examples:
I can discuss fine points of theology with someone without believing in God. For example, I can understand the meaning of the phrase “Jesus’ self-sacrifice washes away the original sin” without accepting that Christianity is “mostly true” or “rational”.
Consider a psychotherapist talking to a patient, let’s say a delusional one. Understanding the delusion does not require the psychotherapist to believe that the patient is sane.
I can discuss fine points of theology with someone without believing in God. For example, I can understand the meaning of the phrase “Jesus’ self-sacrifice washes away the original sin” without accepting that Christianity is “mostly true” or “rational”.
You’re not being imaginative enough: you’re thinking about someone with almost all true beliefs (including true beliefs about what Christians tend to say), and a couple of sort-of standout false beliefs about how the universe works as a whole. I want you to imagine talking to someone with mostly false beliefs about the subject at hand. So you can’t assume that by ‘Jesus’ self-sacrifice washes away the original sin’ they’re talking about anything you know anything about, because you can’t assume they are connecting with any theology you’ve ever heard of. Or even that they’re talking about theology. Or even objects or events in any sense you’re familiar with.
Consider a psychotherapist talking to a patient, let’s say a delusional one. Understanding the delusion does not require the psychotherapist to believe that the patient is sane.
I think, again, delusional people are remarkable for having some unaccountably false beliefs, not for having mostly false beliefs. People with mostly false beliefs, I think, wouldn’t be recognizable even as being conscious or aware of their surroundings (because they’re not!).
So why are we talking about them, then?

Well, my point is that as a matter of course, you assume everyone you talk to has mostly true beliefs, and for the most part thinks rationally. We’re talking about ‘people’ with mostly or all false beliefs just to show that we don’t have any experience with such creatures.
Bigger picture: the principle of charity, that is the assumption that whoever you are talking to is mostly right and mostly rational, isn’t something you ought to hold, it’s something you have no choice but to hold. The principle of charity is a precondition on understanding anyone at all, even recognizing that they have a mind.
People will have mostly true beliefs, but they might not have true beliefs in the areas under concern. For obvious reasons, people’s irrationality is likely to be disproportionately present in the beliefs about which they disagree with others. So the fact that you need to be charitable in assuming people have mostly true beliefs may not be practically useful—I’m sure a creationist rationally thinks water is wet, but if I’m arguing with him, that subject probably won’t come up as much as creationism.
That’s true, but I feel like a classic LW point can be made here: suppose it turns out some people can do magic. That might seem like a big change, but in fact magic will then just be subject to the same empirical investigation as everything else, and ultimately to the same integration into physical theory as everything else.
So while I agree with you that when we specify a topic, we can have broader disagreement, that disagreement is built on and made possible by very general agreement about everything else. Beliefs are holistic, not atomic, and we can’t partition them off while making any sense of them. We’re never just talking about some specific subject matter, but rather emphasizing some subject matter on the background of all our other beliefs (most of which must be true).
The thought, in short, is that beliefs are of a nature to be true, in the way dogs naturally have four legs. Some don’t, because something went wrong, but we can only understand the defect in these by having the basic nature of beliefs, namely truth, in the background.
That might seem like a big change, but in fact magic will then just be subject to the same empirical investigation as everything else, and ultimately to the same integration into physical theory as everything else.
That could be true but doesn’t have to be true. Our ontological assumptions might also turn out to be mistaken.
That could be true but doesn’t have to be true. Our ontological assumptions might also turn out to be mistaken.
True, and a discovery like that might require us to make some pretty fundamental changes. But I don’t think Morpheus could be right about the universe’s relation to math. No universe, I take it, ‘runs’ on math in anything but the loosest figurative sense. The universe we live in is subject to mathematical analysis, and what reason could we have for thinking any universe could fail to be so? I can’t say for certain, of course, that every possible universe must run on math, but I feel safe in claiming that we’ve never imagined a universe, in fiction or through something like religion, which would fail to run on math.
More broadly speaking, anything that is going to be knowable at all is going to be rational and subject to rational understanding. Even if someone has some very false beliefs, their beliefs are false, not just jibber-jabber (and if they are just jibber-jabber then you’re not talking to a person). Even false beliefs are going to have a rational structure.
I can’t say for certain, of course, that every possible universe must run on math, but I feel safe in claiming that we’ve never imagined a universe, in fiction or through something like religion, which would fail to run on math.
That is a fact about you, not a fact about the universe. Nobody could imagine light being both a particle and a wave, for example, until their study of nature forced them to.
People could imagine such a thing before studying nature showed they needed to; they just didn’t. I think there’s a difference between a concept that people only don’t imagine, and a concept that people can’t imagine. The latter may mean that the concept is incoherent or has an intrinsic flaw, which the former doesn’t.
People could imagine such a thing before studying nature showed they needed to; they just didn’t. I think there’s a difference between a concept that people only don’t imagine, and a concept that people can’t imagine.
In the interest of not having this discussion degenerate into an argument about what “could” means, I would like to point out that your and hen’s only evidence that you couldn’t imagine a world that doesn’t run on math is that you haven’t.
For one thing, “math” trivially happens to run on the world, and corresponds to what happens when you have a chain of interactions. Specifically, to how one chain of physical interactions (apples being eaten, for example) combined with another that looks dissimilar (a binary adder) ends up with the conclusion that the apples were counted correctly, or how the difference in count between the two processes of counting (none) corresponds to another dissimilar process (the reasoning behind binary arithmetic).
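To make the apples-and-adder correspondence concrete, here is a minimal sketch with a toy unary tally and a toy ripple-carry adder (both invented for illustration); two dissimilar processes end up agreeing about the count:

```python
# Two dissimilar processes that nonetheless "correspond":
#   1. counting apples by taking them out of a basket one at a time (a unary tally);
#   2. a ripple-carry binary adder built from bitwise logic.
# The baskets and the 8-bit width are arbitrary, chosen only for illustration.

def tally(basket):
    """Count items the 'physical' way: remove them one by one."""
    count = 0
    for _item in basket:
        count += 1
    return count

def binary_add(a, b, width=8):
    """Add two non-negative integers with an explicit ripple-carry adder."""
    result, carry = 0, 0
    for i in range(width):
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        result |= (bit_a ^ bit_b ^ carry) << i
        carry = (bit_a & bit_b) | (carry & (bit_a ^ bit_b))
    return result

basket_one = ["apple"] * 3
basket_two = ["apple"] * 5

# The difference in count between the two processes is none: they correspond.
assert binary_add(tally(basket_one), tally(basket_two)) == tally(basket_one + basket_two)
```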
As long as there are any correspondences at all between different physical processes, you’ll be able to kind of imagine that the world runs on the world arranged differently, and so it would appear that the world “runs on math”.
If we were to discover some new laws of physics that were producing incalculable outcomes, we would just utilize those laws in some sort of computer and co-opt them as part of “math”, substituting processes for equivalent processes. That’s how we came up with math in the first place.
edit: to summarize, I think “the world runs on math” is a really confused way to look at how the world relates to the practice of mathematics inside of it. I can perfectly well say that the world doesn’t run on math any more than radio waves are transmitted by a mechanical aether made of gears, springs, and weights, and have exactly the same expectations about everything.
It seems to me that as long as there’s anything that is describable in the loosest sense, that would be taken to be true.
I mean, look at this, some people believe literally that our universe is a “mathematical object”, whatever that means (tegmarkery), and we haven’t even got a candidate TOE that works.
edit: I think the issue is that Morpheus confuses “made of gears” with “predictable by gears”. Time is not made of gears, and neither are astronomical objects, but a clock is very useful nonetheless.
I don’t see why “describable” would necessarily imply “describable mathematically”. I can imagine a qualia-only universe, and I can imagine the ability to describe qualia. As things stand, there are a number of things that can’t be described mathematically.
What’s your evidence for this? Keep in mind that the history of science is full of people asserting that X has to be the case because they couldn’t imagine the world being otherwise, only for subsequent discoveries to show that X is not in fact the case.
Well, the most famous (or infamous) is Kant’s argument that space must be flat (in the Euclidean sense) because the human mind is incapable of imagining it to be otherwise.
Another example was Lucretius’s argument against the theory that the earth is round: if the earth were round and things fell towards its center, then in which direction would an object at the center fall?
Not to mention the standard argument against the universe having a beginning “what happened before it?”
I don’t intend to bicker, I think your point is a good one independently of these examples. In any case, I don’t think at least the first two of these are examples of the phenomenon you’re talking about.
Well, the most famous (or infamous) is Kant’s argument that space must be flat (in the Euclidean sense) because the human mind is incapable of imagining it to be otherwise.
I think this comes up in the sequences as an example of the mind-projection fallacy, but that’s not right. Kant did not take himself to be saying anything about the world outside the mind when he said that space was flat. He only took himself to be talking about the world as it appears to us. Space, so far as Kant was concerned, was part of the structure of perception, not the universe. So in the Critique of Pure Reason, he says:
...if we remove our own subject or even only the subjective constitution of the senses in general, then all constitution, all relations of objects in space and time, indeed space and time themselves would disappear, and as appearances they cannot exist in themselves, but only in us. What may be the case with objects in themselves and abstracted from all this receptivity of our sensibility remains entirely unknown to us. (A42/B59–60)
So Kant is pretty explicit that he’s not making a claim about the world, but about the way we perceive it. Kant would very likely poke you in the chest and say “No, you’re committing the mind-projection fallacy for thinking that space is even in the world, rather than just a form of perception. And don’t tell me about the mind-projection fallacy anyway, I invented that whole move.”
Another example was Lucretius’s argument against the theory that the earth is round: if the earth were round and things fell towards its center, then in which direction would an object at the center fall?
This also isn’t an example, because the idea of a spherical world had in fact been imagined in detail by Plato (with whom Lucretius seems to be arguing), Aristotle, and many of Lucretius’ contemporaries and predecessors. Lucretius’ point couldn’t have been that a round earth is unimaginable, but that it was inconsistent with an analysis of the motions of simple bodies in terms of up and down: you can’t say that fire is of a nature to go up if up is entirely relative. Or I suppose, you can say that but you’d have to come up with a more complicated account of natures.
Kant did not take himself to be saying anything about the world outside the mind when he said that space was flat. He only took himself to be talking about the world as it appears to us. Space, so far as Kant was concerned, was part of the structure of perception, not the universe.
And in particular he claimed that this showed it had to be Euclidean because humans couldn’t imagine it otherwise. Well, we now know it’s not Euclidean and people can imagine it that way (I suppose you could dispute this, but that gets into exactly what we mean by “imagine” and attempting to argue about other people’s qualia).
And in particular he claimed that this showed it had to be Euclidean because humans couldn’t imagine it otherwise.
No, he never says that. Feel free to cite something from Kant’s writing, or the SEP or something. I may be wrong, but I just read though the Aesthetic again, and I couldn’t find anything that would support your claim.
EDIT: I did find one passage that mentions imagination:
Space then is a necessary representation a priori, which serves for the foundation of all external intuitions. We never can imagine or make representation to ourselves of the non-existence of space, though we may easily enough think that no objects are found in it.
I’ve edited my post accordingly, but my point remains the same. Notice that Kant does not mention the flatness of space, nor is it at all obvious that he’s inferring anything from our inability to imagine the non-existence of space. END EDIT.
You gave Kant’s views about space as an example of someone saying ‘because we can’t imagine it otherwise, the world must be such and such’. Kant never says this. What he says is that the principles of geometry are not derived simply from the analysis of terms, nor are they empirical. Kant is very, very explicit (almost annoyingly repetitive) that he is not talking about the world, but about our perceptive faculties. And if indeed we cannot imagine x, that does seem to me to be a good basis from which to draw some conclusions about our perceptive faculties.
I have no idea what Kant would say about whether or not we can imagine non-Euclidian space (I have no idea myself if we can) but the matter is complicated because ‘imagination’ is a technical term in his philosophy. He thought space was an infinite Euclidian magnitude, but Euclidian geometry was the only game in town at the time.
Anyway he’s not a good example. As I said before, I don’t mean to dispute the point the example was meant to illustrate. I just wanted to point out that this is an incorrect view of Kant’s claims about space. It’s not really very important what he thought about space though.
There’s a difference between “can’t imagine” in a colloquial sense, and actual inability to imagine. There’s also a difference between not being able to think of how something fits into our knowledge about the universe (for instance, not being able to come up with a mechanism or not being able to see how the evidence supports it) and not being able to imagine the thing itself.
There also aren’t as many examples of this in the history of science as you probably think. Most of the examples that come to people’s minds involve scientists versus nonscientists.
I can’t say for certain, of course, that every possible universe must run on math, but I feel safe in claiming that we’ve never imagined a universe, in fiction or through something like religion, which would fail to run on math.
To which you replied that this is a fact about me, not the universe. But I explicitly say that its not a fact about the universe! My evidence for this is the only evidence that could be relevant: my experience with literature, science fiction, talking to people, etc.
Nor is it relevant that science is full of people that say that something has to be true because they can’t imagine the world otherwise. Again, I’m not making a claim about the world, I’m making a claim about the way we have imagined, or now imagine the world to be. I would be very happy to be pointed toward a hypothetical universe that isn’t subject to mathematical analysis and which contains thinking animals.
So before we go on, please tell me what you think I’m claiming? I don’t wish to defend any opinions but my own.
Hen, I told you how I imagine such a universe, and you told me I couldn’t be imagining it! Maybe you could undertake not to gainsay further hypotheses.
I found your suggestion to be implausible for two reasons: first, I don’t think the idea of epistemically significant qualia is defensible, and second, even on the condition that it is, I don’t think the idea of a universe of nothing but a single quale (one having epistemic significance) is defensible. Both of these points would take some time to work out, and it struck me in our last exchange that you had neither the patience nor the good will to do so, at least not with me. But I’d be happy to discuss the matter if you’re interested in hearing what I have to say.
So before we go on, please tell me what you think I’m claiming?
You said:
I just also think it’s a necessary fact.
I’m not sure what you mean by “necessary”, but the most obvious interpretation is that you think it’s necessarily impossible for the world to not be run by math or at least for humans to understand a world that doesn’t.
it’s [probably] impossible for humans to understand a world that [isn’t subject to mathematical analysis].
This is my claim, and here’s the thought: thinking things are natural, physical objects and they necessarily have some internal complexity. Further, thoughts have some basic complexity: I can’t engage in an inference with a single term.
Any universe which would not in principle be subject to mathematical analysis is a universe in which there is no quantity of anything. So it can’t, for example, involve any space or time, no energy or mass, no plurality of bodies, no forces, nothing like that. It admits of no analysis in terms of propositional logic, so Bayes is right out, as is any understanding of causality. This, it seems to me, would preclude the possibility of thought altogether. It may be that the world we live in is actually like that, and all its multiplicity is merely the contribution of our minds, so I won’t venture a claim about the world as such. So far as I know, the fact that worlds admit of mathematical analysis is a fact about thinking things, not worlds.
thinking things are natural, physical objects and they necessarily have some internal complexity. Further, thoughts have some basic complexity: I can’t engage in an inference with a single term.
What do you mean by “complexity”? I realize you have an intuitive idea, but it could very well be that your idea doesn’t make sense when applied to whatever the real universe is.
Any universe which would not in principle be subject to mathematical analysis is a universe in which there is no quantity of anything.
Um, that seems like a stretch. Just because some aspects of the universe are subject to mathematical analysis doesn’t necessarily mean the whole universe is.
What do you mean by “complexity”? I realize you have an intuitive idea, but it could very well be that your idea doesn’t make sense when applied to whatever the real universe is.
For my purposes, complexity is: involving (in the broadest sense of that word) more than one (in the broadest sense of that word) thing (in the broadest sense of that word). And remember, I’m not talking about the real universe, but about the universe as it appears to creatures capable of thinking.
Um, that seems like a stretch. Just because some aspects of the universe are subject to mathematical analysis doesn’t necessarily mean the whole universe is.
I think it does, if you’re granting me that such a world could be distinguished into parts. It doesn’t mean we could have the rich mathematical understanding of laws we do now, but that’s a higher bar than I’m talking about.
You can always “use” analysis; the issue is whether it gives you correct answers. It only gives you the correct answer if the universe obeys certain axioms.
Well, this gets us back to the topic that spawned this whole discussion: I’m not sure we can separate the question ‘can we use it’ from ‘does it give us true results’ with something like math. If I’m right that people always have mostly true beliefs, then when we’re talking about the more basic ways of thinking (not Aristotelian dynamics, but counting, arithmetic, etc.) the fact that we can use them is very good evidence that they mostly return true results. So if you’re right that you can always use, say, arithmetic, then I think we should conclude that a universe is always subject to analysis by arithmetic.
You may be totally wrong that you can always use these things, of course. But I think you’re probably right and I can’t make sense of any suggestion to the contrary that I’ve heard yet.
More broadly speaking, anything that is going to be knowable at all is going to be rational and subject to rational understanding.
The idea of rational understanding rests on the fact that you are separated from the object that you are trying to understand and the object itself doesn’t change if you change your understanding of it.
Then there’s the halting problem. There are a variety of problems that are NP. Those problems can’t be understood by doing a few experiments and then extrapolating general rules from your experiments.
I’m not quite firm with the mathematical terminology, but I think NP problems are not subject to things like calculus that are covered in what Wikipedia describes as Mathematical analysis.
Heinz von Förster makes the point that children have to be taught that “green” is not a valid answer to the question “What’s 2+2?”. I personally like his German book titled “Truth is the invention of a liar”. Heinz von Förster started the Biological Computer Laboratory in 1958 and came up with concepts like second-order cybernetics.
As far as fictional worlds go, Terry Pratchett’s Discworld runs on narrativium instead of math.
More broadly speaking, anything that is going to be knowable at all is going to be rational and subject to rational understanding.
That’s true as long as there are no revelations of truth by Gods or other magical processes. In a universe where you can get the truth through magical tarot reading, that assumption is false.
The idea of rational understanding rests on the fact that you are separated from the object that you are trying to understand and the object itself doesn’t change if you change your understanding of it.
That’s not obvious to me. Why do you think this?
That’s true as long as there are no revelations of truth by Gods or other magical processes. In a universe where you can get the truth through magical tarot reading, that assumption is false.
I also don’t understand this inference. Why do you think revelations of truth by Gods or other magical processes, or tarot readings, mean that such a universe would a) be knowable, and b) not be subject to rational analysis?
It might depend a bit on what you mean by rationality. You lose objectivity.
Let’s say I’m hypnotizing someone. I’m in a deep state of rapport. That means my emotional state matters a great deal. If I label something that the person I’m talking to does as unsuccessful, anxiety rises in me. That anxiety will screw with the result I want to achieve. I’m better off if I blank my mind instead of engaging in rational analysis of what I’m doing.
I also don’t understand this inference. Why do you think revelations of truth by Gods or other magical processes, or tarot readings, mean that such a universe would a) be knowable, and b) not be subject to rational analysis?
Logically A → B is not the same thing as B → A.
I said that it’s possible for there to be knowledge that you can only get through a process besides rational analysis if you allow “magic”.
If I label something that the person I’m talking to does as unsuccessful, anxiety rises in me. That anxiety will screw with the result I want to achieve.
I’m a little lost. So do you think these observations challenge the idea that in order to understand anyone, we need to assume they’ve got mostly true beliefs, and make mostly rational inferences?
It’s not my phrase, and I don’t particularly like it myself. If you’re asking whether or not qualia are quanta, then I guess the answer is no, but in the sense that the measured is not the measure. It’s a triviality that I can ask you how much pain you feel on a scale of 1-10, and get back a useful answer. I can’t get at what the experience of pain itself is with a number or whatever, but then, I can’t get at what the reality of a block of wood is with a ruler either.
I rather think I do. If you told me you could imagine a euclidian triangle with more or less than 180 internal degrees, I would rightly say ‘No you can’t’. It’s simply not true that we can imagine or conceive of anything we can put into (or appear to put into) words. And I don’t think it’s possible to imagine away things like space and time and keep hold of the idea that you’re imagining a universe, or an experience, or anything like that. Time especially, and so long as I have time, I have quantity.
I don’t know where you are getting your facts from, but it is well known that people’s abilities at visualization vary considerably, so where’s the “we”?
Having studied non-Euclidean geometry, I can easily imagine a triangle whose angles sum to more than 180 (hint: it’s inscribed on the surface of a sphere).
Saying that non-spatial or non-temporal universes aren’t really universes is a No True Scotsman fallacy.
Non-spatial and non-temporal models have been seriously proposed by physicists; perhaps you should talk to them.
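For concreteness, a quick numerical check of the spherical-triangle example above, assuming a unit sphere and the “octant” triangle (the north pole plus two points on the equator 90 degrees apart):

```python
# The angle sum of the "octant" triangle on a unit sphere:
# vertices at the north pole and at two points on the equator 90 degrees apart.
import math

def angle_at(vertex, p, q):
    """Angle at `vertex` between the great-circle arcs running toward p and q."""
    def arc_direction(toward):
        # Component of `toward` orthogonal to `vertex`: the arc's initial direction.
        dot = sum(v * t for v, t in zip(vertex, toward))
        vec = [t - dot * v for v, t in zip(vertex, toward)]
        norm = math.sqrt(sum(c * c for c in vec))
        return [c / norm for c in vec]
    u, w = arc_direction(p), arc_direction(q)
    cos_angle = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, w))))
    return math.degrees(math.acos(cos_angle))

pole = (0.0, 0.0, 1.0)
equator_a = (1.0, 0.0, 0.0)
equator_b = (0.0, 1.0, 0.0)  # 90 degrees along the equator from equator_a

angles = [angle_at(pole, equator_a, equator_b),
          angle_at(equator_a, pole, equator_b),
          angle_at(equator_b, pole, equator_a)]
print(angles, sum(angles))  # three right angles, summing to 270 degrees
```

Each of the three angles is a right angle, so the sum is 270 degrees rather than 180.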
It depends on what you mean by “imagine”. I can’t imagine a Euclidian triangle with less than 180 degrees in the sense of having a visual representation in my mind that I could then reproduce on a piece of paper. On the other hand, I can certainly imagine someone holding up a measuring device to a vague figure on a piece of paper and saying “hey, I don’t get 180 degrees when I measure this”.
Of course, you could say that the second one doesn’t count since you’re not “really” imagining a triangle unless you imagine a visual representation, but if you’re going to say that you need to remember that all nontrivial attempts to imagine things don’t include as much detail as the real thing. How are you going to define it so that eliminating some details is okay and eliminating other details isn’t?
(And if you try that, then explain why you can’t imagine a triangle whose angles add up to 180.05 degrees or some other amount that is not 180 but is close enough that you wouldn’t be able to tell the difference in a mental image. And then ask yourself “can I imagine someone writing a proof that a Euclidian triangle’s angles don’t add up to 180 degrees?” without denying that you can imagine people writing proofs at all.)
These are good questions, and I think my general answer is this: in the context of this and similar arguments, being able to imagine something is sometimes taken as evidence that it’s at least a logical possibility. I’m fine with that, but it needs to be imagined in enough detail to capture the logical structure of the relevant possibility. If someone is going to argue, for example, that one can imagine a euclidian triangle with more or less than 180 internal degrees, the imagined state of affairs must have at least as much logical detail as does a euclidian triangle with 180 internal degrees. Will that exclude your ‘vague shape’ example, and probably your ‘proof’ example?
Will that exclude your ‘vague shape’ example, and probably your ‘proof’ example?
It would exclude the vague shape example but I think it fails for the proof example.
Your reasoning suggests that if X is false, it would be impossible for me to imagine someone proving X. I think that is contrary to what most people mean when they say they can imagine something.
It’s not clear what your reasoning implies when X is true. Either
1) I cannot imagine someone proving X unless I can imagine all the steps in the proof, or
2) I can imagine someone proving X as long as X is true, since having a proof would be a logical possibility as long as X is true.
1) is also contrary to what most people think of as imagining. 2) would mean that it is possible me to not know whether or not I am imagining something. (I imagine someone proving X and I don’t know if X is true. 2) means that if X is true I’m “really imagining” it and that if X is false, I am not.)
Your reasoning suggests that if X is false, it would be impossible for me to imagine someone proving X. I think that is contrary to what most people mean when they say they can imagine something.
Well, say I argue that it’s impossible to write a story about a bat. It seems like it should be unconvincing for you to say ‘But I can imagine someone writing a story about a bat...see, I’m imagining Tom, who’s just written a story about a bat.’ Instead, you’d need to imagine the story itself. I don’t intend to talk about the nature of the imagination here, only to say that as a rule, showing that something is logically possible by way of imagining it requires that it have enough logical granularity to answer the challenge.
So I don’t doubt that you could imagine someone proving that E-triangles have more than 180 internal degrees, but I am saying that not all imaginings are contenders in an argument about logical possibility. Only those ones which have sufficient logical granularity do.
I would understand “I can imagine...” in such a context to mean that it doesn’t contain flaws that are basic enough to prevent me from coming up with a mental picture or short description. Not that it doesn’t contain any flaws at all. It wouldn’t make sense to have “I can imagine X” mean “there are no flaws in X”—that would make “I can imagine X” equivalent to just asserting X.
The issue isn’t flaws or flawlessness. In my bat example, you could perfectly imagine Tom sitting in an easy chair with a glass of scotch saying to himself, ‘I’m glad I wrote that story about the bat’. But that wouldn’t help. I never said it’s impossible for Tom to sit in a chair and say that, I said that it was impossible to write a story about a bat.
The issue isn’t logical detail simpliciter, but logical detail relative to the purported impossibility. In the triangle case, you have to imagine, not Tom sitting in his chair thinking ‘I’m glad I proved that E-triangles have more than 180 internal degrees’ (no one could deny that that is possible) but rather the figure itself. It can be otherwise as vague and flawed as you like, so long as the relevant bits are there. Very likely, imagining the proof in the relevant way would require producing it.
And you are asserting something, you’re asserting the possibility of something in virtue of the fact that it is in some sense actual. To say that something is logically impossible is to say that it can’t exist anywhere, ever, not even in a fantasy. To imagine up that possibility is to make it sufficiently real to refute the claim of impossibility, but only if you imagine, and thus make real, the precise thing being claimed to be impossible.
Are you sure it is logically impossible to have [spaceless] and timeless universes?
Dear me no! I have no idea if such a universe is impossible. I’m not even terribly confident that this universe has space or time.
I am pretty sure that space and time (or something like them) are a necessary condition on experience, however. Maybe they’re just in our heads, but it’s nevertheless necessary that they, or something like them, be in our heads. Maybe some other kind of creature thinks in terms of space, time, and fleegle, or just fleegle, time, and blop, or just blop and nizz. But I’m confident that such things will all have some common features, namely being something like a context for a multiplicity. I mean in the way time is a context for seeing this, followed by that, and space is a context for seeing this in that in some relation, etc.
Without something like this, it seems to me experience would always (except there’s no time) only be of one (except an idea of number would never come up) thing, in which case it wouldn’t be rich enough to be an experience. Or experience would be of nothing, but that’s the same problem.
So there might be universes of nothing but qualia (or, really, quale) but it wouldn’t be a universe in which there are any experiencing or thinking things. And if that’s so, the whole business is a bit incoherent, since we need an experiencer to have a quale.
we’ve never imagined a universe, in fiction or through something like religion, which would fail to run on math.
That depends on your definition of “math”.
For example, consider a simulated world where you control the code. Can you make it so that 2+2 in that simulation is sometimes 4, sometimes 15, and sometimes green? I don’t see why not.
For example, consider a simulated world where you control the code. Can you make it so that 2+2 in that simulation is sometimes 4, sometimes 15, and sometimes green? I don’t see why not.
I think you’re conflating the physical operation that we correlate with addition and the mathematical structure. ‘Green’ I’m not seeing, but I could write a computer program modeling a universe in which placing a pair of stones in a container that previously held a pair of stones does not always lead to that container holding a quadruplet of stones. In such a universe, the mathematical structure we call ‘addition’ would not be useful, but that doesn’t say that the formalized reasoning structure we call ‘math’ would not exist, or could not be employed.
(In fact, if it’s a computer program, it is obvious that its nature is susceptible to mathematical analysis.)
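A minimal sketch of such a program, with an arbitrary made-up rule for how containers of stones combine; the point is only that the simulated “physics” need not respect addition even though the program itself remains an ordinary mathematical object:

```python
# A toy "universe" in which pouring two containers of stones together does not
# behave like addition. The combination rule is arbitrary and invented solely
# to illustrate the point.
import random

def combine(container_a, container_b, rng):
    """Pour two containers together under this universe's non-additive 'physics'."""
    merged = container_a + container_b
    roll = rng.random()
    if roll < 0.5:
        return merged                    # sometimes stones behave as we expect: 2 + 2 -> 4
    elif roll < 0.8:
        return merged + ["stone"] * 11   # sometimes extra stones appear: 2 + 2 -> 15
    else:
        return ["green"]                 # and sometimes you just get... green

rng = random.Random(0)
pair_of_stones = ["stone", "stone"]
for _ in range(5):
    print(combine(pair_of_stones, pair_of_stones, rng))

# Inside the simulation, counting stones is unreliable; yet the program that produces
# this behavior is itself straightforwardly subject to mathematical analysis.
```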
For example, consider a simulated world where you control the code. Can you make it so that 2+2 in that simulation is sometimes 4, sometimes 15, and sometimes green?
I guess I could make it appear that way, sure, though I don’t know if I could then recognize anything in my simulation as thinking or doing math. But in any case, that’s not a universe in which 2+2=green, it’s a universe in which it appears to. Maybe I’m just not being imaginative enough, and so you may need to help me flesh out the hypothetical.
But it sounds to me like you’re talking about the manipulation of signs, not about numbers themselves. We could make the set of signs ‘2+2=’ end any way we like, but that doesn’t mean we’re talking about numbers. I donno, I think you’re being too cryptic or technical or something for me, I don’t really understand the point you’re trying to make.
Math is what happens when you take your original working predictive toolkit (like counting sheep) and let it run on human wetware disconnected from its original goal of having to predict observables. Thus some form of math would arise in any somewhat-predictable universe evolving a calculational substrate.
Math is what happens when you take your original working predictive toolkit (like counting sheep) and let it run on human wetware disconnected from its original goal of having to predict observables.
That’s an interesting problem. Do we have math because we make abstractions about the multitude of things around us, or must we already have some idea of math in the abstract just to recognize the multitude as a multitude? But I think I agree with the gist of what you’re saying.
Just like I think of language as meta-grunting, I think of math as meta-counting. Some animals can count, and possibly add and subtract a bit, but abstracting it away from the application for the fun of it is what humans do.
Mixing truth and rationality is a failure mode. To know whether someone’s statement is true, you have to understand it, and to understand it, you have to assume the speaker’s rationality.
It’s also a failure mode to attach “irrational” directly to beliefs. A belief is rational if it can be supported by an argument, and you don’t carry the space of all possible arguments around in your head.
(1) a belief is rational if it can be supported by a sound argument
(2) a belief is rational if it can be supported by a valid argument with probable premises
(3) a belief is rational if it can be supported by an inductively strong argument with plausible premises
(4) a belief is rational if it can be supported by an argument that is better than any counterarguments the agent knows of
etc...
Although personally, I think it is more helpful to think of rationality as having to do with how beliefs cohere with other beliefs and about how beliefs change when new information comes in than about any particular belief taken in isolation.
I can’t but note that the word “reality” is conspicuously absent here...
Arguments of type (1) necessarily track reality (it is pretty much defined this way), (2) may or may not depending on the quality of the premises, (3) often does, and sometimes you just can’t do any better than (4) with available information and corrupted hardware.
Just because I didn’t use the word “reality” doesn’t really mean much.
A definition of “rational argument” that explicitly referred to “reality” would be a lot less useful, since checking which arguments are rational is one of the steps in figuring out what’s real.
checking which arguments are rational is one of the steps in figuring out what’s real
I am not sure this is (necessarily) the case, can you unroll?
Generally speaking, arguments live in the map and, in particular, in high-level maps which involve abstract concepts and reasoning. If I check the reality of the stone by kicking it and seeing if my toe hurts, no arguments are involved. And from the other side, classical logic is very much part of “rational arguments” and yet need not correspond to reality.
If I check the reality of the stone by kicking it and seeing if my toe hurts, no arguments are involved.
That tends to work less well for things that one can’t directly observe, e.g., how old is the universe, or things where there is confounding noise, e.g., does this drug help.
If you would more reliably understand what people mean by specifically treating it as the product of a rational and intelligent person, then executing that hack should lead to your observing a much higher rate of rationality and intelligence in discussions than you would previously have predicted. If the thesis is true, many remarks which, using your earlier methodology, you would have dismissed as the product of diseased reasoning will prove to be sound upon further inquiry.
If, however, you execute the hack for a few months and discover no change in the rate at which you discover apparently-wrong remarks to admit to sound interpretations, then TheAncientGeek’s thesis would fail the test.
True, although being told less often that you are missing the point isn’t, in and of itself, all that valuable; the value is in getting the point of those who otherwise would have given up on you with a remark along those lines.
(Note that I say “less often”; I was recently told that this criticism of Tom Godwin’s “The Cold Equations”, which I had invoked in a discussion of “The Ones Who Walk Away From Omelas”, missed the point of the story—to which I replied along the lines of, “I get the point, but I don’t agree with it.”)
That looks like a test of my personal ability to form correct first-impression estimates.
Also “will prove to be sound upon further inquiry” is an iffy part. In practice what usually happens is that statement X turns out to be technically true only under conditions A, B, and C, however in practice there is the effect Y which counterbalances X and the implementation of X is impractical for a variety of reasons, anyway. So, um, was statement X “sound”? X-/
That looks like a test of my personal ability to form correct first-impression estimates.
Precisely.
Also “will prove to be sound upon further inquiry” is an iffy part. In practice what usually happens is that statement X turns out to be technically true only under conditions A, B, and C, however in practice there is the effect Y which counterbalances X and the implementation of X is impractical for a variety of reasons, anyway. So, um, was statement X “sound”? X-/
Ah, I see. “Sound” is not the right word for what I mean; what I would expect to occur if the thesis is correct is that statements will prove to be apposite or relevant or useful—that is to say, valuable contributions in the context within which they were uttered. In the case of X, this would hold if the person proposing X believed that those conditions applied in the case described.
A concrete example would be someone who said, “you can divide by zero here” in reaction to someone being confused by a definition of the derivative of a function in terms of the limit of a ratio.
Because you are not engaged in establishing facts about how smart someone is, you are instead trying to establish facts about what they mean by what they say.
I think you are talking about what’s in local parlance is called a “weak prior” vs a “strong prior”. Bayesian updating involves assigning relative importance the the prior and to the evidence. A weak prior is easily changed by even not very significant evidence. On the other hand, it takes a lot of solid evidence to move a strong prior.
In this terminology, your pre-roll estimation of the probability of double sixes is a weak prior—the evidence of an actual roll will totally overwhelm it. But your estimation of the correctness of the modern evolutionary theory is a strong prior—it will take much convincing evidence to persuade you that the theory is not correct after all.
Of course, the posterior of a previous update becomes the prior of the next update.
Using this language, then, you are saying that prima facie evidence of someone’s stupidity should be a minor update to the strong prior that she is actually a smart, reasonable, and coherent human being.
And I don’t see why this should be so.
Oh, dear—that’s not what I meant at all. I meant that—absent a strong prior—the utterance of a prima facie absurdity should not create a strong prior that the speaker is stupid, unreasonable, or incoherent. It’s entirely possible that ten minutes of conversation will suffice to make a strong prior out of this weaker one—there’s someone arguing for dualism on a webcomic forum I (in)frequent along the same lines as Chalmers “hard problem of consciousness”, and it took less than ten posts to establish pretty confidently that the same refutations would apply—but as the history of DIPS (defense-independent pitching statistics) shows, it’s entirely possible for an idea to be as correct as “the earth is a sphere, not a plane” and nevertheless be taken as prima facie absurd.
(As the metaphor implies, DIPS is not quite correct, but it would be more accurate to describe its successors as “fixing DIPS” than as “showing that DIPS was completely wrongheaded”.)
Oh, I agree with that.
What I am saying is that evidence of stupidity should lead you to raise your estimates of the probability that the speaker is stupid. The principle of charity should not prevent that from happening. Of course evidence of stupidity should not make you close the case, declare someone irretrievably stupid, and stop considering any further evidence.
As an aside, I treat how a person argues as a much better indicator of stupidity than what he argues. YMMV, of course.
...in the context during which they exhibited the behavior which generated said evidence, of course. In broader contexts, or other contexts? To a much lesser extent, and not (usually) strongly in the strong-prior sense, but again, yes. That you should always be capable of considering further evidence is—I am glad to say—so universally accepted a proposition in this forum that I do not bother to enunciate it, but I take no issue with drawing conclusions from a sufficient body of evidence.
Come to think, you might be amused by this fictional dialogue about a mendacious former politician, illustrating the ridiculousness of conflating “never assume that someone is arguing in bad faith” and “never assert that someone is arguing in bad faith”. (The author also posted a sequel, if you enjoy the first.)
I’m afraid that I would have about as much luck barking like a duck as enunciating how I evaluate the intelligence (or reasonableness, or honesty, or...) of those I converse with. YMMV, indeed.
People tend to update too much in these circumstances: Fundamental attribution error
The fundamental attribution error is about underestimating the importance of external drivers (the particular situation, random chance, etc.) and overestimating the importance of internal factors (personality, beliefs, etc.) as an explanation for observed actions.
If a person in a discussion is spewing nonsense, it is rare that external factors are making her do it (other than a variety of mind-altering chemicals). The indicators of stupidity are NOT what position a person argues or how much knowledge about the subject does she has—it’s how she does it. And inability e.g. to follow basic logic is hard to attribute to external factors.
This discussion has got badly derailed. You are taking it that there is some robust fact about someones lack of lrationality or intelligence which may or may not be explained by internal or external factors.
The point is that you cannot make a reliable judgement about someone’s rationality or intelligence unless you have understood that they are saying,....and you cannot reliably understand what they ares saying unl ess you treat it as if it were the product of a rational and intelligent person. You can go to “stupid”when all attempts have failed, but not before.
I disagree, I don’t think this is true.
I think it’s true, on roughly these grounds: taking yourself to understand what someone is saying entails thinking that almost all of their beliefs (I mean ‘belief’ in the broad sense, so as to include my beliefs about the colors of objects in the room) are true. The reason is that unless you assume almost all of a person’s (relevant) beliefs are true, the possibility space for judgements about what they mean gets very big, very fast. So if ‘generally understanding what someone is telling you’ means having a fairly limited possibility space, you only get this on the assumption that the person talking to you has mostly true beliefs. This, of course, doesn’t mean they have to be rational in the LW sense, or even very intelligent. The most stupid and irrational (in the LW sense) of us still have mostly true beliefs.
I guess the trick is to imagine what it would be to talk to someone who you thought had on the whole false beliefs. Suppose they said ‘pass me the hammer’. What do you think they meant by that? Assuming they have mostly or all false beliefs relevant to the utterance, they don’t know what a hammer is or what ‘passing’ involves. They don’t know anything about what’s in the room, or who you are, or what you are, or even if they took themselves to be talking to you, or talking at all. The possibility space for what they took themselves to be saying is too large to manage, much larger than, for example, the possibility space including all and only every utterance and thought that’s ever been had by anyone. We can say things like ‘they may have thought they were talking about cats or black holes or triangles’ but even that assumes vastly more truth and reason in the person that we’ve assumed we can anticipate.
Generally speaking, understanding what a person means implies reconstructing their framework of meaning and reference that exists in their mind as the context to what they said.
Reconstructing such a framework does NOT require that you consider it (or the whole person) sane or rational.
Well, there are two questions here: 1) is it in principle necessary to assume your interlocutors are sane and rational, and 2) is it as a matter of practical necessity a fact that we always do assume our interlocutors are sane and rational. I’m not sure about the first one, but I am pretty sure about the second: the possibility space for reconstructing the meaning of someone speaking to you is only manageable if you assume that they’re broadly sane, rational, and have mostly true beliefs. I’d be interested to know which of these you’re arguing about.
Also, we should probably taboo ‘sane’ and ‘rational’. People around here have a tendency to use these words in an exaggerated way to mean that someone has a kind of specific training in probability theory, statistics, biases, etc. Obviously people who have none of these things, like people living thousands of years ago, were sane and rational in the conventional sense of these terms, and they had mostly true beliefs even by any standard we would apply today.
The answers to your questions are no and no.
I don’t think so. Two counter-examples:
I can discuss fine points of theology with someone without believing in God. For example, I can understand the meaning of the phrase “Jesus’ self-sacrifice washes away the original sin” without accepting that Christianity is “mostly true” or “rational”.
Consider a psychotherapist talking to a patient, let’s say a delusional one. Understanding the delusion does not require the psychotherapist to believe that the patient is sane.
You’re not being imaginative enough: you’re thinking about someone with almost all true beliefs (including true beliefs about what Christians tend to say), and a couple of sort of stand out false beliefs about how the universe works as a whole. I want you to imagine talking to someone with mostly false beliefs about the subject at hand. So you can’t assume that by ‘Jesus’ self-sacrifice washes away the original sin’ that they’re talking about anything you know anything about because you can’t assume they are connecting with any theology you’ve ever heard of. Or even that they’re talking about theology. Or even objects or events in any sense you’re familiar with.
I think, again, delusional people are remarkable for having some unaccountably false beliefs, not for having mostly false beliefs. People with mostly false beliefs, I think, wouldn’t be recognizable even as being conscious or aware of their surroundings (because they’re not!).
So why are we talking about them, then?
Well, my point is that as a matter of course, you assume everyone you talk to has mostly true beliefs, and for the most part thinks rationally. We’re talking about ‘people’ with mostly or all false beliefs just to show that we don’t have any experience with such creatures.
Bigger picture: the principle of charity, that is the assumption that whoever you are talking to is mostly right and mostly rational, isn’t something you ought to hold, it’s something you have no choice but to hold. The principle of charity is a precondition on understanding anyone at all, even recognizing that they have a mind.
People will have mostly true beliefs, but they might not have true beliefs in the areas under discussion. For obvious reasons, people’s irrationality is likely to be disproportionately present in the beliefs about which they disagree with others. So the fact that you need to be charitable in assuming people have mostly true beliefs may not be practically useful—I’m sure a creationist rationally thinks water is wet, but if I’m arguing with him, that subject probably won’t come up as much as creationism.
That’s true, but I feel like a classic LW point can be made here: suppose it turns out some people can do magic. That might seem like a big change, but in fact magic will then just be subject to the same empirical investigation as everything else, and ultimately the same integration into physical theory as everything else.
So while I agree with you that when we specify a topic, we can have broader disagreement, that disagreement is built on and made possible by very general agreement about everything else. Beliefs are holistic, not atomic, and we can’t partition them off while making any sense of them. We’re never just talking about some specific subject matter, but rather emphasizing some subject matter on the background of all our other beliefs (most of which must be true).
The thought, in short, is that beliefs are of a nature to be true, in the way dogs naturally have four legs. Some don’t, because something went wrong, but we can only understand the defect in these by having the basic nature of beliefs, namely truth, in the background.
That could be true but doesn’t have to be true. Our ontological assumptions might also turn out to be mistaken.
To quote Eliezer:
True, and a discovery like that might require us to make some pretty fundamental changes. But I don’t think Morpheus could be right about the universe’s relation to math. No universe, I take it, ‘runs’ on math in anything but the loosest figurative sense. The universe we live in is subject to mathematical analysis, and what reason could we have for thinking any universe could fail to be so? I can’t say for certain, of course, that every possible universe must run on math, but I feel safe in claiming that we’ve never imagined a universe, in fiction or through something like religion, which would fail to run on math.
More broadly speaking, anything that is going to be knowable at all is going to be rational and subject to rational understanding. Even if someone has some very false beliefs, their beliefs are false, not just jibber-jabber (and if they are just jibber-jabber then you’re not talking to a person). Even false beliefs are going to have a rational structure.
That is a fact about you, not a fact about the universe. Nobody could imagine light being both a particle and a wave, for example, until their study of nature forced them to.
People could imagine such a thing before studying nature showed they needed to; they just didn’t. I think there’s a difference between a concept that people only don’t imagine, and a concept that people can’t imagine. The latter may mean that the concept is incoherent or has an intrinsic flaw, which the former doesn’t.
In the interest of not having this discussion degenerate into an argument about what “could” means, I would like to point out that your and hen’s only evidence that you couldn’t imagine a world that doesn’t run on math is that you haven’t.
For one thing, “math” trivially happens to run on the world, and corresponds to what happens when you have a chain of interactions. Specifically, to how one chain of physical interactions (apples being eaten, for example) combined with another that looks dissimilar (a binary adder) ends up with the conclusion that the apples were counted correctly, or how the difference in count between the two processes of counting (none) corresponds to another dissimilar process (the reasoning behind binary arithmetic).
As long as there are any correspondences at all between different physical processes, you’ll be able to kind of imagine that the world runs on the world arranged differently, and so it will appear that the world “runs on math”.
If we were to discover some new laws of physics that were producing incalculable outcomes, we would just utilize those laws in some sort of computer and co-opt them as part of “math”, substituting processes for equivalent processes. That’s how we came up with math in the first place.
edit: to summarize, I think “the world runs on math” is a really confused way to look at how the world relates to the practice of mathematics inside of it. I can perfectly well say that the world doesn’t run on math any more than radio waves are transmitted by a mechanical aether made of gears, springs, and weights, and have exactly the same expectations about everything.
“There is a non-trivial subset of maths which describes physical law” might be a better way of stating it.
It seems to me that as long as there’s anything that is describable in the loosest sense, that would be taken to be true.
I mean, look at this: some people believe literally that our universe is a “mathematical object”, whatever that means (Tegmarkery), and we haven’t even got a candidate TOE that works.
edit: I think the issue is that Morpheus confuses “made of gears” with “predictable by gears”. Time is not made of gears, and neither are astronomical objects, but a clock is very useful nonetheless.
I don’t see why “describable” would necessarily imply “describable mathematically”. I can imagine a qualia-only universe, and I can imagine the ability to describe qualia. As things stand, there are a number of things that can’t be described mathematically.
Example?
Qualia, the passage of time, symbol grounding..
Absolutely, it’s a fact about me, that’s my point. I just also think it’s a necessary fact.
What’s your evidence for this? Keep in mind that the history of science is full of people asserting that X has to be the case because they couldn’t imagine the world being otherwise, only for subsequent discoveries to show that X is not in fact the case.
Name three (as people often say around here).
Well, the most famous (or infamous) is Kant’s argument that space must be flat (in the Euclidean sense) because the human mind is incapable of imagining it to be otherwise.
Another example was Lucretius’s argument against the theory that the earth is round: if the earth were round and things fell towards its center, then in which direction would an object at the center fall?
Not to mention the standard argument against the universe having a beginning “what happened before it?”
I don’t intend to bicker, I think your point is a good one independently of these examples. In any case, I don’t think at least the first two of these are examples of the phenomenon you’re talking about.
I think this comes up in the sequences as an example of the mind-projection fallacy, but that’s not right. Kant did not take himself to be saying anything about the world outside the mind when he said that space was flat. He only took himself to be talking about the world as it appears to us. Space, so far as Kant was concerned, was part of the structure of perception, not the universe. So in the Critique of Pure Reason, he says:
So Kant is pretty explicit that he’s not making a claim about the world, but about the way we perceive it. Kant would very likely poke you in the chest and say “No, you’re committing the mind-projection fallacy for thinking that space is even in the world, rather than just a form of perception. And don’t tell me about the mind-projection fallacy anyway, I invented that whole move.”
This also isn’t an example, because the idea of a spherical world had in fact been imagined in detail by Plato (with whom Lucretius seems to be arguing), Aristotle, and many of Lucretius’ contemporaries and predecessors. Lucretius’ point couldn’t have been that a round earth is unimaginable, but that it was inconsistent with an analysis of the motions of simple bodies in terms of up and down: you can’t say that fire is of a nature to go up if up is entirely relative. Or I suppose, you can say that but you’d have to come up with a more complicated account of natures.
And in particular he claimed that this showed it had to be Euclidean because humans couldn’t imagine it otherwise. Well, we now know it’s not Euclidean and people can imagine it that way (I suppose you could dispute this, but that gets into exactly what we mean by “imagine” and attempting to argue about other people’s qualia).
No, he never says that. Feel free to cite something from Kant’s writing, or the SEP or something. I may be wrong, but I just read though the Aesthetic again, and I couldn’t find anything that would support your claim.
EDIT: I did find one passage that mentions imagination:
I’ve edited my post accordingly, but my point remains the same. Notice that Kant does not mention the flatness of space, nor is it at all obvious that he’s inferring anything from our inability to imagine the non-existence of space. END EDIT.
You gave Kant’s views about space as an example of someone saying ‘because we can’t imagine it otherwise, the world must be such and such’. Kant never says this. What he says is that the principles of geometry are not derived simply from the analysis of terms, nor are they empirical. Kant is very, very, explicit...almost annoyingly repetitive, that he is not talking about the world, but about our perceptive faculties. And if indeed we cannot imagine x, that does seem to me to be a good basis from which to draw some conclusions about our perceptive faculties.
I have no idea what Kant would say about whether or not we can imagine non-Euclidian space (I have no idea myself if we can) but the matter is complicated because ‘imagination’ is a technical term in his philosophy. He thought space was an infinite Euclidian magnitude, but Euclidian geometry was the only game in town at the time.
Anyway he’s not a good example. As I said before, I don’t mean to dispute the point the example was meant to illustrate. I just wanted to point out that this is an incorrect view of Kant’s claims about space. It’s not really very important what he thought about space though.
There’s a difference between “can’t imagine” in a colloquial sense, and actual inability to imagine. There’s also a difference between not being able to think of how something fits into our knowledge about the universe (for instance, not being able to come up with a mechanism or not being able to see how the evidence supports it) and not being able to imagine the thing itself.
There also aren’t as many examples of this in the history of science as you probably think. Most of the examples that come to people’s minds involve scientists versus non-scientists.
See my reply to army above.
Hold on now, you’re pattern matching me. I said:
To which you replied that this is a fact about me, not the universe. But I explicitly say that its not a fact about the universe! My evidence for this is the only evidence that could be relevant: my experience with literature, science fiction, talking to people, etc.
Nor is it relevant that science is full of people that say that something has to be true because they can’t imagine the world otherwise. Again, I’m not making a claim about the world, I’m making a claim about the way we have imagined, or now imagine the world to be. I would be very happy to be pointed toward a hypothetical universe that isn’t subject to mathematical analysis and which contains thinking animals.
So before we go on, please tell me what you think I’m claiming? I don’t wish to defend any opinions but my own.
Hen, I told you how I imagine such a universe, and you told me I couldn’t be imagining it! Maybe you could undertake not to gainsay further hypotheses.
I found your suggestion to be implausible for two reasons: first, I don’t think the idea of epistemically significant qualia is defensible, and second, even on the condition that it is, I don’t think the idea of a universe of nothing but a single quale (one having epistemic significance) is defensible. Both of these points would take some time to work out, and it struck me in our last exchange that you had neither the patience nor the good will to do so, at least not with me. But I’d be happy to discuss the matter if you’re interested in hearing what I have to say.
You said:
I’m not sure what you mean by “necessary”, but the most obvious interpretation is that you think it’s necessarily impossible for the world to not be run by math or at least for humans to understand a world that doesn’t.
This is my claim, and here’s the thought: thinking things are natural, physical objects and they necessarily have some internal complexity. Further, thoughts have some basic complexity: I can’t engage in an inference with a single term.
Any universe which would not in principle be subject to mathematical analysis is a universe in which there is no quantity of anything. So it can’t, for example, involve any space or time, no energy or mass, no plurality of bodies, no forces, nothing like that. It admits of no analysis in terms of propositional logic, so Bayes is right out, as is any understanding of causality. This, it seems to me, would preclude the possibility of thought altogether. It may be that the world we live in is actually like that, and all its multiplicity is merely the contribution of our minds, so I won’t venture a claim about the world as such. So far as I know, the fact that worlds admit of mathematical analysis is a fact about thinking things, not worlds.
What do you mean by “complexity”? I realize you have an intuitive idea, but it could very well be that your idea doesn’t make sense when applied to whatever the real universe is.
Um, that seems like a stretch. Just because some aspects of the universe are subject to mathematical analysis doesn’t necessarily mean the whole universe is.
For my purposes, complexity is: involving (in the broadest sense of that word) more than one (in the broadest sense of that word) thing (in the broadest sense of that word). And remember, I’m not talking about the real universe, but about the universe as it appears to creatures capable of thinking.
I think it does, if you’re granting me that such a world could be distinguished into parts. It doesn’t mean we could have the rich mathematical understanding of laws we do now, but that’s a higher bar than I’m talking about.
You can always “use” analysis; the issue is whether it gives you correct answers. It only gives you the correct answer if the universe obeys certain axioms.
Well, this gets us back to the topic that spawned this whole discussion: I’m not sure we can separate the question ‘can we use it’ from ‘does it give us true results’ with something like math. If I’m right that people always have mostly true beliefs, then when we’re talking about the more basic ways of thinking (not Aristotelian dynamics, but counting, arithmetic, etc.) the fact that we can use them is very good evidence that they mostly return true results. So if you’re right that you can always use, say, arithmetic, then I think we should conclude that a universe is always subject to analysis by arithmetic.
You may be totally wrong that you can always use these things, of course. But I think you’re probably right and I can’t make sense of any suggestion to the contrary that I’ve heard yet.
One could mathematically describe things not analysable by arithmetic, though...
Fair point, arithmetic’s not a good example of a minimum for mathematical description.
The idea of rational understanding rests on the fact that you are separated from the object that you are trying to understand and the object itself doesn’t change if you change your understanding of it.
Then there’s the halting problem. There are a variety of problems that are NP. Those problems can’t be understood by doing a few experiments and then extrapolating general rules from your experiments. I’m not quite firm on the mathematical terminology, but I think NP problems are not subject to things like calculus that are covered in what Wikipedia describes as mathematical analysis.
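For what it’s worth, here is a minimal sketch of the standard diagonalization behind the halting problem (a textbook argument, not anything specific to this thread; the `halts` decider below is hypothetical and deliberately left unimplementable):

```python
def halts(program, argument):
    """Hypothetical total decider: True iff program(argument) eventually halts.

    No correct implementation can exist; this stub only marks the assumption.
    """
    raise NotImplementedError("no such decider can exist")


def paradox(program):
    # Do the opposite of whatever the decider predicts about program(program).
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    return            # predicted to loop, so halt immediately

# Asking halts(paradox, paradox) contradicts its own answer either way:
# if it returns True, paradox(paradox) loops forever; if False, it halts.
# So no amount of experimenting-and-extrapolating can produce such a decider.
```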
Heinz von Förster makes the point that children have to be taught that “green” is not a valid answer to the question “What’s 2+2?”. I personally like his German book titled “Truth is the invention of a liar”. Heinz von Förster started the Biological Computer Laboratory in 1958 and came up with concepts like second-order cybernetics.
As far as fictional worlds go, Terry Pratchett’s Discworld runs on narrativium instead of math.
That’s true as long as there are no revelations of truth by Gods or other magical processes. In a universe where you can get the truth through magical tarot reading, that assumption is false.
That’s not obvious to me. Why do you think this?
I also don’t understand this inference. Why do you think revelations of truth by Gods or other magical processes, or tarot readings, mean that such a universe would a) be knowable, and b) not be subject to rational analysis?
It might depend a bit on what you mean by rationality. You lose objectivity.
Let’s say I hypnotize someone. I’m in a deep state of rapport. That means my emotional state matters a great deal. If I label something the person I’m talking to does as unsuccessful, anxiety rises in me. That anxiety will screw with the result I want to achieve. I’m better off if I blank my mind instead of engaging in rational analysis of what I’m doing.
Logically A → B is not the same thing as B → A.
I said that it’s possible for there to be knowledge that you can only get through a process besides rational analysis if you allow “magic”.
I’m a little lost. So do you think these observations challenge the idea that in order to understand anyone, we need to assume they’ve got mostly true beliefs, and make mostly rational inferences?
I don’t know what you mean by “run on math”. Do qualia run on math?
It’s not my phrase, and I don’t particularly like it myself. If you’re asking whether or not qualia are quanta, then I guess the answer is no, but in the sense that the measured is not the measure. It’s a triviality that I can ask you how much pain you feel on a scale of 1-10, and get back a useful answer. I can’t get at what the experience of pain itself is with a number or whatever, but then, I can’t get at what the reality of a block of wood is with a ruler either.
Then by imagining an all-qualia universe, I can easily imagine a universe that doesn’t run on math, for some values of “runs on math”.
I don’t think you can imagine, or conceive of, an all qualia universe though.
You don’t get to tell me what I can imagine, though. All I have to do is imagine away the quantitative and structural aspects of my experience.
I rather think I do. If you told me you could imagine a euclidian triangle with more or less than 180 internal degrees, I would rightly say ‘No you can’t’. It’s simply not true that we can imagine or conceive of anything we can put into (or appear to put into) words. And I don’t think it’s possible to imagine away things like space and time and keep hold of the idea that you’re imagining a universe, or an experience, or anything like that. Time especially, and so long as I have time, I have quantity.
That looks like the typical mind fallacy.
I don’t know where you are getting your facts from, but it is well known that people’s abilities at visualization vary considerably, so where’s the “we”?
Having studied non-Euclidean geometry, I can easily imagine a triangle whose angles sum to more than 180 (hint: it’s inscribed on the surface of a sphere).
Saying that non-spatial or non-temporal universes aren’t really universes is a No True Scotsman fallacy.
Non-spatial and non-temporal models have been seriously proposed by physicists; perhaps you should talk to them.
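To make the spherical-triangle hint above concrete (a standard fact of spherical geometry, not something stated in the thread): take the triangle bounded by the equator and two meridians a quarter-turn apart. Each of its three angles is a right angle, so they sum to 270°. More generally, for a triangle of area $A$ on a sphere of radius $R$, Girard's theorem gives

$$\alpha + \beta + \gamma = \pi + \frac{A}{R^2},$$

so every spherical triangle exceeds 180° by an amount proportional to its area.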
It depends on what you mean by “imagine”. I can’t imagine a Euclidian triangle with less than 180 degrees in the sense of having a visual representation in my mind that I could then reproduce on a piece of paper. On the other hand, I can certainly imagine someone holding up a measuring device to a vague figure on a piece of paper and saying “hey, I don’t get 180 degrees when I measure this”.
Of course, you could say that the second one doesn’t count since you’re not “really” imagining a triangle unless you imagine a visual representation, but if you’re going to say that you need to remember that all nontrivial attempts to imagine things don’t include as much detail as the real thing. How are you going to define it so that eliminating some details is okay and eliminating other details isn’t?
(And if you try that, then explain why you can’t imagine a triangle whose angles add up to 180.05 degrees or some other amount that is not 180 but is close enough that you wouldn’t be able to tell the difference in a mental image. And then ask yourself “can I imagine someone writing a proof that a Euclidian triangle’s angles don’t add up to 180 degrees?” without denying that you can imagine people writing proofs at all.)
These are good questions, and I think my general answer is this: in the context of this and similar arguments, being able to imagine something is sometimes taken as evidence that it’s at least a logical possibility. I’m fine with that, but it needs to be imagined in enough detail to capture the logical structure of the relevant possibility. If someone is going to argue, for example, that one can imagine a euclidian triangle with more or less than 180 internal degrees, the imagined state of affairs must have at least as much logical detail as does a euclidian triangle with 180 internal degrees. Will that exclude your ‘vague shape’ example, and probably your ‘proof’ example?
It would exclude the vague shape example but I think it fails for the proof example.
Your reasoning suggests that if X is false, it would be impossible for me to imagine someone proving X. I think that is contrary to what most people mean when they say they can imagine something.
It’s not clear what your reasoning implies when X is true. Either
1) I cannot imagine someone proving X unless I can imagine all the steps in the proof, or
2) I can imagine someone proving X as long as X is true, since having a proof would be a logical possibility as long as X is true.
1) is also contrary to what most people think of as imagining. 2) would mean that it is possible for me to not know whether or not I am imagining something. (I imagine someone proving X and I don’t know if X is true. 2) means that if X is true I’m “really imagining” it and that if X is false, I am not.)
Well, say I argue that it’s impossible to write a story about a bat. It seems like it should be unconvincing for you to say ‘But I can imagine someone writing a story about a bat...see, I’m imagining Tom, who’s just written a story about a bat.’ Instead, you’d need to imagine the story itself. I don’t intend to talk about the nature of the imagination here, only to say that as a rule, showing that something is logically possible by way of imagining it requires that it have enough logical granularity to answer the challenge.
So I don’t doubt that you could imagine someone proving that E-triangles have more than 180 internal degrees, but I am saying that not all imaginings are contenders in an argument about logical possibility. Only those ones which have sufficient logical granularity do.
I would understand “I can imagine...” in such a context to mean that it doesn’t contain flaws that are basic enough to prevent me from coming up with a mental picture or short description. Not that it doesn’t contain any flaws at all. It wouldn’t make sense to have “I can imagine X” mean “there are no flaws in X”—that would make “I can imagine X” equivalent to just asserting X.
The issue isn’t flaws or flawlessness. In my bat example, you could perfectly imagine Tom sitting in an easy chair with a glass of scotch saying to himself, ‘I’m glad I wrote that story about the bat’. But that wouldn’t help. I never said it’s impossible for Tom to sit in a chair and say that, I said that it was impossible to write a story about a bat.
The issue isn’t logical detail simpliciter, but logical detail relative to the purported impossibility. In the triangle case, you have to imagine, not Tom sitting in his chair thinking ‘I’m glad I proved that E-triangles have more than 180 internal degrees’ (no one could deny that that is possible) but rather the figure itself. It can be otherwise as vague and flawed as you like, so long as the relevant bits are there. Very likely, imagining the proof in the relevant way would require producing it.
And you are asserting something, you’re asserting the possibility of something in virtue of the fact that it is in some sense actual. To say that something is logically impossible is to say that it can’t exist anywhere, ever, not even in a fantasy. To imagine up that possibility is to make it sufficiently real to refute the claim of impossibility, but only if you imagine, and thus make real, the precise thing being claimed to be impossible.
Are you sure it is logically impossible to have spaceless and timeless universes? Who has put forward the necessity of space and time?
Dear me no! I have no idea if such a universe is impossible. I’m not even terribly confident that this universe has space or time.
I am pretty sure that space and time (or something like them) are a necessary condition on experience, however. Maybe they’re just in our heads, but it’s nevertheless necessary that they, or something like them, be in our heads. Maybe some other kind of creature thinks in terms of space, time, and fleegle, or just fleegle, time, and blop, or just blop and nizz. But I’m confident that such things will all have some common features, namely being something like a context for a multiplicity. I mean in the way time is a context for seeing this, followed by that, and space is a context for seeing this in that in some relation, etc.
Without something like this, it seems to me experience would always (except there’s no time) only be of one (except an idea of number would never come up) thing, in which case it wouldn’t be rich enough to be an experience. Or experience would be of nothing, but that’s the same problem.
So there might be universes of nothing but qualia (or, really, quale) but it wouldn’t be a universe in which there are any experiencing or thinking things. And if that’s so, the whole business is a bit incoherent, since we need an experiencer to have a quale.
Are you using experience to mean visual experience by any chance? How much spatial information are you getting from hearing?
PS your dogmatic Kantianism is now taken as read.
Tapping out.
That depends on your definition of “math”.
For example, consider a simulated world where you control the code. Can you make it so that 2+2 in that simulation is sometimes 4, sometimes 15, and sometimes green? I don’t see why not.
I think you’re conflating the physical operation that we correlate with addition and the mathematical structure. ‘Green’ I’m not seeing, but I could write a computer program modeling a universe in which placing a pair of stones in a container that previously held a pair of stones does not always lead to that container holding a quadruplet of stones. In such a universe, the mathematical structure we call ‘addition’ would not be useful, but that doesn’t say that the formalized reasoning structure we call ‘math’ would not exist, or could not be employed.
(In fact, if it’s a computer program, it is obvious that its nature is susceptible to mathematical analysis.)
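A minimal sketch of the kind of toy model described above, assuming nothing beyond ordinary Python (the rule and the numbers are made up for illustration): a world in which combining two pairs of stones does not reliably yield four, while the program that runs it remains a perfectly mathematical object.

```python
import random

def combine(stones_a, stones_b):
    """Toy 'physics': merging containers of stones does not conserve count.

    Putting a pair of stones into a container already holding a pair sometimes
    yields 4 stones, sometimes 3, sometimes 7 (probabilities are illustrative).
    """
    drift = random.choice([-1, 0, 0, 3])  # made-up non-conservation "law"
    return max(0, stones_a + stones_b + drift)

if __name__ == "__main__":
    print([combine(2, 2) for _ in range(5)])  # usually 4, but not reliably
```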
I guess I could make it appear that way, sure, though I don’t know if I could then recognize anything in my simulation as thinking or doing math. But in any case, that’s not a universe in which 2+2=green, it’s a universe in which it appears to. Maybe I’m just not being imaginative enough, and so you may need to help me flesh out the hypothetical.
If I write the simulation in Python I can simply define my function for addition:
Unfortunately I don’t know how to format the indentation perfectly for this forum.
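Presumably something along these lines was intended (my guess at the missing snippet, not the original poster’s code; it mirrors the earlier “2+2 is sometimes 4, sometimes 15, and sometimes green” example):

```python
import random

def add(a, b):
    # Hypothetical rule for the simulated world: 2 + 2 has no fixed value.
    if (a, b) == (2, 2):
        return random.choice([4, 15, "green"])
    return a + b
```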
We don’t need to go to the trouble of defining anything in Python. We can get the same result just by saying
If I use Python to simulate a world, then it matters how things are defined in Python.
It doesn’t only appear that 2+2=green; it’s that way at the level of the source code that determines how the world runs.
But it sounds to me like you’re talking about the manipulation of signs, not about numbers themselves. We could make the set of signs ‘2+2=’ end any way we like, but that doesn’t mean we’re talking about numbers. I donno, I think you’re being too cryptic or technical or something for me, I don’t really understand the point you’re trying to make.
What do you mean by “the numbers themselves”? Peano axioms? I could imagine that n → n+1 just doesn’t apply.
Math is what happens when you take your original working predictive toolkit (like counting sheep) and let it run on human wetware disconnected from its original goal of having to predict observables. Thus some form of math would arise in any somewhat-predictable universe evolving a calculational substrate.
That’s an interesting problem. Do we have math because we make abstractions about the multitude of things around us, or must we already have some idea of math in the abstract just to recognize the multitude as a multitude? But I think I agree with the gist of what you’re saying.
Just like I think of language as meta-grunting, I think of math as meta-counting. Some animals can count, and possibly add and subtract a bit, but abstracting it away from the application for the fun of it is what humans do.
Is “containing mathematical truth” the same as “running on math”?
Mixing truth and rationality is a failure mode. To know whether someone’s statement is true, you have to understand it, and to understand it, you have to assume the speaker’s rationality.
It’s also a failure mode to attach “irrational” directly to beliefs. A belief is rational if it can be supported by an argument, and you don’t carry the space of all possible arguments around in your head.
That’s an… interesting definition of “rational”.
Puts on Principle of Charity hat...
Maybe TheAncientGreek means:
(1) a belief is rational if it can be supported by a sound argument
(2) a belief is rational if it can be supported by a valid argument with probable premises
(3) a belief is rational if it can be supported by an inductively strong argument with plausible premises
(4) a belief is rational if it can be supported by an argument that is better than any counterarguments the agent knows of
etc...
Although personally, I think it is more helpful to think of rationality as having to do with how beliefs cohere with other beliefs and about how beliefs change when new information comes in than about any particular belief taken in isolation.
I can’t but note that the word “reality” is conspicuously absent here...
That there is empirical evidence for something is a good argument for it.
Arguments of type (1) necessarily track reality (it is pretty much defined this way), (2) may or may not depending on the quality of the premises, (3) often does, and sometimes you just can’t do any better than (4) with available information and corrupted hardware.
Just because I didn’t use the word “reality” doesn’t really mean much.
A definition of “rational argument” that explicitly referred to “reality” would be a lot less useful, since checking which arguments are rational is one of the steps in figuring out what’s real.
I am not sure this is (necessarily) the case, can you unroll?
Generally speaking, arguments live in the map and, in particular, in high-level maps which involve abstract concepts and reasoning. If I check the reality of the stone by kicking it and seeing if my toe hurts, no arguments are involved. And from the other side, classical logic is very much part of “rational arguments” and yet need not correspond to reality.
That tends to work less well for things that one can’t directly observe, e.g., how old is the universe, or things where there is confounding noise, e.g., does this drug help.
That was a counterexample, not a general theory of cognition...
There isn’t a finite list of rational beliefs, because someone could think of an argument for a belief that you haven’t thought of.
There isn’t a finite list of correct arguments either. People can invent new ones.
Well, it’s not too compatible with self-congratulatory “rationality”.
I believe this disagreement is testable by experiment.
Do elaborate.
If you would more reliably understand what people mean by specifically treating it as the product of a rational and intelligent person, then executing that hack should lead to your observing a much higher rate of rationality and intelligence in discussions than you would previously have predicted. If the thesis is true, many remarks which, using your earlier methodology, you would have dismissed as the product of diseased reasoning will prove to be sound upon further inquiry.
If, however, you execute the hack for a few months and discover no change in the rate at which you discover apparently-wrong remarks to admit to sound interpretations, then TheAncientGeek’s thesis would fail the test.
You will also get less feedback along the lines of “you just don’t get it”.
True, although being told less often that you are missing the point isn’t, in and of itself, all that valuable; the value is in getting the point of those who otherwise would have given up on you with a remark along those lines.
(Note that I say “less often”; I was recently told that this criticism of Tom Godwin’s “The Cold Equations”, which I had invoked in a discussion of “The Ones Who Walk Away From Omelas”, missed the point of the story—to which I replied along the lines of, “I get the point, but I don’t agree with it.”)
That looks like a test of my personal ability to form correct first-impression estimates.
Also “will prove to be sound upon further inquiry” is an iffy part. In practice what usually happens is that statement X turns out to be technically true only under conditions A, B, and C, however in practice there is the effect Y which counterbalances X and the implementation of X is impractical for a variety of reasons, anyway. So, um, was statement X “sound”? X-/
Precisely.
Ah, I see. “Sound” is not the right word for what I mean; what I would expect to occur if the thesis is correct is that statements will prove to be apposite or relevant or useful—that is to say, valuable contributions in the context within which they were uttered. In the case of X, this would hold if the person proposing X believed that those conditions applied in the case described.
A concrete example would be someone who said, “you can divide by zero here” in reaction to someone being confused by a definition of the derivative of a function in terms of the limit of a ratio.
Because you are not engaged in establishing facts about how smart someone is, you are instead trying to establish facts about what they mean by what they say.