That isn’t parsimony, that’s ontological promiscuity of the worst sort.
alicorn&robinZ: I talked about ontological parsimony; you're talking about something else. Epistemological parsimony, perhaps? Same for mystery: that you can prolong it doesn't mean there's less of it.
cyan: Yes, this might be a problem. Are you sure natural density is the right measure?
z_m_davies: Looks very interesting. Thanks!
jack: Yes, I saw that problem too. That's why I said the theory might be self-defeating. My idea was that even if inflation as a theory is strictly speaking forbidden, it can phenomenologically point in the right direction. I mean, we might still be able to say something like: the "quasi-observation" or the "quasi-theory" is true.
Why are you replying to us in top-level batches? Are you trying to limit how many total comments you make as a form of karma damage control, or just determined to make the threads disparate and hard to follow?
Anyway:
Do you mean you think you’re proposing fewer basic kinds of entities (i.e. you think objects exist, and we think objects and causes exist)? That seems to me very suspiciously like a feature of how things are worded—you could just as easily say that we’re proposing quarks and space and time, and you’re proposing quarks with different properties (e.g. the property of appearing, disappearing, and moving at random instead of the property of interacting with other quarks causally) and also space and time.
Most of those mysteries boil down to a fairly small number of things we do not yet know. Yours, granted, boils down to just one thing you don’t know: why the hell do all these random things happen? But it is a very BIG and very CONSPICUOUS mystery, and there is no good way to get rid of it or push it farther away or shrink it. This is not true of our small host of mysteries, which we regularly shrink and push and can even hope to eventually do away with.
Why don’t you apply the principle of charity for once?
Anyway, compare:
1. The universe was created in the big bang.
2. God created the big bang.

So in 2 I have now prolonged the mystery. Is it less mysterious?
I employ the principle of charity when someone’s writing is unclear and they could be saying any of several things, some of which would make sense and some of which wouldn’t. Then the principle of charity suggests that I interpret the unclarity as the possibility that makes sense. Are you saying that I misunderstand you, or do you just want to throw up “charity” as a defense force field for when people who do not agree with you express that disagreement?
As for your comparison: The move to God is unmotivated, unlike the mystery-postponing moves we make based on evidence and logical inference. Also, God is one big, conspicuous, intractable mystery, not lots of little ones, which is exactly what I complained about in your theory of causation. So it is a comparison that is extremely unfavorable to what you seem to be defending.
From your first comment on my post, you were really aggressive. Arguments are fine, but why always the personal attacks? I'll tell you what might be going on here: you saw the post, couldn't make sense of it after a quick glance, and decided it was junk and an easy way to gain reputation and boost your ego by bashing. And you are not alone. There are lots of haters, and nobody who just said: OK, I don't believe it, but let's discuss it and stop hitting the guy over the head.
The theory is highly counterintuitive, I said as much, but it is worth at least a few minutes of discussion, and I discussed it with quite a few eminent philosophers already. None was convinced (which is hardly surprising), but they found the discussion interesting and the theory consistent. So something has gone wrong here. Maybe all this talk of "winning" and "Bayesian conspiracy" and whatever really does not do a favor to the site's principal goal of being as unbiased as possible.
Spuckblase, two things.
First, none of us are being as rude to you as you are to us in this comment alone. If you can’t stand the abuse you’re getting here, then quit commenting on this post.
Second, we’ve given this well more than a few minutes’ discussion, and you’ve given us no reason to believe that we misunderstand your theory—you just object to our categorical dismissal of it. I am perfectly willing to believe that the philosophers you discussed this with gave you credit for making an interesting argument—philosophers are generous like that—and for all its faults, your theory is consistent. But around here, interesting is a matter of writing style, and consistent is a sub-minimal requirement: we demand useful. None of us are rationalists just for the lulz—if a theory doesn’t help us get what we actually want, it really is of no use to us. And by that standard, any skeptical hypothesis is a waste of time, including your proposed Humeiform worldview, when other hypotheses actually work.
Edit circa 2014: the Slacktivist blog moved (mostly) to a new website—this is the new link to the “sub-minimal requirement” post.
Oh, I can take the abuse, I’m just wondering.
At least at first, I’ve been given just accusations and incredulous stares.
If you want the truth, you have to consider being wrong even about your darlings, say, prediction.
Do you actually believe this theory that you have proposed? Because we aren’t arguing that it’s logically impossible, we’re explaining why we don’t believe it.
Your theory says you can’t cause our beliefs to change and you shouldn’t be surprised about it. It also implies that you defend it by accident, not because it’s true.
The good news is that you have an obvious upgrade right ahead. Not all of us are so lucky.
Why does everybody assume I’m a die-hard believer in this theory?
No such assumption required. For example, if you have 10% credence in your theory, the same 10% says you’re defending it by accident. Viewed another way, we have no reason to listen to you if your theory is false and no reason to listen if it’s true either. Please apply this logic to your beliefs and update.
Seems to me you’re conflating different concepts: “being the reason for” and “being the cause of”:
compare what an enemy of determinism could say: “we have no reason to listen to you if your theory is false and no reason to listen if it’s true either”. Now what?
Let’s drop abstract truth-seeking for a moment and talk about instrumental values instead.
Believing in causality is useful in a causal world and neutral in an acausal one. Disbelieving in causality is harmful in a causal world and likewise neutral in an acausal one. So, if you assign nonzero credence to the existence of causality (as you implied in a comment above: “why does everybody assume I’m a die-hard believer?”), you’d do better by increasing this credence to 100%, because doing so has positive utility in the causal world (to which you have assigned nonzero credence) and doesn’t matter in the acausal one.
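For what it's worth, the dominance argument above can be sketched as a tiny expected-utility table. The specific payoff numbers are illustrative assumptions of mine, not anything stated in the thread:

```python
# A toy payoff table for the dominance argument above. The numeric utilities
# are made-up placeholders: believing in causality pays off in a causal
# world and costs nothing in an acausal one; disbelieving costs something
# in a causal world and likewise nothing in an acausal one.
payoff = {
    ("believe", "causal"): 1.0,
    ("believe", "acausal"): 0.0,
    ("disbelieve", "causal"): -1.0,
    ("disbelieve", "acausal"): 0.0,
}

def expected_utility(action, p_causal):
    """Expected utility of an action, given credence p_causal in a causal world."""
    return (p_causal * payoff[(action, "causal")]
            + (1.0 - p_causal) * payoff[(action, "acausal")])

# For any nonzero credence in causality, believing strictly beats disbelieving.
for p in (0.01, 0.1, 0.5, 0.99):
    assert expected_utility("believe", p) > expected_utility("disbelieve", p)
```

Under these assumed payoffs the argument shows dominance for any nonzero credence; it doesn't by itself justify jumping to literally 100% certainty, as a later comment notes.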
Well, if you stipulate that “abstract truth-seeking” has nothing whatsoever to do with my getting along in the world, then you’re right I guess.
I would say, “increasing this credence toward 100%”—without mathematical proof that the familiar sort of causation is the only such scheme that is feasible, absolute certainty is (slightly) risky. (Even with such proof, it is risky—proofs aren’t perfect guarantees.)
I can’t parse your comment. Are you saying that, conditioned on your theory being true, our beliefs “should” somehow causally update in response to your arguments? That’s obviously false.
We don’t need to assume that. If you have 10% credence for your theory, my reasoning applies for that 10%.
Side point: if you click on the “Reply” link below a specific comment, your reply will be threaded with that comment and the person you address will be notified of your response—I recommend it.
As for your actual points: I don’t care about ontoepistemoparsiwhatever. I care about being right. And your proposal—I guess we’re calling it the Humeiform theory—isn’t supported by any conceivable block of evidence, including that which actually holds true. If there weren’t any other theories supported by the evidence of the universe, it might win by default, being simplest, but this universe is regular enough to be accurately fitted by many not-overly-complicated theories.
If I want my beliefs to correspond to reality, I’m better off favoring simple theories over complex ceteris paribus—this is Occam’s insight. But I’m also better off favoring accuracy over vacuity. And when I shut up and multiply, the latter dominates the former for this special case.
Just untrue. If pigs start to fly, etc., you'd better remember this theory. Besides, I repeat that in my opinion, the (controverted, granted, but this is definitely not a closed case) existence of qualia, mental causation, and indeterministic processes already gives support.
If pigs start to fly, that doesn’t support the Humeiform theory—it just undermines (some of) its competitors. Being as the Humeiform theory predicts absolutely nothing, it can’t possibly be a better predictor than any theory which predicts anything at all correctly. The only way it can win is if no theory can do so—in which case it, being the simplest, wins by default.
No. We talked about evidential support, not predictive power. Inhabitants of a Hume world are obviously right to explain flying pigs et al. by a Hume-world theory, even if they cannot predict anything.
Err, wrong.
Evidential support is directly tied to predictive power. That’s what it means to be supported by the evidence—that it predicted the evidence over the alternatives.
Explanations are directly tied to predictive power. That’s what it means to explain things—that those things are predicted to occur instead of the alternatives.
This is really, really basic stuff—dating back at least to Karl Popper’s falsifiability, if not further. If you don’t know it, you have a long way to go before you can reasonably consider trying to calculate the fundamental nature of the universe.
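As a concrete gloss on the two claims above, here is a toy likelihood-ratio calculation. The outcome count and probabilities are my assumptions for illustration, not anything from the thread:

```python
# A vacuous theory spreads its probability uniformly over every possible
# observation; a predictive theory concentrates its probability mass.
# "Support" cashes out as the likelihood ratio on the observed evidence.

def likelihood_ratio(p_evidence_given_a, p_evidence_given_b):
    """How strongly the observed evidence favors theory A over theory B."""
    return p_evidence_given_a / p_evidence_given_b

n_outcomes = 1024                  # possible observations (assumed for the toy)
p_vacuous = 1.0 / n_outcomes       # "anything may happen": uniform over all
p_predictive = 0.5                 # a theory that staked half its mass here

# Observing the predicted outcome favors the predictive theory 512:1. A
# theory that predicts nothing can never win such a comparison against any
# theory that assigned the actual outcome more than uniform probability.
assert likelihood_ratio(p_predictive, p_vacuous) == 512.0
```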
I know arguing against this fellow is like minting karma, but is it really getting anywhere?
I’m not giving up yet—I’m fairly sure spuckblase is smart, he just doesn’t have a good background in the philosophy of science, much less Bayes’ Theorem. In any case, if I’m being perfectly frank, karma is not my motive.
In all unseriousness, you’re new to the community and have already linked to tvtropes, XKCD, and EY’s old posts. If you’re not out for karma, I don’t know what you’re playing at.
Nah, he was around on OB. His post on the welcome thread explains all (well, not all, but it’s consistent with his posting history and claimed motivation).
What Cyan said (look for comments under the name “Robin_Z”—those are me, probably). Also, I’ve been an Internet dork since before TV Tropes existed—I’ve got at least three ranks in “Computer Use: Internet Forumite”, a high-speed Internet connection over a wireless network both at school and at home, and a buttload of bookmarks primed to throw at almost any situation.
(Also, I’m in that early stage of Internet site usage where I obsessively follow every new comment and blog post to the community, which explains the high comment rate.)
Thanks but no thanks. I do know this really, really basic stuff—I just don't agree. Instead of just postulating that all explanations have to be tied to prediction, why don't you try to rebut the argument? Again: inhabitants of a Hume world are right to explain their world with this Hume-world theory. They just happen to live in a world where no prediction is possible. So explanation should be conceived independently of prediction. Not every explanation needs to be tied to prediction.
Just because what you believe happens to be true, doesn’t mean you’re right to believe it. If I walk up to a roulette wheel, certain that the ball will land on black, and it does—then I still wasn’t right to believe it would.
Hypothetical Hume-worlders, like us, do not have the luxury of access to reality’s “source code”: they have not been informed that they exist in a hypothetical Hume-world, any more than we can know the “true nature” of our world. Their Hume-world theory, like yours, cannot be based on reading reality’s source code; the only way to justify Hume-world theory is by demonstrating that it makes accurate predictions.
Arguably, it does make at least one prediction: that any causal model of reality will eventually break down. This prediction, to put it mildly, does not hold up well to our investigation of our universe.
Alternatively, you could assert that if all possibilities are randomly realized, we might (with infinitesimal probability) be living in a world that just happened to exactly resemble a causal world. But without evidence to support such a belief, you would not be right to believe it, even if it turns out to be true. Not to mention that, as others have mentioned in this thread, unfalsifiable theories are a waste of valuable mental real estate.
But what I said isn't the same as saying the theory is self-defeating. The theory is just based on a false premise (that inflation allows for regions of finite space that violate our recorded laws of physics). Inflation says: "Any given configuration of a region of finite space that does not violate the laws of physics exists infinitely many times." You say: "There are some regions of finite space where the laws of physics are violated!"
This does not follow!
And as I said before, the size of the ordered region and the amount of order in our region are too great to be justified by the anthropic principle.
alicorn&robinZ: I talked about ontological parsimony.
In the sense of subtracting an angel (causality) from the head of a pin (our surfboard)? :)
The natural density is for natural numbers. The point is that cardinality is probably not the right thing to look at—there are more representative notions of the size of a subset (even in the natural numbers).
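Since natural density keeps coming up, here is a quick numerical sketch of the idea (my example, not from the comment above). The natural density of a set A of naturals is the limit, as n grows, of the fraction of {1, ..., n} that lies in A. Unlike cardinality, it distinguishes the evens (density 1/2) from the perfect squares (density 0), even though both sets are countably infinite:

```python
import math

def density_estimate(predicate, n):
    """Fraction of {1, ..., n} whose members satisfy the predicate."""
    return sum(1 for k in range(1, n + 1) if predicate(k)) / n

is_even = lambda k: k % 2 == 0
is_square = lambda k: math.isqrt(k) ** 2 == k  # k is a perfect square

# Both sets are countably infinite, but their densities differ.
assert density_estimate(is_even, 10_000) == 0.5     # density 1/2
assert density_estimate(is_square, 10_000) == 0.01  # tends to 0 as n grows
```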