Predictions are justified not by becoming a reality, but by the likelihood of their becoming a reality [1]. When this likelihood is hard to estimate, we can take their becoming a reality as weak evidence that the likelihood is high. But in the end, after counting all the evidence, it’s really only the likelihood itself that matters.
If I predict [...] that I will win [...] and I in fact lose fourteen touches in a row, only to win by forfeit
If I place a bet on you to win and this happens, I’ll happily collect my prize, but still feel that I put my money on the wrong athlete. My prior and the signal are rich enough for me to deduce that your victory, although factual, was unlikely. If I believed that you’re likely to win, then my belief wasn’t “true for the wrong reasons”, it was simply false. If I believed that “you will win” (no probability qualifier), then in the many universes where you didn’t I’m in Bayes Hell.
Conversely in the other example, your winning itself is again not the best evidence for its own likelihood. Your scoring 14 touches is. My belief that you’re likely to win is true and justified for the right reasons: you’re clearly the better athlete.
[1] Where likelihood is measured either given what I know, or what I could know, or what anybody could know—depending on why we’re asking the question in the first place.
I notice that I am confused. What you say seems plausible but also in conflict with the (also plausible) Yudkowskian creed that probability is in the map.
Can you clarify the conflict? It seems to me that when I treat observations as evidence with which to update my estimate of the likelihood of a prediction, as the OP describes, I’m doing a bunch of “map-level” operations.
I would like to be able to clarify it; as of now it’s only my own confusion. In my confusion, “estimate of the likelihood of a prediction” sounds like assigning probabilities to probability statements, which feels like a map-territory reversal of some sort.
[1] Where likelihood is measured either given what I know, or what I could know, or what anybody could know—depending on why we’re asking the question in the first place.
Does it help to reread royf’s footnote?
That is, he’s not talking about some thing out there in the world which is independent of our minds, nor am I when I adopt his terminology. The likelihood of a prediction, like all probability judgments, exists in the mind and is a function of how evidence is being evaluated. Indeed, any relationship between a prediction and a state of events in the world exists solely in the mind to begin with.
To clarify further: likelihood is a relative quantity, like speed—it only has meaning relative to a specific frame of reference.
If you’re judging my calibration, the proper frame of reference is what I knew at the time of prediction. I didn’t know what the result of the fencing match would be, but I had some evidence for who is more likely to win. The (objective) probability distribution given that (subjective) information state is what I should’ve used for prediction.
If you’re judging my diligence as an evidence seeker, the proper frame of reference is what I would’ve known after reasonable information gathering. I could’ve taken some actions to put myself in a different information state, and then my prediction could have been better.
But it’s unreasonable to expect me to know the result beyond any doubt. Even if Omega is in an information state of perfectly predicting the future, this is never a proper frame of reference by which to judge bounded agents.
And this is the major point on which I’m non-Yudkowskian: since Omega is never a useful frame of reference, I’m not constraining reality to be consistent with it. In this sense, some probabilities are in the territory.
since Omega is never a useful frame of reference, I’m not constraining reality to be consistent with it. In this sense, some probabilities are in the territory.
I thought I was following you, but you lost me there.
I certainly agree that if I want to evaluate various aspects of your cognitive abilities based on your predictions, I should look at different aspects of your predictions depending on what abilities I care about, as you describe, and that often the accuracy of your prediction is not the most useful aspect to look at. And of course I agree that expecting perfect knowledge is unreasonable.
But what that has to do with Omega, and what the uselessness of Omega as a frame of reference has to do with constraints on reality, I don’t follow.
I probably need to write a top-level post to explain this adequately, but in a nutshell:
I’ve tossed a coin. Now we can say that the world is in one of two states: “heads” and “tails”. This view is consistent with any information state. The information state (A) of maximal ignorance is a uniform distribution over the two states. The information state (B) where heads is twice as likely as tails is the distribution p(“heads”) = 2⁄3, p(“tails”) = 1⁄3. The information state (C) of knowing for sure that the result is heads is the distribution p(“heads”) = 1, p(“tails”) = 0.
Alternatively, we can say that the world is in one of these two states: “almost surely heads” and “almost surely tails”. Now information state (A) is a uniform distribution over these states; (B) is perhaps the distribution p(“ASH”) = 0.668, p(“AST”) = 0.332; but (C) is impossible, and so is any information state that is more certain than reality in this strange model.
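To make the two models concrete, here is a minimal sketch in Python (taking “almost surely heads” to mean p(“heads”) = 0.999, the figure given further down; the exact value is not important):

```python
# World states and the credence in "heads" that each state implies.
# Model 1: the world is simply "heads" or "tails".
model1 = {"heads": 1.0, "tails": 0.0}
# Model 2: the world is "almost surely heads" (ASH) or "almost surely tails" (AST),
# here taken to mean p(heads) = 0.999 and 0.001 respectively.
model2 = {"ASH": 0.999, "AST": 0.001}

def credence_in_heads(info_state, world_states):
    """Credence in heads implied by a distribution over world states."""
    return sum(q * world_states[s] for s, q in info_state.items())

# Information states (A), (B), (C) in model 1:
print(credence_in_heads({"heads": 0.5, "tails": 0.5}, model1))   # (A) -> 0.5
print(credence_in_heads({"heads": 2/3, "tails": 1/3}, model1))   # (B) -> ~0.667
print(credence_in_heads({"heads": 1.0, "tails": 0.0}, model1))   # (C) -> 1.0, representable

# The same information states in model 2:
print(credence_in_heads({"ASH": 0.5, "AST": 0.5}, model2))       # (A) -> 0.5
print(credence_in_heads({"ASH": 0.668, "AST": 0.332}, model2))   # (B) -> ~0.668
# (C) is impossible: 0.999*q + 0.001*(1-q) stays inside [0.001, 0.999] for any q,
# so no information state can be more certain than the world state itself.
```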
Now, in many cases we can theoretically have information states arbitrarily close to complete certainty. In such cases we must use the first kind of model. So we can agree to just always use the first kind of model, and avoid all this silly complication.
But then there are cases where there are real (physical) reasons why not every information state is possible. In these cases reality is not constrained to be of the first kind, and it could be of the second kind. As a matter of fact, to say that reality is of the first kind—and that probability is only in the mind—is to say more about reality than can possibly be known. This goes against Jaynesianism.
So I completely agree that not knowing something is a property of the map rather than the territory. But an impossibility of any map to know something is a property of the territory.
the world is in one of two states: “heads” and “tails”. [..] The information state (C) of knowing for sure that the result is heads is the distribution p(“heads”) = 1, p(“tails”) = 0.
Sure. And (C) is unachievable in practice if one is updating one’s information state sensibly from sensible priors.
Alternatively, we can say that the world is in one of these two states: “almost surely heads” and “almost surely tails”. Now information state (A) is a uniform distribution over these states
I am uncertain what you mean to convey in this example by the difference between a “world state” (e.g., ASH or AST) and an “information state” (e.g. p(“ASH”)=0.668).
The “world state” of ASH is in fact an “information state” of p(“heads”)>SOME_THRESHOLD, which is fine if you mean those terms to be denotatively synonymous but connotatively different, but problematic if you mean them to be denotatively different.
...but (C) is impossible.
(C), if I’m following you, maps roughly to the English phrase “I know for absolutely certain that the coin is almost surely heads”.
Yes, agreed that this is strictly speaking unachievable, just as “I know for absolutely certain that the coin is heads” was.
That said, I’m not sure what it means for a human brain to have “I know for absolutely certain that the coin is almost surely heads” as a distinct state from “I am almost sure the coin is heads,” and the latter is achievable.
So we can agree to just always use the first kind of model, and avoid all this silly complication.
Works for me.
But then there are cases where there are real (physical) reasons why not every information state is possible.
And now you’ve lost me again. Of course there are real physical reasons why certain information states are not possible… e.g., my brain is incapable of representing certain thoughts. But I suspect that’s not what you mean here.
Can you give me some examples of the kinds of cases you have in mind?
The “world state” of ASH is in fact an “information state” of p(“heads”)>SOME_THRESHOLD
Actually, I meant p(“heads”) = 0.999 or something.
(C), if I’m following you, maps roughly to the English phrase “I know for absolutely certain that the coin is almost surely heads”.
No, I meant: “I know for absolutely certain that the coin is heads”. We agree that this much you can never know. As for getting close to this, for example having the information state (D) where p(“heads”) = 0.999999: if the world is in the state “heads”, (D) is (theoretically) possible; if the world is in the state “ASH”, (D) is impossible.
Can you give me some examples of the kinds of cases you have in mind?
Mundane examples may not be as clear, so: suppose we send a coin-flipping machine deep into intergalactic space. After a few billion years it flies permanently beyond our light cone, and then flips the coin.
Now any information state about the coin, other than complete ignorance, is physically impossible. We can still say that the coin is in one of the two states “heads” and “tails”, only unknown to us. Alternatively we can say that the coin is in a state of superposition. These two models are epistemologically equivalent.
I prefer the latter, and think many people in this community should agree, based on the spirit of other things they believe: the former model is ontologically more complicated. It’s saying more about reality than can be known. It sets the state of the coin as a free-floating property of the world, with nothing to entangle with.
OK. Thanks for clarifying.
That suggests a question.
If I flip a fair coin, and it comes up heads, what is the probability of that coin flip, which I already made, having instead been tails? (Approximately) 0, because we’ve already seen that the coin didn’t come up tails, or (approximately) 50%, because it’s a fair coin and we have no way of knowing the outcome in advance?
If you define “probability” as something that exists in a mind then it’s perfectly reasonable that you_then.prob != you_now.prob.
If you’re defining “probability” in some other way, please explain what you mean.
As Jaynes suggested, it’s best to view all probabilities as conditional. P(coin came up heads | what I know now) = 1, P(coin came up heads | what I knew before flipping it) = 0.5.
The way I look at it is that before the coin flip you obtain a probability from the information you have at the time and you predict ~50%. After the coin flip, you have obtained new information, so the probability that the last coin flip came up tails becomes ~0% (because it didn’t), and the new information also gives you a tiny bit of data that says “maybe the coin comes up heads more often”, so you also update to ~50.005% heads for the next one (or whatever). So, yes, the probability that the coin came up tails last try becomes ~0%, you just couldn’t estimate it from the information you had and with just your brain beforehand (an AGI would’ve probably immediately seen how much force was going into the flip, calculated it all out, and seen ~100% probability of heads).
Although if you have an event that’s heavily influenced by quantum magic, which a coin flip is not, you might need to consider that maybe it did have true 50% probability (that is, no amount of information and processing power would improve the prediction), and you just lost half your world’s measure.
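For concreteness, a minimal sketch of that update, assuming a Beta(5000, 5000) prior on the coin’s heads-bias; the prior strength is chosen only so the numbers land near the ~50.005% figure above, not because it is canonical:

```python
from fractions import Fraction

# Assumed prior on the coin's heads-bias: Beta(alpha, beta), strongly pulled
# toward fairness. alpha = beta = 5000 is picked only to reproduce ~50.005%.
alpha, beta = 5000, 5000

# Posterior predictive probability of heads before any flip:
p_heads_before = Fraction(alpha, alpha + beta)       # 1/2

# We observe one flip and it comes up heads.
alpha += 1

# The flip we just saw is now settled (~100% heads, ~0% tails), and the
# predictive probability for the *next* flip shifts very slightly toward heads:
p_heads_next = Fraction(alpha, alpha + beta)         # 5001/10001

print(float(p_heads_before), float(p_heads_next))    # 0.5  0.50005 (approx.)
```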
The relevant number for the purpose of judging the “correctness” of predictions is the probability you should have had at the time of the prediction (i.e. the epistemically correct prior). Whether the outcome of the coin flip is heads or tails, the correct prior odds are 1:1, because you had no evidence either way.
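One way to cash this out: under a proper scoring rule such as the log score, the announced probability with the best expected score, where the expectation is taken over the epistemically correct prior, is that prior itself. A minimal sketch, with the 1:1 coin odds standing in for the correct prior:

```python
import math

def expected_log_score(announced_p, correct_p):
    """Expected log score (higher is better) of announcing `announced_p`
    for an event whose epistemically correct probability is `correct_p`."""
    return correct_p * math.log(announced_p) + (1 - correct_p) * math.log(1 - announced_p)

correct_prior = 0.5   # 1:1 odds, no evidence either way

for announced in (0.5, 0.9, 0.99):
    print(announced, round(expected_log_score(announced, correct_prior), 3))
# 0.5  -0.693   <- best expected score
# 0.9  -1.204
# 0.99 -2.308
# Predicting confidently scores well in the world where that outcome occurs,
# but averaged over what you actually knew at prediction time it does worse.
```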
If royf bets that katydee will win a fencing bout, and in fact they lose fourteen touches in a row, only to win by forfeit, we should update towards suspecting that royf miscounted the evidence, since such a bad athlete should not have much strong evidence predicting their winning. We should believe that the correct prior probability of katydee winning (from royf’s point of view) is lower than royf thought it was.
Does this answer your question?
Not really.
Let me elaborate:
In a book of his, Daniel Dennett appropriates the word “actualism” to mean “the belief that only things that have actually happened, or will happen, are possible.” In other words, all statements that are false are not only false, but also impossible: If the coin flip comes up heads, it was never possible for the coin flip to have come up tails. He considers this rather silly, says there are good reasons for dismissing it that aren’t relevant to the current discussion, and proceeds as though the matter is solved. This strikes me as one of those philosophical positions that seem obviously absurd but very difficult to refute in practice. (It also strikes me as splitting hairs over words, so maybe it’s just a wrong question in the first place?)
In a book of his, Daniel Dennett appropriates the word “actualism” to mean “the belief that only things that have actually happened, or will happen, are possible.” In other words, all statements that are false are not only false, but also impossible: If the coin flip comes up heads, it was never possible for the coin flip to have come up tails.
Taboo “possible.” My take: in the absence of real physical indeterminism (which I doubt exists), “possible” is basically an epistemic term meaning “my model does not rule this out.” So actualism is wrong, on my view, because it projects the limitations of my mind onto the future causal evolution of the universe.
My take: in the absence of real physical indeterminism (which I doubt exists), “possible” is basically an epistemic term meaning “my model does not rule this out.”
You may doubt that real physical indeterminism exists; others do not. The problem is that communication hinges on shared meanings, so if you change your meanings to reflect beliefs you have and others don’t, confusion may ensue.
True; however, even granting physical indeterminism, in most cases we can say that what possibility there is is epistemic. For example, whether katydee wins her fencing match probably does not depend closely on the result of some quantum event. (Although there is an interesting resonance between the probability she assigns to winning, and her actual likelihood of winning—but that’s a whole other kettle of worms.)
Or it might be better to use two different words. In fact (courtesy of Popper, IIRC), we have “propensity” for objective probability.
That is a well-chosen word.
Good answer.
On LessWrong, we generally use ‘possible,’ ‘necessary,’ ‘probable,’ etc. epistemically. Epistemic actualism, the doctrine that all events that occur have epistemic probability 1 (or approaching 1), is clearly absurd, since it requires that I be mistaken about nothing, and have perfect epistemic access to all facts in the universe. (But, of course, by ‘actualism’ no one ever means ‘epistemic actualism’.)
On the other hand, metaphysical actualism seems quite reasonable; indeed, the metaphysical non-actualist has a lot of ground to cover in establishing what s/he even means by ‘metaphysically non-actual events’. Are non-actual ‘worlds’ abstract, for instance? Concrete? Neither? Both? Actually existent as non-actuals? Actually non-existent as non-actuals? Meinongian? And how do we gain any epistemic access to these mysterious possibilia? Even if you aren’t a Lewisian modal realist, asserting anything but actualism (or, equivalently, necessitarianism) with respect to metaphysical modality seems… spooky.
We can epistemically access possible but non actual worlds by noting that they are not against known laws of nature...what is not impossible is possible.
We can epistemically access possible but non actual worlds by noting that they are not against known laws of nature
There are two options for what you’re trying to do here:
(1) You’re trying to analyze away metaphysical-possibilityspeak in terms of metaphysical-lawspeak. I.e., there’s nothing we could discover or learn that would disassociate these two concepts; one is simply a definitional analysis of the other. In which case, we can simply discard the idea of metaphysical possibility, to avoid miscommunication (since most people do not understand it in this way), and speak only of the laws of nature.
(2) You’re leaving the concepts distinct, but explaining that it just is the case that ‘what is lawful is possible, and what is “against the (natural) law” is impossible’, even though this is not synonymous with saying ‘what is possible is possible, and what is impossible is impossible’. That is, this is a substantive metaphysical thesis.
If you mean to be asserting (2), the metaphysical rather than semantic thesis (i.e., the non-trivial and interesting one), then I ask: What is your basis for this claim? What is your prior grasp on metaphysical possibility, such that you can be confident of its relationship to natural law? Are the laws of nature themselves contingent, or necessary? What evidence could we use to decide the matter one way or the other?
we can simply discard the idea of metaphysical possibility, to avoid miscommunication (since most people do not understand it in this way),
Because most people do understand it epistemically/subjectively? I think there are many kinds of possibility and many kinds of laws, and we make judgements about possibility based on laws. The nomologically possible is that which is allowed by the laws of nature; the logically possible is that which is not contradictory, which follows the law of non-contradiction; and the epistemically possible is that which does not contradict anything I already know. So I think the kinds of possibility have a family resemblance, and there is no issue of discarding the other kinds in favour of epistemic possibility. (I am however happy to deflate a “possible world” into a “hypothetical state of affairs that is allowed by such-and-such laws”).
Because most people do understand it epistemically/subjectively?
No. Most English language speakers use modal terms both epistemically and metaphysically. My point was that most people, both lay- and academic, do not use ‘p is (metaphysically) possible’ to mean ‘p is not ruled out by the laws of physics’. If they did, then they wouldn’t understand anthropic arguments that presuppose the contingency of the physical laws themselves.
I think there are many kinds of possibility and many kinds of laws
Then I don’t know what claim you’re making anymore. Taboo ‘law’; what is it you’re actually including in this ‘law’ category, potentially?
I think the kinds of possibility have a family resemblance, and there is no issue of dsicarding the other kinds in favour of epistemic possibility.
But you still haven’t explained what a ‘merely possible’ thing is. If logical and nomological possibility are metaphysical, then you owe us an account of what kinds of beings or thingies these possibilia are. On the other hand, if you reduce logical and nomological possibility to epistemic possibility—logical necessity is what I can infer from a certain set of logical axioms alone, logical possibility is what I can’t infer the negation of from some set of axioms, nomological necessity is what I know given only a certain set of ‘natural laws’—then we collapse everything into the epistemic, and no longer owe any account of mysterious ‘possible worlds’ floating out there in the aether.
do not use ‘p is (metaphysically) possible’ to mean ‘p is not ruled out by the laws of physics’. If they did, then they wouldn’t understand anthropic arguments that presuppose the contingency of the physical laws themselves.
If that is meant to indicate there is some specific sense of possible that is used instead, I doubt that.
Consider the following:
A: “Are perpetual motion machines possible?”
B: “I don’t see why not.”
A: “Ah, but they’re against the laws of thermodynamics.”
B: “Ok, they’re impossible.”
A: “But could the laws of physics have been different...?”
B: “I suppose so. I don’t know what makes them the way they are.”
AFAICS, B has gone through as many as 3 different notions of possibility there.
But you still haven’t explained what a ‘merely possible’ thing is.
I don’t think there is “mere” possibility, if it means subtracting the X from “something is X-ly possible if it is allowed by X-ical laws”.
If logical and nomological possibility are metaphysical, then you owe us an account of what kinds of beings or thingies these possibilia are. On the other hand, if you reduce logical and nomological possibility to epistemic possibility
What they are would depend on the value of X. Family resemblance.
This is perhaps not the best description of actualism, but I see your point. Actualists would disagree with this part of my comment:
If I believed that “you will win” (no probability qualifier), then in the many universes where you didn’t I’m in Bayes Hell.
on the grounds that those other universes don’t exist.
But that was just a figure of speech. I don’t actually need those other universes to argue against 0 and 1 as probabilities. And if Frequentists disbelieve in that, there’s no place in Bayes Heaven for them.
Well, assuming a strict definition of “possible”, it’s just determinism; if God’s playing dice then “actualism” is false, and if he’s not then it’s true.
Assuming a useful definition of possible, it’s trivially false.
Looks like yet another argument over definitions.