The possibility of a god existing doesn’t equate, to me, to seeing if a possible thing exists or not, but rather to whether the set of concepts is in any way possible. This is a question about the very nature of reality, and I’m pretty sure that reality is weird enough that the question falls far short of having any real meaning.
I don’t understand the last half of that last sentence. But as for the rest, if I’m interpreting you correctly, here’s how I’d respond:
The probability of a god existing is not necessarily equal to the probability of “the set of concepts [being] in any way possible” (or we might instead say something like “it being metaphysically possible”, “the question even being coherent”, or similar). Instead, it’s less than or equal to that probability. That is, a god can indeed only exist if the set of concepts is in any way possible, but it seems at least conceivable that the set of concepts could be possible and yet it still happen to be that there’s no god.
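To make that inequality explicit, here’s a small formalization (my own notation, not anything from the comment I’m replying to), writing G for “a god exists” and M for “it’s metaphysically possible that a god exists”:

```latex
% G = "a god exists"; M = "it's metaphysically possible that a god exists".
% G entails M, so as events G \subseteq M, and therefore:
\[
  P(G) \;=\; P(G \mid M)\,P(M) \;\le\; P(M)
\]
% Equality would hold only if P(G | M) = 1, i.e. only if possibility
% alone guaranteed existence. The gap between the two sides is exactly
% the room for "possible but not actual".
```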
And in any case, for the purposes of this post, what I’m really wondering about is not what the odds of there being a god are, but rather whether and how we can arrive at meaningful probabilities for these sorts of claims. So I’d then also ask whether and how we can arrive at a meaningful probability for the claim “It is metaphysically possible/in any way possible that there’s a god” (as a claim separate from whether there is a god). And I’d argue we can, through a process similar to the one described in this post.
To sketch it briefly, we might think about previous concepts that were vaguely like this one, and whether, upon investigation, they “turned out to be metaphysically possible”. We might find they never have (“yet”), but that that’s not at all surprising, even if we assume that those claims are metaphysically possible, because we just wouldn’t expect to have found evidence of that anyway. In which case, we might be forced to either go for way broader reference classes (like “weird-seeming claims”, or “things that seemed to violate Occam’s razor unnecessarily”), or abandon reference class forecasting entirely and lean 100% on inside-view-type considerations (like our views on Occam’s razor and how well this claim fits with it) or our “gut feelings” (hopefully honed by calibration training). I think the probability we assign might be barely meaningful, but still more meaningful than nothing.
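To make that reference-class step concrete, here’s a rough sketch in Python. The counts, the detection probability, and the use of Laplace’s rule of succession are all my own illustrative choices, not anything the argument depends on:

```python
# Illustrative sketch only: the reference class, the counts, and the
# detection probability below are hypothetical numbers, not data.

def laplace_rule(successes: int, trials: int) -> float:
    """Posterior mean of a Bernoulli rate under a uniform prior
    (Laplace's rule of succession)."""
    return (successes + 1) / (trials + 2)

# Suppose we examined 50 vaguely similar concepts and none of them
# "turned out to be metaphysically possible" upon investigation.
base_rate = laplace_rule(successes=0, trials=50)
print(f"Naive base rate: {base_rate:.3f}")  # ~0.019 -- small, but not 0

# But suppose that, even when such a claim IS metaphysically possible,
# we'd only expect investigation to surface evidence of that ~1% of the
# time. Then 50 null results are weak evidence: the likelihood ratio
# P(50 nulls | possible) / P(50 nulls | impossible) stays close to 1.
p_detect = 0.01  # hypothetical chance of finding evidence, per case
likelihood_ratio = (1 - p_detect) ** 50 / 1.0
print(f"Likelihood ratio from 50 null results: {likelihood_ratio:.2f}")  # ~0.61
```

The second half of the sketch is the point above in miniature: when null results are expected either way, the likelihood ratio stays near 1, the base rate barely moves, and we’re pushed toward inside-view considerations or calibrated gut feelings.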