One knows that in principle anything can be a hallucination, and that only very rare events carry true certainty (and perhaps none at all, because how can one be sure that any piece of information is genuinely new?). But if one actually went by this, one would hardly be able to drive a car or do anything remotely risky, because anything one sees might be a hallucination.
Instead one is just doing what feels intuitively reasonable, occasionally pausing to ponder all this.
So here it is the same: one leans toward what feels intuitively reasonable, occasionally pausing to ponder all this at the meta-level.
One knows that in principle anything can be a hallucination, and that only very rare events have true certainty
Well, precisely, so some metaphysical axioms I just take on faith, because that’s all I can do. Maybe I’m a Boltzmann brain existing only for a moment, but that’s not actionable, so I scratch that. And if one of my metaphysical axioms is “the world is made only of things that are empirically observable, causal, and at their fundamental level passive” (so I don’t expect some tricky demon to purposefully mess with my observations in ways I couldn’t possibly detect), then that’s it. I can’t really update away from it, because the entire framework of my epistemology (including the very notion of beliefs and Bayesian updates!) rests on it. I can fit all sorts of entities into it—gods, demons, angels, and ghosts—just fine, but I need them to still be causal, emergent entities, just like you and me, subject to some kind of natural, regular, causal law, even if not the ones I’m used to. Otherwise, I’ve got nothing.
But if I get the feeling that those metaphysical axioms no longer fit without contorting things too much, I easily replace them...
This is even easier for me, because I have the current “default set” of those axioms, the one I intuitively rely upon when I’m not making any special effort, and the alternative sets of axioms, which I think about when I ponder all this philosophically.
I am very much aware that there is no “objective way” to choose among those sets of axioms, and that, moreover, the “true set of axioms” might not even be among the candidate sets I am aware of.
But that does not in any way prevent me from letting one of the sets of axioms I am aware of replace my current “default set of axioms” if my intuition starts suggesting that the other set fits better. That happens well before I ponder this kind of shift in my axioms philosophically and reflect on it.
So, in one period of my life I might feel materialistic and live accordingly; in a different period I might feel “connected to a higher entity” and live accordingly; and in yet another period I might feel particularly agnostic, stay on a meta-level, and focus on how I am not really sure...
But I don’t think you can call such a process a Bayesian update. Again, that would require placing conditional probabilities on the various metaphysical axioms—but the very concepts of probability and Bayes’ theorem are built upon those axioms. If causality doesn’t always hold, if there are entities that do not need to obey it, then Bayes’ theorem doesn’t apply to them. It’s just your own personal shift in conviction, but you shouldn’t use Bayesian updates as a framework to think about it, nor fall prey to the illusion that it makes your decision process any better in matters like this. It doesn’t. Everyone is just as clueless as everyone else here, and no one has any hope of knowing better. You may pick your metaphysical axioms as they were revealed to you in a dream and they’ll be as good as anything.
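As an aside, it may help to see what the disclaimed move would mechanically look like. A Bayesian update over competing “axiom sets” presupposes a fixed probability space containing all the hypotheses, which is exactly what choosing between metaphysical frameworks cannot assume. A toy sketch (the hypothesis names, priors, and likelihoods are all invented purely for illustration):

```python
# Toy Bayesian update over two hypothetical "axiom sets".
# The numbers are made up; the point is only that the machinery
# requires a fixed prior P(H) and likelihood P(E|H) for every
# hypothesis before any evidence arrives.

priors = {"materialism": 0.7, "higher_entity": 0.3}       # P(H)
likelihoods = {"materialism": 0.2, "higher_entity": 0.6}  # P(E|H) for some evidence E

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
evidence = sum(likelihoods[h] * priors[h] for h in priors)  # P(E), total probability
posteriors = {h: likelihoods[h] * priors[h] / evidence for h in priors}

print(posteriors)
```

The formalism works only because both hypotheses sit inside one shared probability space governed by the usual causal, regular assumptions; a hypothesis that rejects those assumptions has no well-defined likelihood to plug in, which is the circularity the passage above points at.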
You may pick your metaphysical axioms as they were revealed to you in a dream and they’ll be as good as anything.
But that’s not arbitrary at all. That probably reflects some deep subconscious intuitions which are not arbitrary.
And these kinds of intuitive updates happen first, before philosophical reflections on the meta-level.
But then we are the type of people inclined to reflect philosophically on the meta-level about all this. One can argue whether these reflections make any sense, but we’ll still continue to reflect on the meta-level once in a while, and we’ll try to apply some non-rigorous, approximate reasoning, since fully rigorous reasoning is not available.
In fact, this dialog between us is an example of this kind of meta-level reflection.
Sometimes I think about those metaphysical sets of axioms as “philosophical coordinate systems”, and one of those “philosophical coordinate systems” might feel more convenient at one moment, and another one at a different moment, depending on how reality looks...
When I think about this philosophically, I don’t think about one of them being “really true”, and others not being “really true”. Instead, in recent years I tend to think about a multiverse, with me moving between branches of reality, between alternative realities, with those realities being governed by different systems of axioms and having somewhat different phenomenologies.
But that’s not how one operates in the world.