I’ve been thinking about the set-theoretic multiverse, particularly what it implies about mathematical truth. While I understand the pragmatic benefits of the multiverse view, I’m struggling with its philosophical implications.
The multiverse view suggests that statements like the Continuum Hypothesis aren’t absolutely true or false, but rather true in some set-theoretic universes and false in others. We have:
Gödel’s Constructible Universe (L) where CH is true
Forcing extensions where CH is false
Various universes with different large cardinal axioms
However, I find myself appealing to basic logical principles like the law of non-contradiction. Even if we can’t currently prove certain axioms, doesn’t this just reflect our epistemological limitations rather than implying all axioms are equally “true”?
To make an analogy: physical theories being underdetermined by evidence doesn’t mean reality itself is underdetermined. Similarly, our inability to prove CH doesn’t necessarily mean it lacks a definite truth value.
Questions I’m wrestling with:
What makes certain axioms “true” beyond mere consistency?
Is there a meaningful distinction between mathematical existence and consistency?
Can we maintain mathematical realism while acknowledging the practical utility of the multiverse approach?
How do we reconcile Platonism with independence results?
I’m leaning towards a view that maintains objective mathematical truth while explaining why we need to work with multiple models pragmatically. But I’m very interested in hearing other perspectives, especially from those who work in set theory or mathematical logic.
Axioms are only “true” or “false” relative to a model. In some cases the model is obvious, e.g. the intended model of Peano arithmetic is the natural numbers. The intended model of ZFC is a bit harder to get your head around. Usually it is taken to be defined as the union of the von Neumann hierarchy over all “ordinals”, but this definition depends on taking the concept of an ordinal as pretheoretic rather than defined in the usual way as a well-founded totally ordered set.
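For reference, the hierarchy in question is built by transfinite recursion (standard definitions, nothing beyond what the paragraph above already assumes):

V₀ = ∅;  V_{α+1} = P(V_α);  V_λ = ⋃_{α<λ} V_α for limit ordinals λ;  V = ⋃_α V_α over all ordinals α.

Each successor stage collects all subsets of the previous stage, and the intended model is the union over “all” ordinals—which is exactly where the pretheoretic notion of ordinal does its work.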
An axiom system is consistent if and only if it has some model, which may not be the intended model. So there is a meaningful distinction, but the only way you can interact with that distinction is by finding some way of distinguishing the intended model from other models. This is difficult.
The models that appear in the multiverse approach are indeed models of your axiom system, so it makes perfect sense to talk about them. I don’t see why this would generate any contradiction with also being able to talk about a canonical model.
Independence results are only about what you can prove (or equivalently what is true in non-canonical models), not about what is true in a canonical model. So I don’t see any difficulty to be reconciled.
The following is probably just an ELI5 version of Dacyn’s answer:
Just because you use a word (such as “set”), it doesn’t mean that it has an unambiguous meaning.
Imagine the same discussion about “numbers”. Can you subtract 5 from 3? In the universe of natural numbers, the answer is no. In the universe of all integers, the answer is yes. Is there a number such that if you multiply it by itself, the result is 2? In the universe of rational numbers, the answer is no; in the universe of real numbers, the answer is yes.
Here you probably don’t see any problem. Some statements can be true about real numbers and false about rational numbers, because those are two different things. A person who talks about “numbers” in general needs to be more specific. As we see here, defining addition, subtraction, multiplication, and division is still not enough to allow us to figure out the answer to “∃a: a × a = 2”.
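To put the same point in code (a toy sketch; the brute-force search over small fractions is just for illustration and obviously isn’t a proof):

```python
from fractions import Fraction

# "Does there exist a with a × a = 2?" -- the answer depends on the universe.
def exists_square_root_of_2(candidates):
    return any(a * a == 2 for a in candidates)

# Universe of (small) rationals: every p/q with p, q up to a bound. No luck --
# and the classical irrationality proof shows no larger search would help.
rationals = [Fraction(p, q) for q in range(1, 200) for p in range(1, 400)]
print(exists_square_root_of_2(rationals))   # False

# Universe of reals: the object exists (here only approximated by a float).
print((2 ** 0.5) ** 2)                      # ~2.0000000000000004
```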
It’s similar with sets. The ZF(C) axioms are simply not enough to pinpoint what you actually mean by a “set”. They reduce the space of possible meanings, sufficiently to let us prove many interesting things, but there are still (infinitely) many possible meanings compatible with all of the axioms. For some of those meanings, CH is true; for other meanings, CH is false.
Is there a “set” greater than ℵ₀ but smaller than 2^ℵ₀? It depends.
Is there a “number” that is greater than 2 but smaller than 3? It depends.
Nothing. What do you mean by “true” here? Matching our physical universe? That in general is not what math does. The natural numbers may already include some that exceed the number of particles in our universe. The real numbers are inspired by measuring actual things, but do we really need an infinite number of decimal places? On top of all that, sets are merely mental constructs. The set {2, 8, 33897798} does not imply anything about our world.
No.
Maybe I am missing some important aspect, but the “multiverse” seems to me just an intuitively helpful metaphor; the actual problem is more like this: is the natural number “2” the same object as the integer “2”, the real number “2.0”, the Gaussian integer “2+0i”, the complex number “2.0+0.0i”, etc.?
One possible approach is to say: those are different domains of discourse… uhm, let’s call them parallel universes to make it intuitive for the sci-fi fans. The object in a parallel universe is a different object, but in some sense Captain Picard from the parallel universe is a natural counterpart of our Captain Picard. They are generally the same unless specified otherwise for plot-relevant reasons, just like “2.0” from the real-number universe is the natural counterpart of “2” from the integer universe, except that the former can be divided by three and the latter cannot. (Some things do not have a counterpart in the other universe, etc.) This feels like a natural approach for real vs complex numbers, and probably like overkill for natural numbers vs integers.
The assumption of different universes kinda goes against Occam’s razor; we could simply move all these objects into the same universe (different planets, perhaps) and make a story about a spaceship captain from Earth and a spaceship captain from Mars. Now we don’t have the concept of a natural counterpart, and the analogies need to be made explicit: the horses on Earth correspond to the giant six-legged lizards on Mars. There is the set of natural numbers, the set of real numbers, and a function N → R which maps the object “2” to the object “2.0”. More importantly, there is no such thing as “addition”; there are actually two different things, “natural-number addition” and “real-number addition”, and we call the latter an extension of the former if, for each pair of natural numbers, the counterpart of their sum is the same as the sum of their counterparts. The question of whether “2” and “2.0” are intrinsically the same object can become kinda meaningless if we always talk about numbers qua members of one or the other set. They could be the same object, or they could be different objects; the important thing is what they do, i.e. how they participate in various functions and relations.
(This kinda reminds me of Korzybski’s “Aristotelian” vs “non-Aristotelian” thinking, where the former is about what things are, while the latter is about how things are related to each other. Is “2” the same as “2.0”? A meaningless question, from the non-A perspective. The important thing is what they do; how they are related to other numbers. The important facts about “2” are that “1+1=2” and “2+2=4”, etc. We can show that we can map N to R in a way that preserves all existing addition and multiplication, and whenever we do so, “2.0” is the image of “2”. And that’s all there is.)
With sets, I guess it is similar. If we have different definitions of what a “set” means, is the empty set according to definition X the same mathematical object as the empty set according to definition Y? The question is meaningless, from the non-A perspective; but to avoid all the complicated philosophy, it is easier to say that one lives in universe X and the other lives in universe Y, so they are “kinda the same, but not the same”. But to be precise, there is no such thing as an “empty set”, only something that plays the role of an empty set in a certain system. Some systems might not even have such a role, or they could have multiple distinct empty sets—for example, we could imagine a system where each set has a type, and the “empty set of integers” is different from the “empty set of reals”, because it has a different content type.
(Now I suspect I have opened a new can of worms, like how to reconcile Platonism with Korzybski’s non-A thinking, and… that would be a long debate that I would prefer to avoid. My quick opinion is that perhaps we should aim for some kind of “Platonism of function” rather than “Platonism of essence”, i.e. what the abstract objects do rather than what they are. The question is whether we should still call this approach “Platonism”, perhaps some other name would be better.)
[note: I dabble, at best. This is likely wrong in some ways, so I look forward to corrections. ]
It’s REALLY hard to distinguish between “unprovable” and “unknown truth value”. In fact, this is recursively hard—there are lots of things that are not proven, but it’s not known if they’re provable. And so on.
Mathematical truth is very much about provability from axioms.
“true” is hard to apply to axioms. There’s the common-sense version of “can’t find a counterexample, and have REALLY tried”, which is unsatisfying but pretty effective for practical use. The formal version is just not to use “true”, but “chosen” for axioms. Some are more USEFUL than others. Some are more easily justified than others. It’s not clear how to know which (if any) are true, but that doesn’t make them equally true.
Not a correction (because this is all philosophy) but the problem with this “hard formalism” stance:
is that statements of the form “statement x follows from axiom set S” are themselves arithmetical statements that may or may not even be provable from a given standard axiom system. I would guess that you’re implicitly taking for granted that Σ1 and Π1 statements in the arithmetic hierarchy have inherent truth in order to at least establish a truth value for such statements. Most people do this implicitly; it’s equivalent to saying that every Turing machine either halts or it doesn’t (and the behavior has nothing to do with someone’s axiom system).
I assume you’re familiar with the parallel postulate in classical geometry being independent of the other axioms? That independence corresponds to the existence of spherical/hyperbolic geometries (i.e. actual models in which the axiom is false) versus normal flat Euclidean geometry (i.e. actual models in which it is true).
To me, this is a clear example of there being no such thing as an “objective” truth about the validity of the parallel postulate—you are entirely free to assume either it or incompatible alternatives. You end up with equally valid theories; it’s just that those theories are applicable to different models, and those models are each useful in different situations, so the only thing it comes down to is which models you happen to want to use or explore or prove things about on a given day.
Similarly for the huge variety of different algebraic or topological structures (groups, ordered fields, manifolds, etc) - it is extremely common to have statements that are independent of the axioms, e.g. in a ring it is independent of the axioms whether multiplication is commutative or not. And both choices are valid. We have commutative rings, and we have noncommutative rings, and both are self-consistent mathematical structures that one might wish to study.
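As a concrete sketch of that last point (plain Python, just exhaustively checking two finite structures that both satisfy the ring axioms):

```python
from itertools import product

# Ring 1: integers mod 4. Multiplication commutes.
Z4 = range(4)
print(all((a * b) % 4 == (b * a) % 4 for a, b in product(Z4, repeat=2)))  # True

# Ring 2: 2x2 matrices with entries mod 2. Multiplication need not commute.
def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

M2 = [((a, b), (c, d)) for a, b, c, d in product(range(2), repeat=4)]
print(all(mat_mul(A, B) == mat_mul(B, A) for A, B in product(M2, repeat=2)))  # False
```

Commutativity is simply independent of the ring axioms: both structures are equally legitimate models.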
Loosely analogous to how one can write a compiler/interpreter for a programming language within other programming languages, some theories can easily simulate other theories. Set theories are particularly good and convenient for simulating other theories, but one can also simulate set theories within other seemingly more “primitive” theories (e.g. within theories of basic arithmetic via Gödel numbering). This might be analogous to someone writing a C compiler in Brainfuck. Just as it’s meaningless to talk about whether a programming language, or a given sub-version or feature extension of one, is more “objectively true” than another, there are many who take the position that the same holds for different set theories.
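A minimal sketch of the Gödel-numbering trick just mentioned (the prime-power encoding is one standard choice among many): a finite sequence of symbol codes becomes a single natural number, so a theory that only talks about numbers can “host” statements of another theory.

```python
def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine at this scale)."""
    found, n = [], 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def encode(seq):
    """Encode a sequence of positive integers as 2^s0 * 3^s1 * 5^s2 * ..."""
    n = 1
    for p, e in zip(primes(), seq):
        n *= p ** e
    return n

def decode(n):
    """Recover the sequence by reading off the prime exponents."""
    seq = []
    for p in primes():
        if n == 1:
            return seq
        e = 0
        while n % p == 0:
            n, e = n // p, e + 1
        seq.append(e)

print(encode([2, 1, 3]))   # 2^2 * 3^1 * 5^3 = 1500
print(decode(1500))        # [2, 1, 3]
```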
When you say you’re “leaning towards a view that maintains objective mathematical truth” with respect to certain axioms, is there some fundamental principle by which you’re discriminating the axioms that you want to assign objective truth from axioms like the parallel postulate or the commutativity of rings, which obviously have no objective truth? Or do you think that even in these latter cases there is still an objective truth?
This is true, but there’s an important caveat: Mathematicians accepted Euclidean geometry long before they accepted non-Euclidean geometry, because they took it to be intuitively evident that a model of Euclid’s axioms existed, whereas the existence of models of non-Euclidean geometry was AFAIK regarded as non-obvious until such models were constructed within a metatheory assuming Euclidean space.
From the perspective of modern foundations, it’s not so important to pick one kind of geometry as fundamental and use it to construct models of other geometries, because we now know how to construct models of all the classical geometries within more fundamental foundational theories such as arithmetic or set theory. But OP was asking about incompatible variants of the axioms of set theory. We don’t have a more fundamental theory than set theory in which to construct models of different set theories, so we instead assume a model of set theory and then construct models of other set theories within it.
For example, one can replace the axiom of foundation of ZFC with axioms of anti-foundation postulating the existence of all sorts of circular or infinitely regressing chains of membership relations between sets. One can construct models of non-well-founded set theories within well-founded set theories and vice versa, but I don’t know of anyone who claims that therefore both kinds of set theory are equally valid as foundations. The existence of models of well-founded set theories is natural to assume as a foundation, whereas the existence of models satisfying strong anti-foundation axioms is not intuitively obvious and is therefore treated as a theorem rather than an axiom, the same way non-Euclidean geometry was historically.
Yes, there are ways of interpreting ZFC in a theory of natural numbers or other finite objects. What there is not, however, is any known system of intuitively obvious axioms about natural numbers or other finite objects, which makes no appeal to intuitions about infinite objects, and which is strong enough to prove that such an interpretation of ZFC exists (and therefore that ZFC is consistent). I don’t think any way of reducing the consistency of ZFC to intuitively obvious axioms about finite objects will ever be found, and if I live to see a day when I’m proved wrong about that, I would regard it as the greatest discovery in the foundations of math since the incompleteness theorems.
It doesn’t, and they are fundamentally equal. The only reality is the physical one—there is no reason to complicate your ontology with platonically existing math. Math is just a collection of useful templates that may help you predict reality, and that it works is always just a physical fact. The best case is that we’ll come to know the true laws of physics, they will work like some subset of math, and then the axioms of that subset would be actually true. You can make a guess about which axioms are compatible with true physics.
Also there is Shoenfield’s absoluteness theorem, which I don’t understand, but which maybe prevents empirical grounding of CH?
This is an appealingly parsimonious account of mathematical knowledge, but I feel like it leaves an annoying hole in our understanding of the subject, because it doesn’t explain why practicing math as if Platonism were correct is so ridiculously reliable and so much easier and more intuitive than other ways of thinking about math.
For example, I have very high credence that no one will ever discover a deduction of 0=1 from the ZFC axioms, and I guess I could just treat that as an empirical hypothesis about what kinds of physical instantiations of ZFC proofs will ever exist. But the early set theorists weren’t just randomly sampling the space of all possible axioms and sticking with whatever ones they couldn’t find inconsistencies in. They had strong priors about what kinds of theories should be consistent. Their intuitions sometimes turned out to be wrong, as in the case of Russell’s paradox, but overall their work has held up remarkably well, after huge amounts of additional investigation by later generations of mathematicians.
So where did their intuitions come from? As I said in my answer, I have doubts about Platonism as an explanation, but none of the alternatives I’ve investigated seem to shed much light on the question.
I will give an eminently practical answer to the philosophical discussion.
It’s too hard to get anywhere close to the mathematical truth, so people have given up on mathematical truth for good reasons, IMO.
I will also say that this is why I wish computability theory had finer distinctions beyond decidable/undecidable, had more of a complexity-theoretic flavor in distinguishing between the hardness of problems, and was more explicit about the fact that many of the impossibility results are relative to a certain set of capabilities, because in a different world, I could see computability theory having much more relevance to this debate.
In essence, I’m arguing for pinning down exactly how hard a problem actually is, closer to the style of complexity theorists, who care not only about whether a problem is hard but about exactly how hard it is.
I’ll give an example, just to show everyone my position on the matter.
If you could decide the totality problem (whether a given Turing machine halts on every input), you could also decide whether P equals NP, which in fact requires a much weaker oracle.
Some examples of computational models that could decide such problems include Turing machines with an infinity of states, Blum Shub Smale machines with arbitrary real constants, and probabilistic Turing machines with non-recursive biases for the coin.
See these links for more (you can ignore all the details about physical plausibility, since they’re not necessary for the complexity and computability results above):
https://arxiv.org/abs/math/0209332
https://arxiv.org/abs/cs/0401019
http://www.amirrorclear.net/files/the-many-forms-of-hypercomputation.pdf
So I’m pretty firmly on the side of mathematical truth, but with an enormous caveat: what independence proofs show us is that we need new computational capabilities, not new axioms. Thus we can either accept that it’s too hard to get at the truth and move on, or, if someone does have an actually viable plan to get the computational capacity needed, then we can talk and have a more productive discussion about whether the discovery is real and how to exploit it, if applicable.
So the most useful way to proceed is this: whenever an independence result is published, we should immediately try to figure out how complicated it is to determine whether the statement is true according to our standard computability hierarchies, ideally with completeness results along the lines of NP-complete or RE-complete problems, and then give up on proving or disproving it unless someone proposes a plausible method for getting that computational power.
Edit: I’ve edited out the continuum hypothesis example, since I had misinterpreted what exactly it was saying: it’s saying that it’s a Π^2_1-conservative extension, not that the statement actually has that complexity.
See here for more:
https://en.wikipedia.org/wiki/Conservative_extension#Examples
Intuitively, the computational models you were suggesting seem like they would only decide statements that are at most second-order; can they really answer third-order arithmetical questions?
The computational models are far, far more powerful than only answering second-order questions.
The infinite-state Turing machine explicitly computes every function from the natural numbers to the natural numbers, which means it can decide any arithmetical question, no matter how large the n in n-th-order arithmetic gets.
As a corollary, this means it can decide any language that is normally defined in complexity theory, and solve an uncountably infinite set of problems.
The same holds for the Blum Shub Smale machine, assuming arbitrary real constants are allowed.
Finally, the probabilistic Turing machine with a non-recursive bias acts as a universal oracle machine: it can code any oracle set X as a probability p_X such that, given a coin with that probability, it can compute any function recursive in the chosen oracle set. And given that the continuum hypothesis is a Π^2_1 statement, we can certainly give a probabilistic Turing machine a non-recursive probability that corresponds to the oracle for the continuum hypothesis.
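A sketch of the coding step just described (my own toy encoding, truncated to finitely many bits so it actually runs; a genuinely non-recursive bias is of course not computable):

```python
from fractions import Fraction

def bias_from_oracle(X, bits):
    """Write the oracle set X into the binary expansion of a bias p:
    n is in X iff the (n+1)-th binary digit of p is 1."""
    return sum(Fraction(1, 2 ** (n + 1)) for n in X if n < bits)

def query(p, n):
    """Answer 'is n in X?' by reading the (n+1)-th binary digit of p."""
    return int(p * 2 ** (n + 1)) % 2 == 1

X = {0, 2, 5}                    # stand-in for some oracle set
p = bias_from_oracle(X, bits=8)  # p = 0.10100100... in binary
print([n for n in range(8) if query(p, n)])  # [0, 2, 5]
```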
And at any rate, the probability could code an oracle set for the truth of third-order arithmetic sentences, which is strictly stronger than an oracle for the continuum hypothesis, so this doesn’t truly matter.
Blum Shub Smale machines with arbitrary real constants
I’m going to stick with BSS because the Wikipedia page for that is pretty short and easy to understand. It seems pretty straightforward to construct injections:
configuration space → R
set of possible instruction sets → R
So asking something like “does there exist an n such that this machine with this instruction set hits this configuration in n steps” strikes me as a second order arithmetical question. I’m handwaving here, possibly in a really bad way that’s getting me into trouble, but that’s what’s throwing me off.
What’s throwing you off, I think, is that classically defined second-order arithmetic questions are computable if we allow any function from N to N, and the set R of real numbers can, from a cardinality viewpoint, be shown to be equivalent to the power set of the natural numbers, which includes all functions from the natural numbers to the natural numbers.
Given that the Blum Shub Smale machine can compute every function from N to N, this also implies it must decide second-order arithmetical questions as classically defined.
To clarify: I agree that BSS machines can answer second order questions (maybe not the full analytic hierarchy; that’s something I’ll think more about). My confusion is about this:
where you said that all nth-order arithmetic questions can be decided by some such machine. My above comment was explaining why the questions being answered strike me as second-order.
Edit for clarifying the relevance here: it strikes me as plausible that questions about BSS machines halting/not halting have inherent truth; if this would imply that third-order arithmetical questions like CH have inherent truth, I would be surprised and update my view about mathematical truth.
The reasoning is pretty similar to the second-order case. The reason is that no matter how many orders we add to arithmetic—first-order, second-order, nth-order—formal languages and theories are, by the usual definition, constrained to countably infinite sets of statements, while the ability to compute any function from N to N gives you an uncountably infinite set of functions to compute, which is always larger than any individual formal language or theory in the usual sense.
Now that the original comment has changed, I’m more unsure about what exactly you mean. Can you be more precise about what you mean by being able to compute arbitrary nth-order arithmetical statements? Is there a BSS machine that halts if and only if CH is true? Is there a BSS machine that halts if and only if there’s a choice function on the set of non-empty subsets of R?
I admit, I haven’t found any such explicit examples yet.
I wish this was more of a common practice to give explicit examples of these mathematical statements in more computational terms.
For the nth-order arithmetical statement issue, what I’m talking about is essentially a generalization of the arithmetical hierarchy, where there’s Σ^0_1, Σ^0_2, Σ^0_3 and so on, as defined at the link below, until it stops at Σ^0_ω, which then continues on to the analytical hierarchy from Σ^1_1, Σ^1_2, Σ^1_3, until it too stops at Σ^1_ω, and this can go on essentially forever.
Essentially, I’m talking about this in a generalized form:
https://en.wikipedia.org/wiki/Arithmetical_hierarchy
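For reference, the pattern being generalized (standard definitions, just restated in the notation above):

Σ^0_1: ∃n φ(n) with φ decidable;  Π^0_1: ∀n φ(n) with φ decidable;
Σ^0_{k+1}: ∃n ψ(n) with ψ ∈ Π^0_k;  Π^0_{k+1}: ∀n ψ(n) with ψ ∈ Σ^0_k;
Σ^1_k, Π^1_k: the same alternation pattern, but with quantifiers ranging over sets of naturals (equivalently, functions N → N) rather than over individual numbers.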
The formal reason why a BSS machine can compute all of this comes down, in the end, to the fact that it can compute every function from the natural numbers to the natural numbers, which as a set is always larger than any arithmetical hierarchy; the claim that we can compute nth-order arithmetic for any n follows easily.
So I didn’t mean for you to clarify what you meant by nth-order statements, but rather the claim that BSS machines can decide them, because that seems way too strong to be true. I’m also a bit more skeptical because:
is not true; the first level of the analytic hierarchy sits on top of the hyperarithmetic hierarchy. And my understanding is that finding a way to have an analogous picture of going from something like a “hyperanalytic hierarchy” to the bottom level of third-order arithmetic is something highly non-trivial/impossible/ill-defined, but this isn’t my area of math. So I think it’s easier to ignore the hierarchy aspect and just consider nth-order statements that can be finitely expressed in the language of set theory involving the (n−1)st power set of N and lower. I would be very, very surprised if BSS machines could decide all such third-order statements, but if you have a reference that implies it, I’d take a look.
Isn’t the hyperarithmetic hierarchy just contained in Δ^1_1 statements, or equivalently Σ^0_ω statements, where you decide the truth of all first-order arithmetic statements?
I will for now retract the claim that BSS machines can decide all third-order statements, mostly because I don’t have a reference for the specific fact. But I would be very surprised if BSS machines couldn’t decide all third-order arithmetic statements, or any order of arithmetic that can be finitely expressed in the language of set theory. ZFC can define a truth predicate that works for all arithmetical statements, no matter which order of arithmetic you are considering, for the reason shown below; and all finitely expressible statements in the language of arithmetic or set theory are countable, while the problems a BSS machine can decide and the functions it can compute are uncountable, so the BSS machine is larger than any finitary formal theory of arithmetic or set theory.
I think you have some significant gaps in your understanding. The Σ^0_ω statements contain all first-order arithmetic statements, yes. But that’s just the beginning of the hyperarithmetic hierarchy. You can then discuss sentences that have a quantifier over Σ^0_ω statements, which are Σ^0_{ω+1}, and iterate this to get Σ^0_{2ω}. Then you can iterate again. Etc. Abstracting that process gets you to Σ^0_{ω²}. And you keep going again and again. This is still very low in the hyperarithmetic hierarchy.
My understanding (again, this is not my area) is that given any computable ordinal (i.e. any well-ordering of the naturals that can be encoded by the outputs of some Turing machine), you can construct a corresponding level of the hyperarithmetic hierarchy. So how do you discuss an arbitrary hyperarithmetical statement? You need to be able to quantify over the Turing machines that spit out well-orderings. Verifying that an ordering is a well-ordering requires quantifying over all subsequences—a second-order statement. That’s why the abstraction of the hyperarithmetic hierarchy is called the first level of the analytic hierarchy, the Δ^1_1 statements as you said.
Alright, I’ll have to do homework today, so I can’t respond for now.
The law of non-contradiction isn’t true in all “universes”, either. It’s not true in paraconsistent logic, specifically.
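For concreteness, here is a tiny sketch of Priest’s Logic of Paradox (one standard paraconsistent logic; my choice of example), with a third value B read as “both true and false”. A ∧ ¬A can come out designated without everything collapsing:

```python
# Truth values ordered F < B < T; designated values are T and B.
ORDER = {'F': 0, 'B': 1, 'T': 2}

def neg(a):
    return {'T': 'F', 'B': 'B', 'F': 'T'}[a]

def conj(a, b):
    return min(a, b, key=ORDER.get)  # conjunction = minimum in the order

for a in ('T', 'B', 'F'):
    v = conj(a, neg(a))
    print(f"A = {a}:  A AND NOT-A = {v}  (designated: {v in ('T', 'B')})")
# A = B yields B, which is designated -- a tolerated "true contradiction",
# yet the logic does not explode into triviality.
```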
Arguably, “basic logical principles” are those that are true in natural language. Otherwise nothing stops us from considering absurd logical systems where “true and true” is false, or the like. Likewise, “one plus one is two” seems to be a “basic mathematical principle” in natural language. Any axiomatization which produces “one plus one is three” can be dismissed on grounds of contradicting the meanings of terms like “one” or “plus” in natural language.
The trouble with set theory is that, unlike logic or arithmetic, it often doesn’t involve strong intuitions from natural language. Sets are a fairly artificial concept compared to natural language collections (empty sets, for example, can produce arbitrary nestings), especially when it comes to infinite sets.
That’s where the problem starts, not where it stops. Natural language supports a bunch of assumptions that are hard to formally reconcile: if you want your strict PNC, you have to give up on something else. The whole 2500-year history of logic has been a history of trying to come up with formal systems that fulfil various desiderata. It is now formally proven that you can’t have all of them at once, and it’s not obvious what to keep and what to ditch. (Gödelian problems can be avoided with lower-power systems, but that’s another tradeoff, since high power is desirable.)
Formalists are happy to pick a system that’s appropriate for a practical domain, and to explore the theoretical properties of different systems in parallel.
Platonists believe that only one axiom system has truth in addition to usefulness, but can’t agree which one it is, so it makes no difference in practice.
I’m not seeing a specific problem with sets—you can avoid some of the problems of naive set theory by adding limitations, but that’s tradeoffs again.
“You can’t have all the intuitive principles in full strength in one system”
doesn’t imply
“adopt unintuitive axioms”.
Even formalists don’t believe all axiomisations are equally useful.
What’s 12+1?
They’re ambiguous in natural language, hence the need for formalisation.
It involves some intuitions. It works like clubs: being a senator is being a member of a set, not exemplifying a universal.
If you want finitism, you need a principled way to select a largest finite number.
I have spent a long time looking in vain for any reason to think ZFC is consistent, other than that it holds in the One True Universe of Sets (OTUS, henceforth). So far I haven’t found anything compelling, and I am quite doubtful at this point that any such justification exists.
Just believing in the OTUS seems to provide a perfectly satisfactory account of independence and nonstandard models, though: They are just epiphenomenal shadows of the OTUS, which we have deduced from our axioms about the OTUS. They may be interesting and useful (I rather like nonstandard analysis), but they don’t have any foundational significance except as a counterexample showing the limitations of what formal systems can express. I take it that this is more or less what you have in mind when you say
It’s disappointing that we apparently can’t answer some natural questions about the OTUS, like the continuum hypothesis, but Gödel showed that our knowledge of the OTUS was always bound to be incomplete 🤷‍♂️.
Having said that, I still don’t find the Platonist view entirely satisfactory. How do humans come to have knowledge that the OTUS exists and satisfies the ZFC axioms? Supposing that we do have such knowledge, what is it that distinguishes mathematical propositions whose truth we can directly perceive (which we call axioms) from other mathematical propositions (which we call conjectures, theorems, etc.)?
An objection more specific to set theory, as opposed to Platonism more generally, would be: given a supposed “universe” V of “all” sets, the proper classes of V are set-like objects, so why can’t we extend the cumulative hierarchy another level higher to include them, and continue that process transfinitely? Or, if we can do that, then we can’t claim to ever really be quantifying over all sets. But if that’s so, then why should we believe that the power set axiom holds, i.e. that any of these partial universes of sets that we can quantify over is ever large enough to contain all subsets of N?
But every alternative to Platonism seems to entail skepticism about the consistency of ZFC (or even much weaker foundational theories), which is pragmatically inconvenient, and philosophically unsatisfactory, inasmuch as the ZFC axioms do seem intuitively pretty compelling. So I’m just left with an uneasy agnosticism about the nature of mathematical knowledge.
Getting back to the question of the multiverse view, my take on it is that it all seems to presuppose the consistency of ZFC, and realism about the OTUS is the only good reason to make that presupposition. In his writings on the multiverse (e.g. here), Joel Hamkins seems to be expressing skepticism that there is even a unique (up to isomorphism) standard model of N that embeds into all the nonstandard ones. I would say that if he thinks that, he should first of all be skeptical that the Peano axioms are consistent, to say nothing of ZFC, because the induction principle rests on the assumption that “well-founded” means what we think it means and is a property possessed by N. I have never seen an answer to this objection from Hamkins or another multiverse advocate, but if anyone claims to have one I’d be interested to see it.