Like any proof in a formal system, you can conclude that “the idea is consistent unless the formal system is inconsistent.” But that’s a tautology.
I wasn’t saying that. If you believe that a formal system captures the idea you’re considering, in the sense that the idea is about properties of (some of) the models of that system, then the system telling you the idea doesn’t make sense is some evidence that it doesn’t. It’s also possible that the formal system is simply broken, or that it doesn’t actually capture the idea, in which case you need to look for a different formal system to perceive it properly.
If you’re not willing to say that ZF refers to things in the real world i.e. has ontological content, why aren’t you skeptical of it?
ZF clearly refers to plenty of things unrelated to the physical world, but if it isn’t broken (and it doesn’t look like it is), it can talk about many relevant ideas and help answer questions about them. It can tell you, for example, that some object lacks some property, or that some specification is contradictory.
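A miniature illustration of that last point (not ZF itself, just a toy propositional “specification”, with constraints and names invented for this sketch): a formal check can exhaustively rule out every model, which is exactly what it means for the specification to be contradictory.

```python
from itertools import product

# A toy "specification" as propositional constraints over three flags.
# Together they demand: p implies q, q implies r, p holds, and r fails --
# which no assignment of truth values can satisfy.
constraints = [
    lambda p, q, r: (not p) or q,   # p implies q
    lambda p, q, r: (not q) or r,   # q implies r
    lambda p, q, r: p,              # p holds
    lambda p, q, r: not r,          # r fails
]

def satisfiable(cs):
    """Return True iff some truth assignment satisfies every constraint."""
    return any(
        all(c(p, q, r) for c in cs)
        for p, q, r in product([False, True], repeat=3)
    )

print(satisfiable(constraints))       # the full spec is contradictory
print(satisfiable(constraints[:-1]))  # dropping one constraint restores a model
```

Dropping any one of the four constraints leaves a satisfiable specification, so the contradiction is a joint property of the whole spec rather than of any single clause.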
(I now know a better term for my current philosophy of ontology: “mathematical monism”. From this point of view, inference systems are just another kind of abstract object, as is their physical implementation in mathematicians’ brains. Inference systems are versatile tools for “perceiving” other facts, in the sense that (some of) the properties of those facts get reflected as properties of the inference systems, and consequently as properties of the physical devices implementing or simulating them. An inference system may be unable to pinpoint any one model of interest, yet it still reflects that model’s properties, which is why failing to focus on a particular model, or to describe what it is, is not automatically a failure to perceive some of its properties. Morality is perhaps undefinable in this sense.)