Many of them render considering them not true pointless: if they don’t hold, all my reasoning and senses are invalid anyway, so I might as well give up and save computing time by conditioning on them being true.
I call these sorts of models sticky, in the sense that they are pervasive in our perception and categorisation. Sticky categories are the sort of thing that we have a hard time not taking literally. I haven’t gone into any of this yet, of course, but I like it when comments anticipate ideas and continue trains of thought.
Maybe a short-run/long-run model would be good to illustrate this stickiness. In the short run, perception is fixed; this also fixes certain categories, and the “degree of stickiness” that different categories have. For example, “chair” is remarkably hard to get rid of, whereas “corpuscle” isn’t quite as sticky. In the long run, when perception is free, no category needs to be sticky. At least, not unless we come up with a more restrictive model of possible perceptions. I don’t think that such a restrictive model would be appropriate in a background epistemology. That’s something that agents will develop for themselves based on their needs and perceptual experience.
Many of them are directly implemented in physical systems
Different mathematical models of human perceptual experience might be perfectly suitable for the same purpose. Physics should be the clearest example, since we have gone through many different changes of mathematical model, and are currently experiencing a plurality of theories with different mathematics in cosmology. The differences between classical mechanics and quantum mechanics show this particularly nicely: different formalisms, but both very good models of a large class of experiences.
you can’t have one of them without ALL the others.
I think you slightly underestimate the versatility of mathematicians in making their systems work despite malfunctions. For instance, even if ZFC were proved inconsistent (as Edward Nelson hopes to do), we would not have to abandon it as a foundation. Set theorists would just do some hocus pocus involving ordinals, and voilà! all would be well. And there are several alternative formulations of arithmetic, analysis, topology, etc., which are all adequate for most purposes.
You might try to imagine a universe without math.
In the case of some math, this is easy to do. In other cases it is not. This is because we don’t experience the free-floating perceptual long run, not because certain models are necessary for all possible agents and all possible perceptual content.
I don’t mean just sticky models. The concepts I’m talking about are things like “probability”, “truth”, “goal”, “if-then”, “persistent objects”, etc. Believing that a theory is true when that theory says “true” is not a thing theories can be is obviously silly. Believing that there is no such thing as decision-making and that you’re a fraction of a second old and will cease to be within another fraction of a second might be philosophically more defensible, but conditioning on it not being true can never have bad consequences, while it has a chance of having good ones.
I was talking about physical systems, not physical laws. Computers, living cells, atoms, the fluid dynamics of the air… “Applied successfully in many cases”, where “many” is “billions of times every second”.
Then ZFC is not one of those core ones, just one of the peripheral ones. I’m talking about ones like set theory as a whole, or arithmetic, or Turing machines.
Believing that a theory is true when that theory says “true” is not a thing theories can be is obviously silly.
Oh okay. This is a two-part misunderstanding.
I’m not saying that theories can’t be true, I’m just not talking about this truth thing in my meta-model. I’m perfectly a-okay with models of truth popping up wherever they might be handy, but I want to taboo the intuitive notion and refuse to explicate it. Instead I’ll rely on other concepts to do much of the work we give to truth, and see what happens. And if there’s work that they can’t do, I want to evaluate whether it’s important to include in the meta-model or not.
I’m also not saying that my theory is true. At least, not when I’m talking from within the theory. Perhaps I’ll find certain facets of the correspondence theory useful for explaining things or convincing others, in which case I might claim it’s true. My epistemology is just as much a model as anything else, of course; I’m developing it with certain goals in mind.
I was talking about physical systems, not physical laws. Computers, living cells, atoms, the fluid dynamics of the air… “Applied successfully in many cases”, where “many” is “billions of times every second”.
The math we use to model computation is a model and a tool just as much as computers are tools; there’s nothing weird (at least from my point of view) about models being used to construct other tools. Living cells can be modeled successfully with math, you’re right; but that again is just a model. And atoms are definitely theoretical constructs used to model experiences, the persuasive images of balls or clouds they conjure notwithstanding. Something similar can be said about fluid dynamics.
I don’t mean any of this to belittle models, of course, or make them seem whimsical. Models are worth taking seriously, even if I don’t think they should be taken literally.
Then ZFC is not one of those core ones, just one of the peripheral ones. I’m talking about ones like set theory as a whole, or arithmetic, or Turing machines.
The best example of the three is definitely arithmetic; the other two aren’t convincing. Math was done without set theory for ages, and besides, we have other foundations available for modern math that can be formulated entirely without talking about sets. Turing machines can be replaced with logical systems like the lambda calculus, or with other machine models like register machines.
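That Turing machines can be swapped for the lambda calculus is easy to make concrete. As a minimal sketch (Church numerals, written here as Python lambdas — an illustration, not any claim about how the comment’s author would formalise it), arithmetic falls out of pure function application, with no machine model in sight:

```python
# Church numerals: a number n is "apply a function f, n times".
# No built-in integers are used in the encodings themselves.

ZERO = lambda f: lambda x: x                      # apply f zero times
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application of f
ADD = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # m + n applications
MUL = lambda m: lambda n: lambda f: m(n(f))       # compose: m copies of (n copies of f)

def to_int(n):
    """Read a Church numeral back out by counting applications."""
    return n(lambda k: k + 1)(0)

TWO = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)
print(to_int(ADD(TWO)(THREE)))  # 5
print(to_int(MUL(TWO)(THREE)))  # 6
```

The same exercise works in the other direction: a Turing machine or register machine can evaluate lambda terms, which is the mutual-modeling point made further down the thread.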
Arithmetic is more compelling, because it’s very sticky. It’s hard not to take it literally, and it’s hard to imagine things without it. This is because some of the ideas it is constituted of sit in the core cluster of our categories, i.e. they’re very sticky. But could you imagine a) that some agent might have goals that never require arithmetical concepts, and b) that there could be non-arithmetical models that could be used toward some of the same goals for which we use arithmetic? I can imagine … visualise, actually … both, although I would have a very hard time translating my visual into text without going very meta first, or else writing a ridiculously long post.
Hmm, maybe I need to reveal my epistemology another step towards the bottom. Two things seem relevant here.
I think you SHOULD take your best model literally if you live in a human brain: due to its architecture it can never get completely stuck requiring infinite evidence, but it does have limited computation, and doubt can both confuse it and damage motivation. The few downsides there are can be fixed with injunctions and heuristics.
Secondly, you seem to be taking fuzzy intuitions or direct sensory experience as the most fundamental thing. At my core is instead that I care about stuff, and that my output might determine that stuff. The FIRST thing that happens is conditioning on my decisions mattering, and then I start updating on the input stream of a particular instance/implementation of myself. My working definition of “real” is “stuff I might care about”.
My point wasn’t that physical systems can be modeled BY math, but that they themselves model math. Further, that if the math weren’t True, it wouldn’t be able to model the physical systems.
With the math systems as well, you seem to be coming from the opposite direction. Set theory is a formal system; arithmetic can model it using Gödel numbering, and you can’t prevent that or have it give different results without breaking arithmetic entirely. Likewise, set theory can model arithmetic. It’s a package deal. The lambda calculus and register machines are also members of that list of mutual modeling. I think even basic geometry can be made sort of Turing-complete somehow. Any implementation of any of them must by necessity model all of them, exactly as they are.
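The Gödel-numbering step can be sketched concretely. This is a toy encoding of my own choosing, not any standard one: a sequence of symbol codes becomes a single number via prime-power factorisation, so statements about formulas become statements about numbers — which is the sense in which arithmetic “can’t help” modeling other formal systems:

```python
def primes():
    """Yield 2, 3, 5, ... by trial division (fine for a toy example)."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def goedel_encode(codes):
    """Encode symbol codes [c1, c2, ...] as p1^(c1+1) * p2^(c2+1) * ..."""
    g, ps = 1, primes()
    for c in codes:
        g *= next(ps) ** (c + 1)
    return g

def goedel_decode(g):
    """Recover the symbol codes of a valid encoding from its prime exponents."""
    codes, ps = [], primes()
    while g > 1:
        p, e = next(ps), 0
        while g % p == 0:
            g //= p
            e += 1
        codes.append(e - 1)
    return codes

# Encode the formula "0 = 0" under a made-up alphabet {'0': 0, '=': 1}.
print(goedel_encode([0, 1, 0]))  # 2^1 * 3^2 * 5^1 = 90
print(goedel_decode(90))         # [0, 1, 0]
```

By unique factorisation the encoding is injective, so syntactic operations on formulas (substitution, concatenation, proof-checking) correspond to arithmetical operations on their numbers.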
You can model an agent that doesn’t need the concepts, but it must be a very simple agent with very simple goals in a very simple environment. Too simple to be recognizable as agentlike by humans.