I don’t mean just sticky models. The concepts I’m talking about are things like “probability”, “truth”, “goal”, “if-then”, “persistent objects”, etc. Believing that a theory is true that says “true” is not a thing theories can be is obviously silly. Believing that there is no such thing as decision-making and that you’re a fraction of a second old and will cease to be within another fraction of a second might be philosophically more defensible, but conditioning on it not being true can never have bad consequences while it has a chance of having good ones.
I was talking about physical systems, not physical laws. Computers, living cells, atoms, the fluid dynamics of the air… “Applied successfully in many cases”, where “many” is “billions of times every second”.
Then ZFC is not one of those core ones, just one of the peripheral ones. I’m talking about ones like set theory as a whole, or arithmetic, or Turing machines.
Believing that a theory is true that says “true” is not a thing theories can be is obviously silly.
Oh okay. This is a two-part misunderstanding.
I’m not saying that theories can’t be true, I’m just not talking about this truth thing in my meta-model. I’m perfectly a-okay with models of truth popping up wherever they might be handy, but I want to taboo the intuitive notion and refuse to explicate it. Instead I’ll rely on other concepts to do much of the work we give to truth, and see what happens. And if there’s work that they can’t do, I want to evaluate whether it’s important to include in the meta-model or not.
I’m also not saying that my theory is true. At least, not when I’m talking from within the theory. Perhaps I’ll find certain facets of the correspondence theory useful for explaining things or convincing others, in which case I might claim it’s true. My epistemology is just as much a model as anything else, of course; I’m developing it with certain goals in mind.
I was talking about physical systems, not physical laws. Computers, living cells, atoms, the fluid dynamics of the air… “Applied successfully in many cases”, where “many” is “billions of times every second”.
The math we use to model computation is a model and a tool just as much as computers are tools; there’s nothing weird (at least from my point of view) about models being used to construct other tools. Living cells can be modeled successfully with math, you’re right; but that again is just a model. And atoms are definitely theoretical constructs used to model experiences, the persuasive images of balls or clouds they conjure notwithstanding. Something similar can be said about fluid dynamics.
I don’t mean any of this to belittle models, of course, or make them seem whimsical. Models are worth taking seriously, even if I don’t think they should be taken literally.
Then ZFC is not one of those core ones, just one of the peripheral ones. I’m talking about ones like set theory as a whole, or arithmetic, or Turing machines.
The best example of the three is definitely arithmetic; the other two aren’t convincing. Math was done without set theory for ages, and besides, we have other foundations available for modern math that can be formulated entirely without talking about sets. Turing machines can be replaced with logical systems like the lambda calculus, or with other machine models like register machines.
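As an aside, the interchangeability is easy to demonstrate concretely: Church numerals in the untyped lambda calculus reproduce arithmetic using nothing but function application. A minimal sketch in Python (the names `ZERO`, `SUCC`, etc. are my own illustrative choices, not anything from the discussion):

```python
# Church numerals: the number n is represented as the function
# that applies its argument f exactly n times.
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
MUL  = lambda m: lambda n: lambda f: n(m(f))

def to_int(n):
    # Interpret a Church numeral by counting how often it applies f.
    return n(lambda k: k + 1)(0)

TWO = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)

assert to_int(ADD(TWO)(THREE)) == 5
assert to_int(MUL(TWO)(THREE)) == 6
```

Nothing in the definitions above mentions numbers; the integers only appear when we choose to *interpret* the functions as counts, which is the sense in which one formalism can stand in for another.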
Arithmetic is more compelling, because it’s very sticky. It’s hard not to take it literally, and it’s hard to imagine things without it. This is because some of the ideas it constitutes are in the core cluster of our categories, i.e. they’re very sticky. But could you imagine a) that some agent might have goals that never require arithmetical concepts, and b) that there could be non-arithmetical models that could be used toward some of the same goals for which we use arithmetic? I can imagine … visualise, actually … both, although I would have a very hard time translating my visual into text without going very meta first, or else writing a ridiculously long post.
Hmm, maybe I need to reveal my epistemology another step towards the bottom. Two things seem relevant here.
I think you SHOULD take your best model literally if you live in a human brain: due to its architecture it can never get completely stuck requiring infinite evidence, but it does have limited computation, and doubt can both confuse it and damage motivation. The few downsides there are can be fixed with injunctions and heuristics.
Secondly, you seem to be treating fuzzy intuitions or direct sensory experience as the most fundamental. At my core is instead that I care about stuff, and that my output might determine that stuff. The FIRST thing that happens is conditioning on my decisions mattering, and only then do I start updating on the input stream of a particular instance/implementation of myself. My working definition of “real” is “stuff I might care about”.
My point wasn’t that the physical systems can be modeled BY math, but that they themselves model math. Further, if the math weren’t True, it wouldn’t be able to model the physical systems.
With the math systems as well, you seem to be coming from the opposite direction. Set theory is a formal system; arithmetic can model it using Gödel numbering, and you can’t prevent that or have it give different results without breaking arithmetic entirely. Likewise, set theory can model arithmetic. It’s a package deal. The lambda calculus and register machines are also members of that list of mutual modeling. I think even basic geometry can be made sort of Turing-complete somehow. Any implementation of any of them must by necessity model all of them, exactly as they are.
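For concreteness, the Gödel-numbering trick mentioned above can be sketched in a few lines. The symbol table and helper names here are hypothetical, chosen just to show how pure arithmetic (multiplication and divisibility) can encode and recover formal syntax:

```python
# Encode a sequence of symbol codes as one number: the i-th symbol
# with code c contributes a factor p_i ** c, where p_i is the i-th
# prime. Decoding is unique by the fundamental theorem of arithmetic.

def primes(n):
    """First n primes by trial division (fine for a toy example)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_encode(codes):
    number = 1
    for p, c in zip(primes(len(codes)), codes):
        number *= p ** c
    return number

def godel_decode(number, length):
    codes = []
    for p in primes(length):
        c = 0
        while number % p == 0:
            number //= p
            c += 1
        codes.append(c)
    return codes

# A toy symbol table (my own choice, not standard): 1='0', 2='S', 3='='
formula = [2, 1, 3, 2, 1]          # reads as "S0 = S0"
n = godel_encode(formula)          # 2**2 * 3**1 * 5**3 * 7**2 * 11**1
assert godel_decode(n, len(formula)) == formula
```

The encoding is exactly why the modeling relation can’t be switched off without breaking arithmetic: any system that gets factorization right thereby carries a faithful copy of the encoded formal system.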
You can model an agent that doesn’t need these concepts, but it must be a very simple agent with very simple goals in a very simple environment. Too simple to be recognizable as agentlike by humans.