But the “turtles all the way down” scenario, or the case where the act of discovery itself changes the law...
Why can’t that also be modeled? Even if the model is self-modifying meta-recursive turtle-stack infinite “nonsense”, there probably exists some way to describe it, model it, understand it, or at least point towards it.
This very “pointing towards it” is what I’m doing right now. I postulate that no matter the form it takes, even if it seems logically nonsensical, there’s a model which can explain the results in proportion to how much we understand about it (though we may never be able to understand it perfectly).
Currently, the best fuzzy picture of that model, as far as I can pinpoint what I’m referring to, is precisely what you’ve just described:
“we can keep refining our models and explain more and more inputs, and discover new and previously unknown inputs and explain them too, and predict more, and so on”.
That’s what I’m pointing at. I don’t care how many turtle stacks or infinities or regresses or recursions or polymorphic interfaces or variables or volatilities there are, either. The “algorithm” is the hypothetical description that a perfect agent with perfect information, looking at our models and inputs from the outside, would give of the program that we are part of.
Maybe the Turing tape never halts, and just keeps computing ever more new “laws of physics” as we keep researching and doing more exotic things, such that there are no “true final ultimate laws”. Of course that could happen. I have no solid evidence either way, so why would I restrict my thinking to the hypothesis that there are? I like keeping my options flexible like that.
So yeah, my definition of that formula is pretty much self-referential, and perhaps not always coherently explained. It’s a bit like CEV in that regard: “whatever we would if …” and so on.
Once it’s all reduced away, all I’m really postulating is the continuing ability of possible agents who make models, and analyze their own models, to point at, frame, mathematically describe, and meta-model the patterns of experimental results, given sufficient intelligence and ability to model things. It’s not nearly as powerfully predictive or groundbreaking as I might have made it sound in earlier comments.
For another comparison, it’s a bit like when I say “my utility function”. Clearly, there might not be a final utility function in my brain: it might be circular, or regress infinitely, or be infinitely self-modifying and self-referential. But by golly, when I say that my best approximation of my utility function values having food much more highly than starving, I’m definitely pointing at and approximating something in there, in that mess of patterns, even if I might not know exactly where I’m pointing.
That “something” is my “true utility function”, even if it would have to be defined with fuzzy self-recursive meta-games and timeless self-determinance or some other crazy shenanigans.
So I guess that’s also roughly what I refer to when I say “reality”.
I’m not really disagreeing. I’m just pointing out that, as you list progressively more speculative models, more and more loosely connected to experiment, the idea of some objective reality becomes progressively less useful, and questions like “but what if the Boltzmann Brains/mathematical universe/many worlds/super-mega crossover/post-utopian colonial alienation is real?” become progressively more nonsensical.
Yet people forget that and seriously discuss questions like these, effectively counting angels on the head of a pin. And, on the other hand, they get a mental block due to the idea of some static objective reality out there, which limits their model space.
These two fallacies are what started me on my way from realism to pragmatism/instrumentalism in the first place.
the idea of some objective reality becomes progressively less useful
Useful for what? Prediction? But realists aren’t using these models to answer the “what input should I expect” question; they are answering other questions, like “what is real” and “what should we value”.
And “nothing” is an answer to “what is real”. What does instrumentalism predict?
If it’s really better or more “true” on some level, I suppose you might predict a superintelligence would self-modify into an anti-realist? Seems unlikely from my realist perspective, at least, so I’d have to update in favour of something.
If it’s really better or more “true” on some level
But if that’s not a predictive level, then instrumentalism is inconsistent: it says that all other non-predictive theories should be rejected for being non-predictive, but that it is itself somehow an exception. This is, of course, parallel to the flaw in Logical Positivism.
Well, I suppose all it would need to persuade is people who don’t already believe it …
More seriously, you’ll have to ask shiminux, because I, as a realist, anticipate this test failing, so naturally I can’t explain why it would succeed.
Huh? I don’t see why the ability to convince people who don’t care about consistency is something that should sway me.
If I had such a persuasive argument, naturally it would already have persuaded me, but my point is that it doesn’t need to persuade people who already agree with it—just the rest of us.
And once you’ve self-modified into an instrumentalist, I guess there are other arguments that will now persuade you—for example, that this hypothetical underlying layer of “reality” has no extra predictive power (at least, I think that’s what shiminux finds persuasive.)