Since my expectations sometimes conflict with my subsequent experiences, I need different names for the thingies that determine my experimental predictions and the thingy that determines my experimental results. I call the former thingies ‘beliefs’, and the latter thingy ‘reality’.
I refuse to postulate an extra “thingy that determines my experimental results”. Occam’s razor and such.
So uhm. How do the experimental results, y’know, happen?
I think I understand everything else. Your position makes perfect sense. Except for that last non-postulate. Perhaps I’m just being obstinate, but there needs to be something to the pattern / regularity.
If I look at a set of models, a set of predictions, a set of experiments, and the corresponding set of experimental results, all as one big blob:
The models led to predictions—predictions about the experimental results—which are themselves part of the model. The experiments were performed according to the model that describes how to test those predictions (I might be wording this a bit confusingly?). But the experimental results… just “are”. They magically are the way they are, for no reason, and they are ontologically basic in the sense that nothing at all ever determines them.
To me, it defies any reasonable logical description, and to my knowledge no possible program could generate this (if the program “randomly” generates the experimental results, then the randomness generator is the cause of the results, and thus is that thingy; and for any observable regularity, the algorithm that causes that regularity in the program’s output is the thingy). Since as far as I can tell no possible logical construct could ever result in a causeless, ontologically basic “experimental result set” that displays regularity and can be predicted and tested, I don’t see how it’s even possible to consistently form a system where there are even models and experiences.
In short, if there is nothing at all whatsoever from which the experimental results arise, not even just a mathematical formula that can be pointed at and called ‘reality’, then this doesn’t even seem like a well-formed mathematically-expressible program, let alone one that is Occam/Solomonoff “simpler” than a well-formed program that implicitly contains a formula for experimental results.
No matter what kind of program you create, no matter how cleverly you spin it or complexify or simplify or reduce it, there will always, by logical necessity, be some subset of it that you can point at and say “Look here! This is what ‘determines’ what experimental results I see and restricts the possible futures! Let’s call this thingy/subset/formula ‘reality’!”
I don’t see any possibility of getting around that requirement unless I assume magic, supernatural entities, wishful thinking, ontologically basic nonlogical entities, or worse.
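The claim above can be made concrete with a toy sketch (purely illustrative; the function names and the particular formula are made up): however a program that emits “experimental results” is written, some identifiable sub-part of it is what fixes those results.

```python
# Toy "universe" program (illustrative only; names are hypothetical).
# However the program is written, some identifiable sub-part of it
# fixes which observations come out -- that sub-part is the subset
# one could label 'reality'.

def hidden_law(t):
    # The subset of the program that determines the results.
    return (3 * t + 1) % 7

def observe(t):
    # Observations "happen" only because hidden_law determines them.
    return hidden_law(t)

observations = [observe(t) for t in range(5)]
print(observations)  # [1, 4, 0, 3, 6]
```

Even if `hidden_law` were replaced by a random number generator, that generator would then be the pointable subset.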
No matter what kind of program you create, no matter how cleverly you spin it or complexify or simplify or reduce it, there will always, by logical necessity, be some subset of it that you can point at and say “Look here! This is what ‘determines’ what experimental results I see and restricts the possible futures! Let’s call this thingy/subset/formula ‘reality’!”
I don’t see any possibility of getting around that requirement unless I assume magic, supernatural entities, wishful thinking, ontologically basic nonlogical entities, or worse.
As far as I can tell, those two paragraphs are pretty much Eliezer’s position on this, and he’s just putting that subset as an arbitrary variable, saying something like “Sure, we might not know said subset of the program or where exactly it is or what computational form it takes, but let’s just have a name for it anyway so we can talk about things more easily”.
So uhm. How do the experimental results, y’know, happen?
Are you trying to solve the question of origin? How did the external reality, that thing that determines the experimental results, in the realist model, y’know, happen?
I discount your musings about “ontological basis”, perhaps uncharitably. Instrumentally, all I care about is making accurate predictions, and the concept of external reality is sometimes useful in that sense, and sometimes it gets in the way.
No matter what kind of program you create, no matter how cleverly you spin it or complexify or simplify or reduce it, there will always, by logical necessity, be some subset of it that you can point at and say “Look here! This is what ‘determines’ what experimental results I see and restricts the possible futures! Let’s call this thingy/subset/formula ‘reality’!”
Uh, not necessarily. I call this clever program, like everything else I think up, a model. If it happens to make accurate predictions I might even call it a good model. Often it is a meta-model, or a meta-meta-model, but a model nonetheless.
I don’t see any possibility of getting around that requirement unless I assume magic, supernatural entities, wishful thinking, ontologically basic nonlogical entities, or worse.
I fail to see a requirement you think I would have to get around. Just some less-than-useful logical construct.
I think it all just finally clicked. Strawman test (hopefully this is a good enough approximation):
You do imagine patterns and formulas, and your model does (or can) contain a (meta^x)-model that we could use and call “reality” and do whatever other realist-like shenanigans, and does describe the experimental results in some way that we could say “this formula, if it ‘really existed’ and the concept of existence is coherent at all, is the cause of my experimental results and the thingy that determines them”.
You just naturally exclude going from there to assuming that the meta-model is “real”, “exists”, or is itself what is external to the models and causes everything; something which for other people requires extra mental effort and does relate to the problem of origin.
Uh, not necessarily. I call this clever program, like everything else I think up, a model. If it happens to make accurate predictions I might even call it a good model. Often it is a meta-model, or a meta-meta-model, but a model nonetheless.
Sure. What I was attempting to say is that if I look at your model of the world, and within this model find a sub-part that happens to be a meta-model of the world like that program, I could also point at a smaller sub-part of that meta-model and say “Within this meta-model that you have in your model of the world, this is the modeled ‘cause’ of your experimental results, they all happen according to this algorithm”.
So now, given that the above is at least a reasonable approximation of your beliefs, the probability that one of us is misinterpreting Eliezer has risen quite considerably.
Personally, I tend to mentally “simplify” my model by saying that the program in question “is” (reality), for purposes of not having to redefine and debate things with people. Sometimes, though, when I encounter people who think “quarks are really real out there and have a real position in a really existing space”, I just get utterly confused. Quarks are just useful models of the interactions in the world. What’s “actually” doing the quark-ing is irrelevant.
So your logic is that there is some fundamental subalgorithm somewhere deep down in the stack of models, and this is what you think makes sense to call external reality? I have at least two issues with this formulation. One is that every model supposedly contains this algorithm; lots of high-level models are polymorphic, so you can replace quarks with bits or wooden blocks and they still hold. The other is that, once you put this algorithm outside the model space, you are tempted to consider other similar algorithms which have no connection with the rest of the models whatsoever, like the mathematical universe. The term “exist” gains a meaning not present in its original instrumental Latin definition: to appear or to stand out. And then we are off the firm ground of what can be tested and into pure unconnected ideas, like the “post-utopian” ones Eliezer so despises, yet apparently implicitly adopts. Or maybe I’m being uncharitable here. He never engaged me on this point.
I think both you and DaFranker might be going a bit too deep down the meta-model rabbit-hole. As far as I understand, when a scientist says “electrons exist”, he does not mean,
These mathematical formulae that I wrote down describe an objective reality with 100% accuracy.
Rather, he’s saying something like,
There must be some reason why all my experiments keep coming out the way they do, and not in some other way. Sure, this could be happening purely by chance, but the probability of this is so tiny as to be negligible. These formulae describe a model of whatever it is that’s supplying my experimental results, and this model predicts future results correctly 99.999999% of the time, so it can’t be entirely wrong.
As far as I understand, you would disagree with the second statement. But, if so, how do you explain the fact that our experimental results are so reliable and consistent? Is this just an ineffable mystery?
I don’t disagree with the second statement, I find parts of it meaningless or tautological. For example:
These formulae describe a model of whatever it is that’s supplying my experimental results
The part in bold is redundant. You would normally say “of Higgs decay” or something to that effect.
, and this model predicts future results correctly 99.999999% of the time, so it can’t be entirely wrong.
The part in bold is tautological. Making accurate predictions is the definition of not being wrong (within the domain of applicability). In that sense Newtonian physics is not wrong, it’s just not as accurate.
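The Newtonian-physics point can be put in numbers (a hedged sketch; the chosen speed and variable names are mine, not the commenter’s): inside its domain of applicability, the “less accurate” theory agrees with the more accurate one to far better than experimental relevance.

```python
import math

c = 299_792_458.0  # speed of light, m/s

def newtonian_ke(m, v):
    # Classical kinetic energy: (1/2) m v^2
    return 0.5 * m * v**2

def relativistic_ke(m, v):
    # Special-relativistic kinetic energy: (gamma - 1) m c^2
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return (gamma - 1.0) * m * c**2

# At 3 km/s -- well inside Newton's domain of applicability -- the
# two theories agree to better than one part in ten thousand.
v = 3000.0
rel_error = abs(newtonian_ke(1.0, v) - relativistic_ke(1.0, v)) / relativistic_ke(1.0, v)
print(rel_error < 1e-4)  # True
```

Push `v` toward `c` and the disagreement grows without bound, which is exactly what “domain of applicability” means here.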
The part in bold is tautological. Making accurate predictions is the definition of not being wrong
The instrumentalist definition. For realists, an accurate theory can still be wrong because it fails to correspond to reality, or posits non-existent entities. For instance, an epicyclic theory of the solar system can be made as accurate as you like.
Making accurate predictions is the definition of not being wrong (within the domain of applicability)
I meant to make a further-reaching statement than that. If we believe that our model approximates the (postulated) thing that is causing our experiments to come out a certain way, then we can use this model to devise novel experiments, seemingly unrelated to the experiments we are doing now; and we could expect these novel experiments to come out the way we predicted, at least on occasion.
For example, we could say, “I have observed this dot of light moving across the sky in a certain way. According to my model, this means that if I were to point my telescope at some other part of sky, we would find a much dimmer dot there, moving in a specific yet different way”.
This is a statement that can only be made if you believe that different patches of the sky are connected, somehow, and if you have a model that describes the entire sky, even the pieces that you haven’t looked at yet.
If different patches of the sky are completely unrelated to each other, the likelihood of you observing what you’d expect is virtually zero, because there are too many possible observations (an infinite number of them, in fact), all equally likely. I would argue that the history of science so far contradicts this assumption of total independence.
In that sense Newtonian physics is not wrong, it’s just not as accurate.
This may be off-topic, but I would agree with this statement. Similarly, the statement “the Earth is flat” is not, strictly speaking, wrong. It works perfectly well if you’re trying to lob rocks over a castle wall. Its inaccuracy is too great, however, to launch satellites into orbit.
So your logic is that there is some fundamental subalgorithm somewhere deep down in the stack of models, and this is what you think makes sense to call external reality?
Sort-of.
I’m saying that there’s a sufficiently fuzzy and inaccurate polymorphic model (or set of models, or meta-description of the requirements and properties for relevant models) of “the universe” that could be created and pointed at as “the laws”; if it were known fully and accurately, computing this algorithm perfectly would in principle let us predict all of the experimental results.
If this theoretical, not-perfectly-known sub-algorithm is a perfect description of all the experimental results ever, then I’m perfectly willing to slap the labels “fundamental” and “reality” on it and call it a day, even though I don’t see why this algorithm would be more “fundamentally existing” than the exact same algorithm with all parameters multiplied by two, or some other algorithm that produces the same experimental results in all possible cases.
The only reason I refer to it in the singular—“the sub-algorithm”—is that I suspect we’ll eventually have a way to express as “an algorithm” the whole space/set/field of possible algorithms that could perfectly predict inputs, if we knew the exact set they belong to. I’m led to believe it’s probably impossible to find this exact set.
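The “all parameters multiplied by two” point above can be sketched directly (illustrative code; both “laws” are invented for the example): two internally different algorithms can be observationally identical, so inputs alone cannot single out one of them as “the” reality.

```python
# Two internally different "laws" with identical observable output
# (made-up example). Doubling an internal parameter and compensating
# elsewhere leaves every prediction unchanged, so no experiment can
# distinguish the two programs from the inside.

def law_a(t):
    internal_state = 2.0 * t      # internal parameter: 2.0
    return internal_state / 2.0   # the observable result

def law_b(t):
    internal_state = 4.0 * t      # all parameters doubled: 4.0
    return internal_state / 4.0   # the same observable result

print(all(law_a(t) == law_b(t) for t in range(1000)))  # True
```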
I find this approach very limiting. There is no indication that you can construct anything like that algorithm. Yet by postulating its existence (ahem), you are forced into a mode of thinking where “there is this thing called reality with some fundamental laws which we can hopefully learn some day”. As opposed to “we can keep refining our models and explain more and more inputs, and discover new and previously unknown inputs and explain them too, and predict more and so on”. Without ever worrying whether some day there is nothing more to discover, because we finally found the holy grail, the ultimate laws of reality. I don’t mind if it’s turtles all the way down.
In fact, in the spirit of QM and as often described in SF/F stories, the mere act of discovery may actually change the “laws”, if you are not careful. Or maybe we can some day do it intentionally, construct our own stack of turtles. Oh, the possibilities! And all it takes is to let go of one outdated idea, which is, like Aristotle’s impetus, ripe for discarding.
I’m not sure it’s as different as all that from shminux’s perspective.
By way of analogy, I know a lot of people who reject the linguistic habit of treating “atheism” as referring to a positive belief in the absence of a deity, and “agnosticism” as referring to the absence of a positive belief in the presence of a deity. They argue that no, both positions are atheist; in the absence of a positive belief in the presence of a deity, one does not believe in a deity, which is the defining characteristic of the set of atheist positions. (Agnosticism, on this view, is the position that the existence of a deity cannot be known, not merely the observation that one does not currently know it. And, as above, on this view that means agnosticism implies atheism.)
If I substitute (reality, non-realism, the claim that reality is unknowable) for (deity, atheism, agnosticism) I get the assertion that the claim that reality is unknowable is a non-realist position. (Which is not to say that it’s specifically an instrumentalist position, but we’re not currently concerned with choosing among different non-realist positions.)
All of that said, none of it addresses the question which has previously been raised, which is how instrumentalism accounts for the at-least-apparently-non-accidental relationship between past inputs, actions, models, and future inputs. That relationship still strikes me as strong evidence for a realist position.
I can’t see much evidence that the people who construe atheism and agnosticism in the way you describe are actually correct. I agree that the no-reality position and the unknowable-reality position could both be considered anti-realist, but they are still substantively different. Deriving no-reality from unknowable-reality always seems like an error to me, but maybe someone has an impressive defense of it.
Well, I certainly don’t want to get into a dispute about what terms like “atheism”, “agnosticism”, “anti-realism”, etc. ought to mean. All I’ll say about that is if the words aren’t being used and interpreted in consistent ways, then using them does not facilitate communication. If the goal is communication, then it’s best not to use those words.
Leaving language aside, I accept that the difference between “there is no reality” and “whether there is a reality is systematically unknowable” is an important difference to you, and I agree that deriving the former from the latter is tricky.
I’m pretty sure it’s not an important difference to shminux. It certainly isn’t an important difference to me… I can’t imagine why I would ever care about which of those two statements is true if at least one of them is.
Well, I certainly don’t want to get into a dispute about what terms like “atheism”, “agnosticism”, “anti-realism”, etc. ought to mean.
I don’t see why not.
All I’ll say about that is if the words aren’t being used and interpreted in consistent ways, then using them does not facilitate communication. If the goal is communication, then it’s best not to use those words.
Or settle their correct meanings using a dictionary, or something.
Leaving language aside, I accept that the difference between “there is no reality” and “whether there is a reality is systematically unknowable” is an important difference to you, and I agree that deriving the former from the latter is tricky.
I’m pretty sure it’s not an important difference to shminux.
If shminux is using arguments for Unknowable Reality as arguments for No Reality, then shminux’s arguments are invalid whatever shminux cares about.
It certainly isn’t an important difference to me… I can’t imagine why I would ever care about which of those two statements is true if at least one of them is.
One seems a lot more far-fetched than the other to me.
Well, I certainly don’t want to get into a dispute about what terms like “atheism”, “agnosticism”, “anti-realism”, etc. ought to mean.
I don’t see why not.
If all goes well in a definitional dispute, at the end of it we have agreed on what meaning to assign to a word. I don’t really care; I’m usually perfectly happy to assign to it whatever meaning my interlocutor does. In most cases, there was some other more interesting question about the world I was trying to get at, which got derailed by a different discussion about the meanings of words. In most of the remaining cases, the discussion about the meanings of words was less valuable to me than silence would have been.
That’s not to say other people need to share my values, though; if you want to join definitional disputes (by referencing a dictionary or something) go right ahead. I’m just opting out.
If shminux is using arguments for Unknowable Reality as arguments for No Reality,
I don’t think he is, though I could be wrong about that.
Agnosticism = believing we can’t know if God exists
Atheism = believing God does not exist
Theism = believing God exists
turtles-all-the-way-down-ism = believing we can’t know what reality is (can’t reach the bottom turtle)
instrumentalism/anti-realism = believing reality does not exist
realism = believing reality exists
Thus anti-realism and realism map to atheism and theism, but agnosticism doesn’t map to infinite-turtle-ism because it says we can’t know if God exists, not what God is.
Agnosticism = believing we can’t know if God exists
Or believing that it’s not a meaningful or interesting question to ask
instrumentalism/anti-realism = believing reality does not exist
That’s quite an uncharitable conflation. Antirealism is believing that reality does not exist. Instrumentalism is believing that reality is a sometimes useful assumption.
Or believing that it’s not a meaningful or interesting question to ask
Those would be ignosticism and apatheism respectively.
That’s quite an uncharitable conflation. Antirealism is believing that reality does not exist. Instrumentalism is believing that reality is a sometimes useful assumption.
Yes, yes, we all know your idiosyncratic definition of “exist”, I was using the standard meaning because I was talking to a realist.
Yeah. The issue here, I gather, has a lot to do with domain-specific knowledge—you’re a physicist; you have a general idea of how physics does not distinguish between, for example, 0 and two worlds of opposite phases which cancel out from our perspective. Which is very different from the naive idea of some sort of computer simulation, where two simulations with opposite signs being summed are of course a very different thing ‘from the inside’ than plain 0. If we start attributing reality to components of the sum in Feynman’s path integral… that’s going to get weird.
You realize that, assuming Feynman’s path integral makes accurate predictions, shminux will attribute to it as much reality as, say, the moon, or your inner experience.
Thanks for the clarification, it helps. An agnostic with respect to God (which is what “agnostic” has come to mean by default) would say both that we can’t know if God exists, and also that we can’t know the nature of God. So I think the analogy still holds.
Right. But! An agnostic with respect to the details of reality—an infinite-turtle-ist—need not be an agnostic with respect to reality, even if an agnostic with respect to reality is also an agnostic with respect to its details (although I’m not sure that follows in any case).
(shrug) Sure. So my analogy only holds between agnostics-about-God (who question the knowability of both the existence and nature of God) and agnostics-about-reality (who question the knowability of both the existence and nature of reality).
As you say, there may well be other people out there, for example those who question the knowability of the details, but not of the existence, of reality. (For a sufficiently broad understanding of “the details” I suspect I’m one of those people, as is almost everyone I know.) I wasn’t talking about them, but I don’t dispute their existence.
I have to admit, this has gotten rarefied enough that I’ve lost track both of your point and my own.
So, yeah, maybe I’m confusing knowing-X-exists with knowing-details-of-X for various Xes, or maybe I’ve tried to respond to a question about (one, the other, just one, both) with an answer about (the other, one, both, just one). I no longer have any clear notion, either of which is the case or why it should matter, and I recommend we let this particular strand of discourse die unless you’re willing to summarize it in its entirety for my benefit.
I predict that these discussions, even among smart, rational people, will go nowhere conclusive until we have a proper theory of self-aware decision making, because that’s what this all hinges on. All the various positions people are taking here are just packaging up the same underlying confusion, which is how not to go off the rails once your model includes yourself.
Not that I’m paying close attention to this particular thread.
And all it takes is to let go of one outdated idea, which is, like Aristotle’s impetus, ripe for discarding.
This is not at all important to your point, but the impetus theory of motion was developed by John Philoponus in the 6th century as an attack on Aristotle’s own theory of motion. It was part of a broadly Aristotelian programme, but it’s not something Aristotle developed. Aristotle himself has only traces of a dynamical theory (the theory being attacked by Philoponus is sort of an off-hand remark), and he concerned himself mostly with what we would probably call kinematics. The Aristotelian principle carried through in Philoponus’ theory is the principle that motion requires the simultaneous action of a mover, which is false with respect to motion but true with respect to acceleration. In fact, if you replace ‘velocity’ with ‘acceleration’ in a certain passage of the Physics, you get F=ma. So we didn’t exactly discard Aristotle’s (or Philoponus’) theory, important precursors as they were to the idea of inertia.
In fact, if you replace ‘velocity’ with ‘acceleration’ in a certain passage of the Physics, you get F=ma.
That kind of replacement seems like a serious type error—velocity is not really anything like acceleration. Like saying that if you replace P with zero, you can prove P = NP.
“we can keep refining our models and explain more and more inputs”
Hm.
On your account, “explaining an input” involves having a most-accurate-model (aka “real world”) which alters in response to that input in some fashion that makes the model even more accurate than it was (that is, better able to predict future inputs). Yes?
If so… does your account then not allow for entering a state where it is no longer possible to improve the predictive power of our most accurate model, such that there is no further input-explanation to be done? If it does… how is that any less limiting than the realist’s view allowing for entering a state where there is no further understanding of reality to be done?
I mean, I recognize that it’s possible to have an instrumentalist account in which no such limitative result applies, just as it’s possible to have a realist account in which no such limitative result applies. But you seem to be saying that there’s something systematically different between instrumentalist and realist accounts here, and I don’t quite see why that should be.
You make a reference a little later on to “mental blocks” that realism makes more likely, and I guess that’s another reference to the same thing, but I don’t quite see what it is that that mental block is blocking, or why an instrumentalist is not subject to equivalent mental blocks.
Does the question make sense? Is it something you can further clarify?
If so… does your account then not allow for entering a state where it is no longer possible to improve the predictive power of our most accurate model, such that there is no further input-explanation to be done?
Maybe you are reading too much into what I said. If your view is that what we try to understand is this external reality, it’s quite a small step to assuming that some day it will be understood in its entirety. This sentiment has been expressed over and over by very smart people, like the proverbial Lord Kelvin’s warning that “physics is almost done”, or Laplacian determinism. If you don’t assume that the road you travel leads to a certain destination, you can still decide that there are no more places to go as your last trail disappears, but it is by no means an obvious conclusion.
If your view is that what we try to understand is this external reality, it’s quite a small step to assuming that some day it will be understood in its entirety.
Well, OK. I certainly agree that this assumption has been made by realists historically. And while I’m not exactly sure it’s a bad thing, I’m willing to treat it as one for the sake of discussion.
That said… I still don’t quite get what the systematic value-difference is. I mean, if my view is instead that what we try to achieve is maximal model accuracy, with no reference to this external reality… then what? Is it somehow a longer step from there to assuming that some day we’ll achieve a perfectly accurate model? If so, why is that? If not, then what have I gained by switching from the goal of “understand external reality in its entirety” to the goal of “achieve a perfectly accurate model”?
If I’m following you at all, it seems you’re arguing in favor of a non-idealist position much more than a non-realist position. That is, if it’s a mistake to “assume that the road you travel leads to a certain destination”, it follows that I should detach from “ultimate”-type goals more generally, whether it’s a realist’s goal of ultimately understanding external reality, or an instrumentalist’s goal of ultimately achieving maximal model accuracy, or some other ontology’s goal of ultimately doing something else.
Have I missed a turn somewhere? Or is instrumentalism somehow better suited to discouraging me from idealism than realism is? Or something else?
Look, I don’t know if I can add much more. What started my deconversion from realism was watching smart people argue about interpretations of QM, Boltzmann brains and other untestable ontologies. After a while these debates started to seem silly to me, so I had to figure out why. Additionally, I wanted to distill the minimum ontology, something which needn’t be a subject of pointless argument, but only of experimental checking. Eventually I decided that external reality is just an assumption, like any other. This seems to work for me, and saves me a lot of worrying about untestables.

Most physicists follow this pragmatic approach, except for a few tenured dudes who can afford to speculate on any topic they like; Max Tegmark and Don Page are more or less famous examples. But few physicists worry about formalizing their ontology of pragmatism. They follow the standard meaning of the terms exist, real, true, etc., and when these terms lead to untestable speculations, their pragmatism takes over and they lose interest, except maybe for some idle chat over a beer. A fine example of compartmentalization.

I’ve been trying to decompartmentalize and see where the pragmatic approach leads, and my interpretation of instrumentalism is the current outcome. It lets me spot early the statements whose implications a pragmatist would eventually ignore, which is quite satisfying. I am not saying that I have finally worked out the One True Ontology, or that I have resolved every issue to my satisfaction, but it’s the best I’ve been able to cobble together. And I am not willing to trade it for a highly compartmentalized version of realism, or the Eliezerish version of many untestable worlds and timeless this or that. YMMV.
But the “turtles all the way down” or the method in which the act of discovery changes the law...
Why can’t that also be modeled? Even if the model is self-modifying meta-recursive turtle-stack infinite “nonsense”, there probably exists some way to describe it, model it, understand it, or at least point towards it.
This very “pointing towards it” is what I’m doing right now. I postulate that no matter the form it takes, even if it seems logically nonsensical, there’s a model which can explain the results proportionally to how much we understand about it (we may end up never being able to perfectly understand it).
Currently, the best fuzzy picture of that model, by my pinpointing of what-I’m-referring-to, is precisely what you’ve just described:
“we can keep refining our models and explain more and more inputs, and discover new and previously unknown inputs and explain them too, and predict more and so on”.
That’s what I’m pointing at. I don’t care either how many turtle stacks or infinities or regresses or recursions or polymorphic interfaces or variables or volatilities there are. The hypothetical description that a perfect agent with perfect information looking at our models and inputs from the outside would give of the program that we are part of is the “algorithm”.
Maybe the Turing tape never halts, and just keeps computing more and more new “laws of physics” as we research on and on and do more exotic things, such that there are no “true final ultimate laws”. Of course that could happen. I have no solid evidence either way, so why would I restrict my thinking to the hypothesis that there is a final set? I like flexibility in options like that.
So yeah, my definition of that formula is pretty much self-referential and perhaps not always coherently explained. It’s a bit like CEV in that regard, “whatever we would if …” and so on.
Once it’s all reduced away, all I’m really postulating is the continuing ability of possible agents who make models and analyze their own models to point at, frame, describe mathematically, and meta-modelize the patterns of experimental results, given sufficient intelligence and ability to model things. It’s not nearly as powerfully predictive or groundbreaking as I might have made it sound in earlier comments.
For more comparisons, it’s a bit like when I say “my utility function”. Clearly, there might not be a final utility function in my brain, it might be circular, or it might regress infinitely, or be infinitely self-modifying and self-referential, but by golly when I say that my best approximation of my utility function values having food much more highly than starving, I’m definitely pointing at and approximating something in there in that mess of patterns, even if I might not know exactly where I’m pointing at.
That “something” is my “true utility function”, even if it would have to be defined with fuzzy self-recursive meta-games and timeless self-determinance or some other crazy shenanigans.
So I guess that’s about also what I refer to when I say “reality”.
I’m not really disagreeing. I’m just pointing out that, as you list progressively more and more speculative models, looser and looser connected to the experiment, the idea of some objective reality becomes progressively less useful, and the questions like “but what if the Boltzmann Brains/mathematical universe/many worlds/super-mega crossover/post-utopian colonial alienation is real?” become progressively more nonsensical.
Yet people forget that and seriously discuss questions like that, effectively counting angels on the head of a pin. And, on the other hand, they get this mental block due to the idea of some static objective reality out there, limiting their model space.
These two fallacies is what started me on my way from realism to pragmatism/instrumentalism in the first place.
the idea of some objective reality becomes progressively less useful
Useful for what? Prediction? But realists arent using these models to answer the “what input should I expect” question; they are answering other questions, like “what is real” and “what should we value”.
And “nothing” is an answer to “what is real”. What does instrumentalism predict?
If it’s really better or more “true” on some level, I suppose you might predict a superintelligence would self-modify into an anti-realist? Seems unlikely from my realist perspective, at least, so I’d have to update in favour of something.
If it’s really better or more “true” on some level
But if that’s no a predictive level, then instrumentalism is inconsistent. it is saying that all other non-predictive
theories should be rejected for being non-predictive, but that it is itself somehow an exception. This is of course parallel to the flaw in Logical Positivism.
If I had such a persuasive argument, naturally it would already have persuaded me, but my point is that it doesn’t need to persuade people who already agree with it—just the rest of us.
And once you’ve self-modified into an instrumentalist, I guess there are other arguments that will now persuade you—for example, that this hypothetical underlying layer of “reality” has no extra predictive power (at least, I think that’s what shiminux finds persuasive.)
The disagreement starts here:
I refuse to postulate an extra “thingy that determines my experimental results”. Occam’s razor and such.
So uhm. How do the experimental results, y’know, happen?
I think I understand everything else. Your position makes perfect sense. Except for that last non-postulate. Perhaps I’m just being obstinate, but there needs to be something to the pattern / regularity.
If I look at a set of models, a set of predictions, a set of experiments, and the corresponding set of experimental results, all as one big blob:
The models led to predictions—predictions about the experimental results, which are part of the model. The experiments were made according to the model that describes how to test those predictions (I might be wording this a bit confusingly?). But the experimental results… just “are”. They magically are like they are, for no reason, and they are ontologically basic in the sense that nothing at all ever determines them.
To me, it defies any reasonable logical description, and to my knowledge there does not exist a possible program that would generate this (i.e. if the program “randomly” generates the experimental results, then the randomness generator is the cause of the results, and thus is that thingy, and for any observable regularity, the algorithm that causes that regularity in the resulting program output is the thingy). Since as far as I can tell there is no possible logical construct that could ever result in a causeless, ontologically basic “experimental result set” that displays regularity and can be predicted and tested, I don’t see how it’s even possible to consistently form a system where there are even models and experiences.
In short, if there is nothing at all whatsoever from which the experimental results arise, not even just a mathematical formula that can be pointed at and called ‘reality’, then this doesn’t even seem like a well-formed mathematically-expressible program, let alone one that is Occam/Solomonoff “simpler” than a well-formed program that implicitly contains a formula for experimental results.
No matter what kind of program you create, no matter how cleverly you spin it or complexify or simplify or reduce it, there will always, by logical necessity, be some subset of it that you can point at and say “Look here! This is what ‘determines’ what experimental results I see and restricts the possible futures! Let’s call this thingy/subset/formula ‘reality’!”
I don’t see any possibility of getting around that requirement unless I assume magic, supernatural entities, wishful thinking, ontologically basic nonlogical entities, or worse.
As far as I can tell, those two paragraphs are pretty much Eliezer’s position on this, and he’s just putting that subset as an arbitrary variable, saying something like “Sure, we might not know said subset of the program or where exactly it is or what computational form it takes, but let’s just have a name for it anyway so we can talk about things more easily”.
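The claim above—that any program generating results necessarily contains a pointable sub-part that determines them—can be made concrete with a toy sketch. This is purely illustrative: the function name, seed, and update rule are invented for the example, not anything from the thread.

```python
import random

def toy_universe(seed, steps):
    """A hypothetical 'program that generates experimental results'."""
    rng = random.Random(seed)  # even 'random' results have a cause: this generator
    state = 0.0
    results = []
    for _ in range(steps):
        # The update rule below is the sub-part one can point at and call
        # 'reality': it, together with the rng, determines every result.
        state = 0.5 * state + rng.gauss(0, 1)
        results.append(round(state, 3))
    return results

# Identical sub-parts yield identical 'experimental results':
assert toy_universe(42, 10) == toy_universe(42, 10)
```

Even here, where the outputs look noisy, the regularity is fully traceable to the (seed, update rule) subset of the program—there is no causeless result.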
Are you trying to solve the question of origin? How did the external reality, that thing that determines the experimental results, in the realist model, y’know, happen?
I discount your musings about “ontological basis”, perhaps uncharitably. Instrumentally, all I care about is making accurate predictions, and the concept of external reality is sometimes useful in that sense, and sometimes it gets in the way.
Uh, not necessarily. I call this clever program, like everything else I think up, a model. If it happens to make accurate predictions I might even call it a good model. Often it is a meta-model, or a meta-meta-model, but a model nonetheless.
I fail to see a requirement you think I would have to get around. Just some less-than-useful logical construct.
I think it all just finally clicked. Strawman test (hopefully this is a good enough approximation):
You do imagine patterns and formulas, and your model does (or can) contain a (meta^x)-model that we could use and call “reality” and do whatever other realist-like shenanigans, and does describe the experimental results in some way that we could say “this formula, if it ‘really existed’ and the concept of existence is coherent at all, is the cause of my experimental results and the thinghy that determines them”.
You just naturally exclude going from there to assuming that the meta-model is “real”, “exists”, or is itself what is external to the models and causes everything; something which for other people requires extra mental effort and does relate to the problem of origin.
Sure. What I was attempting to say is that if I look at your model of the world, and within this model find a sub-part that happens to be a meta-model of the world like that program, I could also point at a smaller sub-part of that meta-model and say “Within this meta-model that you have in your model of the world, this is the modeled ‘cause’ of your experimental results, they all happen according to this algorithm”.
So now, given that the above is at least a reasonable approximation of your beliefs, the odds that one of us is misinterpreting Eliezer have risen quite considerably.
Personally, I tend to mentally “simplify” my model by saying that the program in question “is” (reality), for purposes of not having to redefine and debate things with people. Sometimes, though, when I encounter people who think “quarks are really real out there and have a real position in a really existing space”, I just get utterly confused. Quarks are just useful models of the interactions in the world. What’s “actually” doing the quark-ing is irrelevant.
Natural language is so bad at metaphysics, IME =\
So your logic is that there is some fundamental subalgorithm somewhere deep down in the stack of models, and this is what you think makes sense to call external reality? I have at least two issues with this formulation. One is that every model supposedly contains this algorithm. Lots of high-level models are polymorphic, you can replace quarks with bits or wooden blocks and they still hold. The other is that, once you put this algorithm outside the model space, you are tempted to consider other similar algorithms which have no connection with the rest of the models whatsoever, like the mathematical universe. The term “exist” gains a meaning not present in its original instrumental Latin definition: to appear or to stand out. And then we are off the firm ground of what can be tested and into the pure unconnected ideas, like “post-utopian” Eliezer so despises, yet apparently implicitly adopts. Or maybe I’m being uncharitable here. He never engaged me on this point.
I think both you and DaFranker might be going a bit too deep down the meta-model rabbit-hole. As far as I understand, when a scientist says “electrons exist”, he does not mean,
Rather, he’s saying something like,
As far as I understand, you would disagree with the second statement. But, if so, how do you explain the fact that our experimental results are so reliable and consistent? Is this just an ineffable mystery?
I don’t disagree with the second statement, I find parts of it meaningless or tautological. For example:
The part in bold is redundant. You would normally say “of Higgs decay” or something to that effect.
The part in bold is tautological. Accurate predictions is the definition of not being wrong (within the domain of applicability). In that sense Newtonian physics is not wrong, it’s just not as accurate.
The instrumentalist definition. For realists, an accurate theory can still be wrong because it fails to correspond to reality, or posits non-existent entities. For instance, an epicyclic theory of the solar system can be made as accurate as you like.
I meant to make a further-reaching statement than that. If we believe that our model approximates that (postulated) thing that is causing our experiments to come out a certain way, then we can use this model to devise novel experiments, which are seemingly unrelated to the experiments we are doing now; and we could expect these novel experiments to come out the way we expected, at least on occasion.
For example, we could say, “I have observed this dot of light moving across the sky in a certain way. According to my model, this means that if I were to point my telescope at some other part of sky, we would find a much dimmer dot there, moving in a specific yet different way”.
This is a statement that can only be made if you believe that different patches of the sky are connected, somehow, and if you have a model that describes the entire sky, even the pieces that you haven’t looked at yet.
If different patches of the sky are completely unrelated to each other, the likelihood of you observing what you’d expect is virtually zero, because there are too many possible observations (an infinite number of them, in fact), all equally likely. I would argue that the history of science so far contradicts this assumption of total independence.
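The argument can be put in miniature code form. This is a contrived sketch with an invented linear “law”, not anything from the thread: fit a regularity on one observed patch of data, then check a prediction well outside the observed range.

```python
# 'Observations' from the patch of sky we have looked at, generated by
# an invented law y = 3x + 1 that the observer does not know in advance.
observed = [(x, 3 * x + 1) for x in range(10)]

# Crude fit: estimate slope and intercept from the observed patch only.
slope = (observed[-1][1] - observed[0][1]) / (observed[-1][0] - observed[0][0])
intercept = observed[0][1]

# Predict a point far outside the observed range...
prediction = slope * 50 + intercept
# ...and it matches, because the unseen patch obeys the same regularity.
assert prediction == 3 * 50 + 1
```

If the patches were truly independent, a fit on one patch would carry no information about another; that extrapolation works at all is exactly the regularity being pointed at.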
This may be off-topic, but I would agree with this statement. Similarly, the statement “the Earth is flat” is not, strictly speaking, wrong. It works perfectly well if you’re trying to lob rocks over a castle wall. Its inaccuracy is too great, however, to launch satellites into orbit.
Sort-of.
I’m saying that there’s a sufficiently fuzzy and inaccurate polymorphic model (or sets of models, or meta-description of the requirements and properties for relevant models,) of “the universe” that could be created and pointed at as “the laws”, which if known fully and accurately could be “computed” or simulated or something and computing this algorithm perfectly would in-principle let us predict all of the experimental results.
If this theoretical, not-perfectly-known sub-algorithm is a perfect description of all the experimental results ever, then I’m perfectly willing to slap the labels “fundamental” and “reality” on it and call it a day, even though I don’t see why this algorithm would be more “fundamentally existing” than the exact same algorithm with all parameters multiplied by two, or some other algorithm that produces the same experimental results in all possible cases.
The only reason I refer to it in the singular—“the sub-algorithm”—is because I suspect we’ll eventually have a way to write and express as “an algorithm” the whole space/set/field of possible algorithms that could perfectly predict inputs, if we knew the exact set that those are in. I’m led to believe it’s probably impossible to find this exact set.
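The “same algorithm with all parameters multiplied by two” point is easy to demonstrate. Below are two deliberately different toy “laws” (invented for illustration) that are observationally equivalent—they predict exactly the same results for every input, so no experiment can privilege one of them as the “fundamentally existing” algorithm.

```python
def law_a(n):
    # One parameterization of the toy 'laws of physics'.
    return [2 * k + 1 for k in range(n)]

def law_b(n):
    # The 'same algorithm with all parameters multiplied by two',
    # rescaled on output: (4k + 2) / 2 gives the same values.
    return [(4 * k + 2) // 2 for k in range(n)]

# Every 'experiment' (each input n) yields identical results under both laws:
assert all(law_a(n) == law_b(n) for n in range(200))
```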
I find this approach very limiting. There is no indication that you can construct anything like that algorithm. Yet by postulating its existence (ahem), you are forced into a mode of thinking where “there is this thing called reality with some fundamental laws which we can hopefully learn some day”. As opposed to “we can keep refining our models and explain more and more inputs, and discover new and previously unknown inputs and explain them too, and predict more and so on”. Without ever worrying if some day there is nothing more to discover, because we finally found the holy grail, the ultimate laws of reality. I don’t mind if it’s turtles all the way down.
In fact, in the spirit of QM and as often described in SF/F stories, the mere act of discovery may actually change the “laws”, if you are not careful. Or maybe we can some day do it intentionally, construct our own stack of turtles. Oh, the possibilities! And all it takes is to let go of one outdated idea, which is, like Aristotle’s impetus, ripe for discarding.
The claim that reality may be ultimately unknowable or non-algorithmic is different to the claim you have made elsewhere, that there is no reality.
I’m not sure it’s as different as all that from shminux’s perspective.
By way of analogy, I know a lot of people who reject the linguistic habit of treating “atheism” as referring to a positive belief in the absence of a deity, and “agnosticism” as referring to the absence of a positive belief in the presence of a deity. They argue that no, both positions are atheist; in the absence of a positive belief in the presence of a deity, one does not believe in a deity, which is the defining characteristic of the set of atheist positions. (Agnosticism, on this view, is the position that the existence of a deity cannot be known, not merely the observation that one does not currently know it. And, as above, on this view that means agnosticism implies atheism.)
If I substitute (reality, non-realism, the claim that reality is unknowable) for (deity, atheism, agnosticism) I get the assertion that the claim that reality is unknowable is a non-realist position. (Which is not to say that it’s specifically an instrumentalist position, but we’re not currently concerned with choosing among different non-realist positions.)
All of that said, none of it addresses the question which has previously been raised, which is how instrumentalism accounts for the at-least-apparently-non-accidental relationship between past inputs, actions, models, and future inputs. That relationship still strikes me as strong evidence for a realist position.
I can’t see much evidence that the people who construe atheism and agnosticism in the way you describe are actually correct. I agree that the no-reality position and the unknowable-reality position could both be considered anti-realist, but they are still substantively different. Deriving no-reality from unknowable-reality always seems like an error to me, but maybe someone has an impressive defense of it.
Well, I certainly don’t want to get into a dispute about what terms like “atheism”, “agnosticism”, “anti-realism”, etc. ought to mean. All I’ll say about that is if the words aren’t being used and interpreted in consistent ways, then using them does not facilitate communication. If the goal is communication, then it’s best not to use those words.
Leaving language aside, I accept that the difference between “there is no reality” and “whether there is a reality is systematically unknowable” is an important difference to you, and I agree that deriving the former from the latter is tricky.
I’m pretty sure it’s not an important difference to shminux. It certainly isn’t an important difference to me… I can’t imagine why I would ever care about which of those two statements is true if at least one of them is.
I don’t see why not.
Or settle their correct meanings using a dictionary, or something.
If shminux is using arguments for Unknowable Reality as arguments for No Reality, then shminux’s arguments are invalid whatever shminux cares about.
One seems a lot more far-fetched than the other to me.
If all goes well in a definitional dispute, at the end of it we have agreed on what meaning to assign to a word. I don’t really care; I’m usually perfectly happy to assign to it whatever meaning my interlocutor does. In most cases, there was some other more interesting question about the world I was trying to get at, which got derailed by a different discussion about the meanings of words. In most of the remaining cases, the discussion about the meanings of words was less valuable to me than silence would have been.
That’s not to say other people need to share my values, though; if you want to join definitional disputes (by referencing a dictionary or something) go right ahead. I’m just opting out.
I don’t think he is, though I could be wrong about that.
Pretty sure you mixed up “we can’t know the details of reality” with “we can’t know if reality exists”.
That would be interesting, if true.
I have no coherent idea how you conclude that from what I said, though.
Can you unpack your reasoning a little?
Sure.
Agnosticism = believing we can’t know if God exists
Atheism = believing God does not exist
Theism = believing God exists
turtles-all-the-way-down-ism = believing we can’t know what reality is (can’t reach the bottom turtle)
instrumentalism/anti-realism = believing reality does not exist
realism = believing reality exists
Thus anti-realism and realism map to atheism and theism, but agnosticism doesn’t map to infinite-turtle-ism because it says we can’t know if God exists, not what God is.
Or believing that it’s not a meaningful or interesting question to ask
That’s quite an uncharitable conflation. Antirealism is believing that reality does not exist. Instrumentalism is believing that reality is a sometimes useful assumption.
Those would be ignosticism and apatheism respectively.
Yes, yes, we all know your idiosyncratic definition of “exist”, I was using the standard meaning because I was talking to a realist.
Yeah. The issue here, I gather, has a lot to do with domain-specific knowledge—you’re a physicist, so you have a general idea of how physics does not distinguish between, for example, 0 and two worlds of opposite phases which cancel out from our perspective. Which is way different from the naive idea of some sort of computer simulation, where of course two simulations with opposite signs being summed are a very different thing ‘from the inside’ from plain 0. If we start attributing reality to components of the sum in Feynman’s path integral… that’s going to get weird.
You realize that, assuming Feynman’s path integral makes accurate predictions, shminux will attribute to it as much reality as, say, the moon, or your inner experience.
The issue is with all the parts of it, which include your great grandfather’s ghost, twice, with opposite phases, looking over your shoulder.
Since I am not a quantum physicist, I can’t really respond to your objections, and in any case I don’t subscribe to shminux’s peculiar philosophy.
Thanks for the clarification, it helps.
An agnostic with respect to God (which is what “agnostic” has come to mean by default) would say both that we can’t know if God exists, and also that we can’t know the nature of God. So I think the analogy still holds.
Right. But! An agnostic with respect to the details of reality—an infinite-turtle-ist—need not be an agnostic with respect to reality, even if an agnostic with respect to reality is also an agnostic with respect to its details (although I’m not sure if that follows in any case.)
(shrug) Sure. So my analogy only holds between agnostics-about-God (who question the knowability of both the existence and nature of God) and agnostics-about-reality (who question the knowability of both the existence and nature of reality).
As you say, there may well be other people out there, for example those who question the knowability of the details, but not of the existence, of reality. (For a sufficiently broad understanding of “the details” I suspect I’m one of those people, as is almost everyone I know.) I wasn’t talking about them, but I don’t dispute their existence.
Absolutely, but that’s not what shminux and PrawnOfFate were talking about, is it?
I have to admit, this has gotten rarefied enough that I’ve lost track both of your point and my own.
So, yeah, maybe I’m confusing knowing-X-exists with knowing-details-of-X for various Xes, or maybe I’ve tried to respond to a question about (one, the other, just one, both) with an answer about (the other, one, both, just one). I no longer have any clear notion, either of which is the case or why it should matter, and I recommend we let this particular strand of discourse die unless you’re willing to summarize it in its entirety for my benefit.
I predict that these discussions, even among smart, rational people will go nowhere conclusive until we have a proper theory of self-aware decision making, because that’s what this all hinges on. All the various positions people are taking in this are just packaging up the same underlying confusion, which is how not to go off the rails once your model includes yourself.
Not that I’m paying close attention to this particular thread.
This is not at all important to your point, but the impetus theory of motion was developed by John Philoponus in the 6th century as an attack on Aristotle’s own theory of motion. It was part of a broadly Aristotelian programme, but it’s not something Aristotle developed. Aristotle himself has only traces of a dynamical theory (the theory being attacked by Philoponus is sort of an off-hand remark), and he concerned himself mostly with what we would probably call kinematics. The Aristotelian principle carried through in Philoponus’ theory is the principle that motion requires the simultaneous action of a mover, which is false with respect to motion but true with respect to acceleration. In fact, if you replace ‘velocity’ with ‘acceleration’ in a certain passage of the Physics, you get F=ma. So we didn’t exactly discard Aristotle’s (or Philoponus’) theory, important precursors as they were to the idea of inertia.
That kind of replacement seems like a serious type error—velocity is not really anything like acceleration. Like saying that if you replace P with zero, you can prove P = NP.
That it’s a type error is clear enough (I don’t know if it’s a serious one under an atmosphere). But what follows from that?
Hm.
On your account, “explaining an input” involves having a most-accurate-model (aka “real world”) which alters in response to that input in some fashion that makes the model even more accurate than it was (that is, better able to predict future inputs). Yes?
If so… does your account then not allow for entering a state where it is no longer possible to improve the predictive power of our most accurate model, such that there is no further input-explanation to be done? If it does… how is that any less limiting than the realist’s view allowing for entering a state where there is no further understanding of reality to be done?
I mean, I recognize that it’s possible to have an instrumentalist account in which no such limitative result applies, just as it’s possible to have a realist account in which no such limitative result applies. But you seem to be saying that there’s something systematically different between instrumentalist and realist accounts here, and I don’t quite see why that should be.
You make a reference a little later on to “mental blocks” that realism makes more likely, and I guess that’s another reference to the same thing, but I don’t quite see what it is that that mental block is blocking, or why an instrumentalist is not subject to equivalent mental blocks.
Does the question make sense? Is it something you can further clarify?
Maybe you are reading too much into what I said. If your view is that what we try to understand is this external reality, it’s quite a small step to assuming that some day it will be understood in its entirety. This sentiment has been expressed over and over by very smart people, like the proverbial Lord Kelvin’s warning that “physics is almost done”, or Laplacian determinism. If you don’t assume that the road you travel leads to a certain destination, you can still decide that there are no more places to go as your last trail disappears, but it is by no means an obvious conclusion.
Well, OK.
I certainly agree that this assumption has been made by realists historically.
And while I’m not exactly sure it’s a bad thing, I’m willing to treat it as one for the sake of discussion.
That said… I still don’t quite get what the systematic value-difference is.
I mean, if my view is instead that what we try to achieve is maximal model accuracy, with no reference to this external reality… then what? Is it somehow a longer step from there to assuming that some day we’ll achieve a perfectly accurate model?
If so, why is that?
If not, then what have I gained by switching from the goal of “understand external reality in its entirety” to the goal of “achieve a perfectly accurate model”?
If I’m following you at all, it seems you’re arguing in favor of a non-idealist position much more than a non-realist position. That is, if it’s a mistake to “assume that the road you travel leads to a certain destination”, it follows that I should detach from “ultimate”-type goals more generally, whether it’s a realist’s goal of ultimately understanding external reality, or an instrumentalist’s goal of ultimately achieving maximal model accuracy, or some other ontology’s goal of ultimately doing something else.
Have I missed a turn somewhere?
Or is instrumentalism somehow better suited to discouraging me from idealism than realism is?
Or something else?
Look, I don’t know if I can add much more. What started my deconversion from realism was watching smart people argue about interpretations of QM, Boltzmann brains and other untestable ontologies. After a while these debates started to seem silly to me, so I had to figure out why. Additionally, I wanted to distill the minimum ontology, something which needn’t be a subject of pointless argument, but only of experimental checking. Eventually I decided that external reality is just an assumption, like any other. This seems to work for me, and saves me a lot of worrying about untestables.
Most physicists follow this pragmatic approach, except for a few tenured dudes who can afford to speculate on any topic they like. Max Tegmark and Don Page are more or less famous examples. But few physicists worry about formalizing their ontology of pragmatism. They follow the standard meaning of the terms exist, real, true, etc., and when these terms lead to untestable speculations, their pragmatism takes over and they lose interest, except maybe for some idle chat over a beer. A fine example of compartmentalization.
I’ve been trying to decompartmentalize and see where the pragmatic approach leads, and my interpretation of instrumentalism is the current outcome. It lets me spot early many statements whose implications a pragmatist would eventually ignore, which is quite satisfying. I am not saying that I have finally worked out the One True Ontology, or that I have resolved every issue to my satisfaction, but it’s the best I’ve been able to cobble together. But I am not willing to trade it for a highly compartmentalized version of realism, or the Eliezerish version of many untestable worlds and timeless this or that. YMMV.
(shrug) OK, I’m content to leave this here, then. Thanks for your time.
So...what is the point of caring about prediction?
But the “turtles all the way down” or the method in which the act of discovery changes the law...
Why can’t that also be modeled? Even if the model is self-modifying meta-recursive turtle-stack infinite “nonsense”, there probably exists some way to describe it, model it, understand it, or at least point towards it.
This very “pointing towards it” is what I’m doing right now. I postulate that no matter the form it takes, even if it seems logically nonsensical, there’s a model which can explain the results proportionally to how much we understand about it (we may end up being never able to perfectly understand it).
Currently, the best fuzzy picture of that model, by my pinpointing of what-I’m-referring-to, is precisely what you’ve just described:
“we can keep refining our models and explain more and more inputs, and discover new and previously unknown inputs and explain them too, and predict more and so on”.
That’s what I’m pointing at. I don’t care either how many turtle stacks or infinities or regresses or recursions or polymorphic interfaces or variables or volatilities there are. The hypothetical description that a perfect agent with perfect information looking at our models and inputs from the outside would give of the program that we are part of is the “algorithm”.
Maybe the Turing tape never halts, and just keeps computing on and on more new “laws of physics” as we research on and on and do more exotic things, such that there are no “true final ultimate laws”. Of course that could happen. I have no solid evidence either way, so why would I restrict my thinking to the hypothesis that there are? I like flexibility in options like that.
So yeah, my definition of that formula is pretty much self-referential and perhaps not always coherently explained. It’s a bit like CEV in that regards, “whatever we would if …” and so on.
Once all reduced away, all I’m really postulating is the continuing ability of possible agents who make models and analyze their own models to point at and frame and describe mathematically and meta-modelize the patterns of experimental results, given sufficient intelligence and ability to model things. It’s not nearly as powerfully predictive or groundbreaking as I might have made it sound in earlier comments.
For more comparisons, it’s a bit like when I say “my utility function”. Clearly, there might not be a final utility function in my brain, it might be circular, or it might regress infinitely, or be infinitely self-modifying and self-referential, but by golly when I say that my best approximation of my utility function values having food much more highly than starving, I’m definitely pointing at and approximating something in there in that mess of patterns, even if I might not know exactly where I’m pointing at.
That “something” is my “true utility function”, even if it would have to be defined with fuzzy self-recursive meta-games and timeless self-determinance or some other crazy shenanigans.
So I guess that’s about also what I refer to when I say “reality”.
I’m not really disagreeing. I’m just pointing out that, as you list progressively more speculative models, more and more loosely connected to experiment, the idea of some objective reality becomes progressively less useful, and questions like “but what if the Boltzmann Brains/mathematical universe/many worlds/super-mega crossover/post-utopian colonial alienation is real?” become progressively more nonsensical.
Yet people forget that and seriously discuss questions like that, effectively counting angels on the head of a pin. And, on the other hand, they get this mental block due to the idea of some static objective reality out there, limiting their model space.
These two fallacies are what started me on my way from realism to pragmatism/instrumentalism in the first place.
Useful for what? Prediction? But realists aren’t using these models to answer the “what input should I expect” question; they are answering other questions, like “what is real” and “what should we value”.
And “nothing” is an answer to “what is real”. What does instrumentalism predict?
If it’s really better or more “true” on some level, I suppose you might predict a superintelligence would self-modify into an anti-realist? Seems unlikely from my realist perspective, at least, so I’d have to update in favour of something.
But if that’s not a predictive claim, then instrumentalism is inconsistent: it is saying that all other non-predictive theories should be rejected for being non-predictive, but that it is itself somehow an exception. This is of course parallel to the flaw in Logical Positivism.
Well, I suppose all it would need to persuade is people who don’t already believe it …
More seriously, you’ll have to ask shiminux, because I, as a realist, anticipate this test failing, so naturally I can’t explain why it would succeed.
Huh? I don’t see why the ability to convince people who don’t care about consistency is something that should sway me.
If I had such a persuasive argument, naturally it would already have persuaded me, but my point is that it doesn’t need to persuade people who already agree with it—just the rest of us.
And once you’ve self-modified into an instrumentalist, I guess there are other arguments that will now persuade you—for example, that this hypothetical underlying layer of “reality” has no extra predictive power (at least, I think that’s what shiminux finds persuasive.)