The relevance of Porter’s physics beliefs is that any reader who disagrees with Porter’s premises but agrees with the premises used in an article can gain little additional information about the quality of the article by learning that Porter is not convinced by it. I.e. whatever degree of authority Mitchell Porter’s status grants goes (approximately) in the direction of persuading the reader to adopt those different premises.
In this way mentioning Porter’s beliefs is distinctly different from mentioning the people that you now bring up:
For the other point, Scott Aaronson doesn’t seem convinced either. Robin Hanson, while himself (it seems) an MWI believer, doesn’t appear to think that it’s so conclusively settled.
The relevance of Porter’s physics beliefs is that any reader who disagrees with Porter’s premises but agrees with the premises used in an article can gain little additional information about the quality of the article by learning that Porter is not convinced by it. I.e. whatever degree of authority Mitchell Porter’s status grants goes (approximately) in the direction of persuading the reader to adopt those different premises.
What one can learn is that the allegedly ‘settled’ and ‘solved’ question is far from settled and solved, and is a matter of opinion as of now. This also goes for qualia and the like; we haven’t reduced them to anything, merely asserted that we have.
It extends all the way up, competence-wise—see Roger Penrose.
It’s fine to believe in MWI if that’s where your philosophy falls; it’s another thing entirely to argue that belief in MWI is independent of priors and a philosophical stance, and yet another to argue that people fail to be swayed by a very biased presentation of the issue, one which omits every single point that goes in favour of e.g. non-realism, because they are too irrational or too stupid.
No, that set of posts goes on at some length about how MWI has not yet provided a good derivation of the Born probabilities.
But I think it does not do justice to what a huge deal the Born probabilities are. The Born probabilities are the way we use quantum mechanics to make predictions, so saying “MWI has not yet provided a good derivation of the Born probabilities” is equivalent to “MWI does not yet make accurate predictions.” I’m not sure that’s clear to people who read the sequences but don’t use quantum mechanics regularly.
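For readers who don’t use QM regularly, the rule in question, in its standard textbook form, is just:

$$\Pr(\text{outcome } i) \;=\; \bigl|\langle i \mid \psi \rangle\bigr|^{2}$$

Every experimental prediction of quantum mechanics ultimately passes through a statement of this form, which is why “no good derivation of the Born probabilities” cashes out as “no good derivation of the predictions.”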
Also, by omitting the wide variety of non-Copenhagen interpretations (consistent histories, transactional, Bohm, stochastic modifications to Schroedinger, etc.) the reader is led to believe that the alternative to Copenhagen-collapse is many worlds, so they won’t use the absence of Born probabilities in many worlds to update towards one of the many non-Copenhagen alternatives.
Note that the Born probabilities really obviously have something to do with the unitarity of QM, while no single-world interpretation is going to have this be anything but a random contingent fact. The unitarity of QM means that integral-squared-modulus quantifies the “amount of causal potency” or “amount of causal fluid” or “amount of conserved real stuff” in a blob of the wavefunction. It would be like discovering that your probability of ending up in a computer corresponded to how large the computer was. You could imagine that God arbitrarily looked over the universe and destroyed all but one computer with probability proportional to its size, but this would be unlikely. It would be much more likely (under circumstances analogous to ours) to guess that the size of the computer had something to do with the amount of person in it.
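To spell out the connection being gestured at (a one-line sketch, nothing more): unitary evolution is exactly the condition that conserves total squared modulus,

$$|\psi'\rangle = U|\psi\rangle,\quad U^\dagger U = \mathbf{1} \;\;\Longrightarrow\;\; \langle\psi'|\psi'\rangle = \langle\psi|U^\dagger U|\psi\rangle = \langle\psi|\psi\rangle ,$$

so integral-squared-modulus is the one quantity the dynamics itself conserves, which is what makes it a natural candidate for “amount of conserved real stuff” in a blob.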
The problems with Copenhagen are fundamentally one-world problems and they go along with any one-world theory. If I honestly believed that the only reason the QM sequence wasn’t convincing was that I didn’t go through every single one-world theory to refute them separately, I could try to write separate posts for RQM, Bohm, and so on, but I’m not convinced that this is the case. Any single-world theory needs either spooky action at a distance, or really awful amateur epistemology plus spooky action at a distance, and there’s just no reason to even hypothesize single-world theories in the first place.
(I’m not sure I have time to write the post about Relational Special Relativity in which length and time just aren’t the same for all observers and so we don’t have to suppose that Minkowskian spacetime is objectively real, and anyway the purpose of a theory is to tell us how long things are so there’s no point in a theory which doesn’t say that, and those silly Minkowskians can’t explain how much subjective time things seem to take except by waving their hands about how the brain contains some sort of hypothetical computer in which computing elements complete cycles in Minkowskian intervals, in contrast to the proper ether theory in which the amount of conscious time that passes clearly corresponds to the Lorentzian rule for how much time is real relative to a given vantage point...)
The problems with Copenhagen are fundamentally one-world problems and they go along with any one-world theory. If I honestly believed that the only reason the QM sequence wasn’t convincing was that I didn’t go through every single one-world theory to refute them separately, I could try to write separate posts for RQM, Bohm, and so on, but I’m not convinced that this is the case. Any single-world theory needs either spooky action at a distance, or really awful amateur epistemology plus spooky action at a distance, and there’s just no reason to even hypothesize single-world theories in the first place.
It is not worth writing separate posts for each interpretation. However it is becoming increasingly apparent that, to the extent that the QM sequence matters at all, it may be worth writing a single post which outlines how your arguments apply to the other interpretations. I.e.:
A brief summary of and a link to your arguments in favor of locality, then an explicit mention of how this leads to rejecting “Ensemble, Copenhagen, de Broglie–Bohm theory, von Neumann, Stochastic, Objective collapse and Transactional” interpretations and theories.
A brief summary of and a link to your arguments about realism in general and quantum realism in particular and why the wavefunction not being considered ‘real’ counts against “Ensemble, Copenhagen, Stochastic and Relational” interpretations.
Some outright mockery of the notion that observation and observers have some kind of intrinsic or causal role (Copenhagen, von Neumann and Relational).
Mention hidden variables and the complexity burden thereof (de Broglie–Bohm, Popper).
Having such a post as part of the sequence would make it trivial to dismiss claims like:
You lead the reader toward a false dichotomy (Copenhagen or many worlds) in order to suggest that the low probability of Copenhagen implies many worlds. This ignores a vast array of other interpretations.
… as straw men. As it stands, however, this kind of claim (evidently, by reception) persuades many readers, despite being significantly different from the reasoning that you intended to convey.
If it is worth your maintaining active endorsement of your QM posts, it may be worth ensuring both that it is somewhat difficult to actively misrepresent them and also that the meaning of your claims is as clear as it can conveniently be made. If there are Mihaly Baraszes out there whom you can recruit via the sanity of your physics epistemology, there are also quite possibly IMO gold medalists out there who could be turned off by seeing negative caricatures of your QM work so readily accepted, and then not bother looking further.
Note that the Born probabilities really obviously have something to do with the unitarity of QM, while no single-world interpretation is going to have this be anything but a random contingent fact.
Not so. If we insist that our predictions need to be probabilities (take the Born probabilities as fundamental/necessary), then unitarity becomes equivalent to the statement that probabilities have to sum to 1, and we can then try to piece together what our update equation should look like. This is the approach taken by the ‘minimalist’/‘ensemble’ interpretation that Ballentine’s textbook champions; he uses the requirement that probabilities sum to 1 and some group theory (related to the Galilean symmetry group) to motivate the form of the Schroedinger equation. Edit to clarify: In some sense, it’s the reverse of many worlds: instead of taking the Schroedinger axioms as fundamental and attempting to derive Born, take the operator/probability axioms seriously and try to derive Schroedinger.
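(A heavily compressed sketch of that reverse direction; the actual argument in Ballentine is longer and more careful. If total probability $\langle\psi|\psi\rangle = 1$ must be preserved at all times, the evolution map has to be norm-preserving, hence unitary, and a continuous one-parameter unitary family has a self-adjoint generator:

$$U^\dagger(t)\,U(t) = \mathbf{1},\qquad U(t) = e^{-iHt/\hbar} \;\;\Longrightarrow\;\; i\hbar\,\partial_t \psi = H\psi .$$

The Galilean-symmetry part of the argument is then what pins down the form of $H$, not the equation itself.)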
I believe the same consideration could be said of the consistent histories approach, but I’d have to think about it before I’d fully commit.
Edit to add: Also, what about “non-spooky” action at a distance? Something like the transactional interpretation, where we take relativity seriously and use both the forward and backward Green’s functions of the Dirac/Klein-Gordon equation? This integrates very nicely with Barbour’s timeless physics, properly derives the Born rule, has a single world, BUT requires some stochastic modifications to the Schroedinger equation.
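(As a reminder of the objects being referred to, with sign conventions varying by textbook: the Green’s functions of the Klein-Gordon operator satisfy

$$(\Box + m^{2})\,G(x - x') = -\,\delta^{4}(x - x'),$$

with the retarded solution $G_{\mathrm{ret}}$ supported on the forward light cone ($t > t'$) and the advanced solution $G_{\mathrm{adv}}$ on the backward one; the transactional picture keeps both rather than discarding the advanced one as “unphysical.”)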
What surprises me in the QM interpretational world is that the interaction process itself is clearly more than just a unitary evolution of some wave function, given how the number of particles is not conserved, requiring the full QFT approach, and probably more, yet (nearly?) all interpretations stop at the QM level, without any attempt at some sort of second quantization. Am I missing something here?
Mostly just that QFT is very difficult and not rigorously formulated. Haag’s theorem (and Wightman’s extension) tell us that an interacting quantum field theory can’t live in a nice Hilbert space, so there is a very real sense in which realistic QFTs only exist perturbatively. This makes interpretation something of a nightmare.
Basically, we ignore a bunch of messy complications (and potential inconsistency) just to shut-up-and-calculate, no one wants to dig up all that ‘just’ to get to the messy business of interpretation.
More or less. If the axiomatic field theory guys ever make serious progress, expect a flurry of me-too type interpretation papers to immediately follow. Until then, good luck interpreting a theory that isn’t even fully formulated yet.
If you ever are in a bar after a particle phenomenology conference lets out, ask the general room what, exactly, a particle is, and what it means that the definition is NOT observer independent.
Then what is it, exactly, that particle detectors detect? Because it surely can’t be interaction-free limits of fields. Also, when we go to the Schroedinger equation with a potential, what are we modeling? It can’t be a particle; there is a non-perturbative potential! Also, for any charged particle, the IR divergence prevents the limit, so you have to be careful: ‘real’ electrons are linear combinations of ‘bare’ electrons and photons.
What I meant was that if you think of field excitations propagating “between interactions”, they can be identified with particles. And you are right, I was neglecting those pesky massless virtual photons in the IR limit. As for the SE with a potential, this is clearly a semi-classical setup; there are no external classical potentials, they all come as some mean-field picture of a reasonably stable many-particle interaction (a contradiction in terms though it might be). I think I pointed that out earlier in some thread.
The more I learn about the whole thing, the more I realize that all of Quantum Physics is basically a collection of miraculously working hacks, like narrow trails in a forest full of unknown deadly wildlife. This is markedly different from classical physics, including relativity, where most of the territory is mapped, but there are still occasional dangers, most of which are clearly marked with orange cones.
Somebody: Virtual photons don’t actually exist: they’re just a bookkeeping device to help you do the maths.
Someone else, in a different context: Real photons don’t actually exist: each photon is emitted somewhere and absorbed somewhere else a possibly long but still finite amount of time later, making that a virtual photon. Real photons are just a mathematical construct approximating virtual photons that live long enough.
Me (in yet a different context, jokingly): [quotes the two people above] So, virtual photons don’t exist, and real photons don’t exist. Therefore, no photons exist at all.
Me (in yet a different context, jokingly): [quotes the two people above] So, virtual photons don’t exist, and real photons don’t exist. Therefore, no photons exist at all.
This is less joking than you think: it’s more or less correct. If you change the final conclusion to “there isn’t a good definition of photon” you’d be there. It’s worse for QCD, where the theory has an SU(3) symmetry you pretty much have to sever in order to treat the theory perturbatively.
all of Quantum Physics is basically a collection of miraculously working hacks
It really is. When you look at the experiments they’re performing, it’s kind of a miracle they get any kind of usable data at all. And explaining it to intelligent people is this near-infinite recursion of “But how do they know that experiment says what they say it does” going back more than a century with more than one strange loop.
Seriously, I’ve tried explaining just the proof that electrons exist, and in the end the best argument is that all the math we’ve built assuming their existence has really good predictive value. Which sounds like great evidence until you start confronting all the strange loops (the best experiments assume electromagnetic fields...) in that evidence, and I don’t even know how to -begin- untangling those. I’m convinced you could construct parallel physics with completely different mechanics (maybe the narrow trails aren’t as narrow as you’d think?) and get exactly the same results. And quantum field theory’s history of parallel physics doesn’t exactly help my paranoia there, even if they did eventually clean -most- of it up.
in the end the best argument is that all the math we’ve built assuming their existence has really good predictive value.
I fail to see the difference between this and “electrons exist”. But then my definition of existence only talks about models, anyway.
I am also not sure what strange loops you are referring to, feel free to give a couple of examples.
I’m convinced you could construct parallel physics with completely different mechanics [...] and get exactly the same results.
Most likely. It happens quite often (like Heisenberg’s matrix mechanics vs Schrodinger’s wave mechanics). Again, I have no problem with multiple models giving the same predictions, so I fail to see the source of your paranoia...
My beef with quantum physics is that there are many straightforward questions within its own framework it does not have answers to.
Then it’s equivalent to “electrons exist”. This is quite a common occurrence in physics, especially these days, holography and all. It also happens in condensed matter a lot, where quasi-particles like holes and phonons are a standard approximation. Do holes “exist” in a doped semiconductor? Certainly as much as electrons exist, unless you are a hard reductionist insisting that it makes sense to talk about simulating a Boeing 747 from quarks.
One example is mentioned: the proof that electrons exist assumes the existence of (electrically charged) electromagnetic fields (Thomson’s experiment), while the proof of electromagnetic fields -as- electrically charged comes from electron scattering and similar experiments.
(I’m fine with “electrons exist as a phenomenon, even if they’re not the phenomenon we expect them to be”, but that tends to put people in an even more skeptical frame of mind than before I started “explaining”. I’ve generally given up such explanations; it appears I’m hopelessly bad at it.)
Another strange loop is in the quantization of energy (which requires electrical fields to be quantized, the evidence for which comes from the quantization of energy to begin with). Strange loops are -fine-, taken as a whole—taken as a whole the evidence can be pretty good—but when you’re stepping a skeptical person through it step by step, it’s hard to justify the next step when the previous step depends on it. The Big Bang Theory is another—the theory requires something to plug the gap in expected versus received background radiation, and the evidence for the plug (dark energy, for example) pretty much requires BBT to be true to be meaningful.
(Although it may be that a large part of the problem with the strange loops is that only the earliest experiments tend to be easily found in textbooks and on the Internet, and later less loop-prone experiments don’t get much attention.)
One example is mentioned: the proof that electrons exist assumes the existence of (electrically charged) electromagnetic fields (Thomson’s experiment), while the proof of electromagnetic fields -as- electrically charged comes from electron scattering and similar experiments.
The existence of electromagnetic fields is just the existence of light. You can build up the whole theory of electricity and magnetism without mentioning electrons. Charge is just a definition that tells us that some types of matter attract some other types of matter.
Once you have electromagnetic fields understood well, you can ask questions like “well, what is this piece of metal made up of, what is this piece of plastic made up of”, etc., and you can measure charges and masses of the various constituents. It’s not actually self-referential in the way you propose.
You’re correct that you can build up the theory without electrons—exactly this happened. That history produced linearly stepwise theories isn’t the same as the evidence being linearly stepwise, however.
Light IS electromagnetic fields. The phrase “electrically charged electromagnetic fields” is a contradiction: the fields aren’t charged. Charges react to the field.
If the fields WERE charged in some way, the theory would be non-linear.
In this case there is no loop: you can develop the electromagnetic theory around light, and from there proceed to electrons if you like.
Light, in the theory you’re indirectly referencing, is a disturbance in the electromagnetic field, not the field itself.
The fields are charged; hence all the formulas involving them reflect charge in one form or another (charge density is pretty common); the amplitude of the field is defined as the force exerted on positively charged matter in the field. (The reason for this definition is that most electromagnetic fields we interact with are negatively charged, or have negative charge density, on account of electrons being more easily manipulated than cations, protons, plasma, or antimatter.)
With some creative use of relativity you can render the charge irrelevant for the purposes of (a carefully chosen) calculation. This is not the same as the charge not existing, however.
You are using charge in some non-standard way. Charges are sources or sinks of the field.
An electromagnetic field does not sink or source more field; if it did, Maxwell’s equations would be non-linear. There is no such thing as a ‘negatively charged electromagnetic field’: there are just electromagnetic fields. Now, the electromagnetic field can have a negative (or positive) amplitude, but this is not the same as saying it’s negatively charged.
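(For reference, this is just the structure of Maxwell’s equations in the standard textbook form:

$$\nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0},\qquad \nabla\cdot\mathbf{B} = 0,\qquad \nabla\times\mathbf{E} = -\,\partial_t\mathbf{B},\qquad \nabla\times\mathbf{B} = \mu_0\mathbf{J} + \mu_0\varepsilon_0\,\partial_t\mathbf{E}.$$

The only sources on the right-hand sides are the charge density $\rho$ and current $\mathbf{J}$; $\mathbf{E}$ and $\mathbf{B}$ never appear as sources of themselves, which is exactly the linearity being pointed to.)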
Still not clear what you are having trouble with. I interpret “electron exist” as “I have this model I call electron which is better at predicting certain future inputs than any competing model”. Not sure what it has to do with morality or paperclipping.
How do you interpret “such-and-such an entity is required by such-and-such a theory, which seems to work, but turns out not to exist”? Do things wink in and out of existence as one theory replaces another?
“Given a model that predicts accurately, what would you do differently if the objects described in the model do or don’t exist at some ontological level? If there is no difference, what are we worrying about?”
I think you overread shminux. My attempted steelman of his position would be:
Of course there is something external to our minds, which we all experience. Call that “reality” if you like. Whatever reality is, it creates regularity such that we humans can make and share predictions. Are there atoms, or quarks, or forces out there in the territory? Experts in the field have said yes, but sociological analysis like The Structure of Scientific Revolutions gives us reasons to be skeptical. More importantly, resolving that metaphysical discussion does nothing to help us make better predictions in the future.
I happen to disagree with him because I think resolving that dispute has the potential to help us make better predictions in the future. But your comment appears to strawman shminux by asserting that he doesn’t believe in external reality at all, when he clearly believes there is some cause of the regularity that allows his models to make accurate predictions.
Saying “there is regularity” is different from saying “regularity occurs because quarks are real.”
If this steelman is correct, my support for shminux’s position has risen considerably, but so has my posterior belief that shminux and Eliezer actually have the same substantial beliefs once you get past the naming and modeling and wording differences.
Given shminux and Eliezer’s long-standing disagreement and both affirming that they have different beliefs, this makes it seem more likely that there’s either a fundamental miscommunication, that I misunderstand the implications of the steel-manning or of Eliezer’s descriptions of his beliefs, or that this steel-manning is incorrect. Which in turn, given that they are both quite a bit more experienced in explicit rationality and reduction than I am, makes the first of the above three less likely, and thus makes it back less-than-it-would-first-seem still-slightly-more-likely that they actually agree, but also more likely that this steelman strawmans shminux in some relevant way.
Argh. I think I might need to maintain a Bayesian belief network for this if I want to think about it any more than that.
Since my expectations sometimes conflict with my subsequent experiences, I need different names for the thingies that determine my experimental predictions and the thingy that determines my experimental results. I call the former thingies ‘beliefs’, and the latter thingy ‘reality’.
I refuse to postulate an extra “thingy that determines my experimental results”. Occam’s razor and such.
So uhm. How do the experimental results, y’know, happen?
I think I understand everything else. Your position makes perfect sense. Except for that last non-postulate. Perhaps I’m just being obstinate, but there needs to be something to the pattern / regularity.
If I look at a set of models, a set of predictions, a set of experiments, and the corresponding set of experimental results, all as one big blob:
The models led to predictions—predictions about the experimental results, which are part of the model. The experiments were made according to the model that describes how to test those predictions (I might be wording this a bit confusingly?). But the experimental results… just “are”. They magically are like they are, for no reason, and they are ontologically basic in the sense that nothing at all ever determines them.
To me, it defies any reasonable logical description, and to my knowledge there does not exist a possible program that would generate this (i.e. if the program “randomly” generates the experimental results, then the randomness generator is the cause of the results, and thus is that thingy, and for any regularity observable, the algorithm that causes that regularity in the resulting program output is the thingy). Since as far as I can tell there is no possible logical construct that could ever result in a causeless ontologically basic “experimental result set” that displays regularity and can be predicted and tested, I don’t see how it’s even possible to consistently form a system where there are even models and experiences.
In short, if there is nothing at all whatsoever from which the experimental results arise, not even just a mathematical formula that can be pointed at and called ‘reality’, then this doesn’t even seem like a well-formed mathematically-expressible program, let alone one that is occam/solomonoff “simpler” than a well-formed program that implicitly contains a formula for experimental results.
No matter what kind of program you create, no matter how cleverly you spin it or complexify or simplify or reduce it, there will always, by logical necessity, be some subset of it that you can point at and say “Look here! This is what ‘determines’ what experimental results I see and restricts the possible futures! Let’s call this thingy/subset/formula ‘reality’!”
I don’t see any possibility of getting around that requirement unless I assume magic, supernatural entities, wishful thinking, ontologically basic nonlogical entities, or worse.
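A toy sketch of what I mean, in code (illustration only, not physics; the names step, observe and state are made up here):

```python
# Whatever program generates the stream of "experimental results", some
# sub-part of it is doing the generating.

def step(state):
    # the update rule -- one piece you could point at and label "reality"
    return (state * 1103515245 + 12345) % 2**31

def observe(state):
    # the map from the hidden state to what the agent actually sees
    return state % 100

state = 42
for _ in range(5):
    state = step(state)
    print(observe(state))  # the agent's "inputs"
```

However you rewrite this program, the pair of functions plus the hidden state is the subset you can point at; any predictive model is a separate program that tries to guess the printed numbers.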
No matter what kind of program you create, no matter how cleverly you spin it or complexify or simplify or reduce it, there will always, by logical necessity, be some subset of it that you can point at and say “Look here! This is what ‘determines’ what experimental results I see and restricts the possible futures! Let’s call this thingy/subset/formula ‘reality’!”
I don’t see any possibility of getting around that requirement unless I assume magic, supernatural entities, wishful thinking, ontologically basic nonlogical entities, or worse.
As far as I can tell, those two paragraphs are pretty much Eliezer’s position on this, and he’s just putting that subset as an arbitrary variable, saying something like “Sure, we might not know said subset of the program or where exactly it is or what computational form it takes, but let’s just have a name for it anyway so we can talk about things more easily”.
So uhm. How do the experimental results, y’know, happen?
Are you trying to solve the question of origin? How did the external reality, that thing that determines the experimental results, in the realist model, y’know, happen?
I discount your musings about “ontological basis”, perhaps uncharitably. Instrumentally, all I care about is making accurate predictions, and the concept of external reality is sometimes useful in that sense, and sometimes it gets in the way.
No matter what kind of program you create, no matter how cleverly you spin it or complexify or simplify or reduce it, there will always, by logical necessity, be some subset of it that you can point at and say “Look here! This is what ‘determines’ what experimental results I see and restricts the possible futures! Let’s call this thingy/subset/formula ‘reality’!”
Uh, not necessarily. I call this clever program, like everything else I think up, a model. If it happens to make accurate predictions I might even call it a good model. Often it is a meta-model, or a meta-meta-model, but a model nonetheless.
I don’t see any possibility of getting around that requirement unless I assume magic, supernatural entities, wishful thinking, ontologically basic nonlogical entities, or worse.
I fail to see a requirement you think I would have to get around. Just some less-than-useful logical construct.
I think it all just finally clicked. Strawman test (hopefully this is a good enough approximation):
You do imagine patterns and formulas, and your model does (or can) contain a (meta^x)-model that we could use and call “reality” and do whatever other realist-like shenanigans, and does describe the experimental results in some way that we could say “this formula, if it ‘really existed’ and the concept of existence is coherent at all, is the cause of my experimental results and the thingy that determines them”.
You just naturally exclude going from there to assuming that the meta-model is “real”, “exists”, or is itself what is external to the models and causes everything; something which for other people requires extra mental effort and does relate to the problem of origin.
Uh, not necessarily. I call this clever program, like everything else I think up, a model. If it happens to make accurate predictions I might even call it a good model. Often it is a meta-model, or a meta-meta-model, but a model nonetheless.
Sure. What I was attempting to say is that if I look at your model of the world, and within this model find a sub-part that happens to be a meta-model of the world like that program, I could also point at a smaller sub-part of that meta-model and say “Within this meta-model that you have in your model of the world, this is the modeled ‘cause’ of your experimental results, they all happen according to this algorithm”.
So now, given that the above is at least a reasonable approximation of your beliefs, the hypotheses for one of us misinterpreting Eliezer have risen quite considerably.
Personally, I tend to mentally “simplify” my model by saying that the program in question “is” (reality), for purposes of not having to redefine and debate things with people. Sometimes, though, when I encounter people who think “quarks are really real out there and have a real position in a really existing space”, I just get utterly confused. Quarks are just useful models of the interactions in the world. What’s “actually” doing the quark-ing is irrelevant.
So your logic is that there is some fundamental subalgorithm somewhere deep down in the stack of models, and this is what you think makes sense to call external reality? I have at least two issues with this formulation. One is that every model supposedly contains this algorithm. Lots of high-level models are polymorphic; you can replace quarks with bits or wooden blocks and they still hold. The other is that, once you put this algorithm outside the model space, you are tempted to consider other similar algorithms which have no connection with the rest of the models whatsoever, like the mathematical universe. The term “exist” gains a meaning not present in its original instrumental Latin definition: to appear or to stand out. And then we are off the firm ground of what can be tested and into the pure unconnected ideas, like the “post-utopian” Eliezer so despises, yet apparently implicitly adopts. Or maybe I’m being uncharitable here. He never engaged me on this point.
I think both you and DaFranker might be going a bit too deep down the meta-model rabbit-hole. As far as I understand, when a scientist says “electrons exists”, he does not mean,
These mathematical formulae that I wrote down describe an objective reality with 100% accuracy.
Rather, he’s saying something like,
There must be some reason why all my experiments keep coming out the way they do, and not in some other way. Sure, this could be happening purely by chance, but the probability of this is so tiny as to be negligible. These formulae describe a model of whatever it is that’s supplying my experimental results, and this model predicts future results correctly 99.999999% of the time, so it can’t be entirely wrong.
As far as I understand, you would disagree with the second statement. But, if so, how do you explain the fact that our experimental results are so reliable and consistent ? Is this just an ineffable mystery ?
I don’t disagree with the second statement, I find parts of it meaningless or tautological. For example:
These formulae describe a model of whatever it is that’s supplying my experimental results
The part in bold is redundant. You would normally say “of Higgs decay” or something to that effect.
, and this model predicts future results correctly 99.999999% of the time, so it can’t be entirely wrong.
The part in bold is tautological. Accurate prediction is the definition of not being wrong (within the domain of applicability). In that sense Newtonian physics is not wrong, it’s just not as accurate.
The part in bold is tautological. Accurate prediction is the definition of not being wrong
The instrumentalist definition. For realists, an accurate theory can still be wrong because it fails to correspond to reality, or posits non-existent entities. For instance, an epicyclic theory of the solar system can be made as accurate as you like.
Accurate prediction is the definition of not being wrong (within the domain of applicability)
I meant to make a further-reaching statement than that. If we believe that our model approximates that (postulated) thing that is causing our experiments to come out a certain way, then we can use this model to devise novel experiments, which are seemingly unrelated to the experiments we are doing now; and we could expect these novel experiments to come out the way we expected, at least on occasion.
For example, we could say, “I have observed this dot of light moving across the sky in a certain way. According to my model, this means that if I were to point my telescope at some other part of sky, we would find a much dimmer dot there, moving in a specific yet different way”.
This is a statement that can only be made if you believe that different patches of the sky are connected, somehow, and if you have a model that describes the entire sky, even the pieces that you haven’t looked at yet.
If different patches of the sky are completely unrelated to each other, the likelihood of you observing what you’d expect is virtually zero, because there are too many possible observations (an infinite number of them, in fact), all equally likely. I would argue that the history of science so far contradicts this assumption of total independence.
In that sense Newtonian physics is not wrong, it’s just not as accurate.
This may be off-topic, but I would agree with this statement. Similarly, the statement “the Earth is flat” is not, strictly speaking, wrong. It works perfectly well if you’re trying to lob rocks over a castle wall. Its inaccuracy is too great, however, to launch satellites into orbit.
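For a sense of scale (back-of-the-envelope only, assuming a spherical Earth of radius ~6371 km): over a horizontal distance d, the surface drops below the flat-Earth tangent plane by roughly d²/2R.

```python
R = 6_371_000  # metres, rough Earth radius

for d in (100, 10_000, 1_000_000):  # castle-wall, artillery, satellite scales
    drop = d ** 2 / (2 * R)  # sagitta approximation
    print(f"d = {d:>9,} m   flat-Earth error ~ {drop:.3g} m")
```

Sub-millimetre over a castle wall’s throw, tens of kilometres at orbital distances; the same approximation is fine for one purpose and useless for the other.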
So your logic is that there is some fundamental subalgorithm somewhere deep down in the stack of models, and this is what you think makes sense to call external reality?
Sort-of.
I’m saying that there’s a sufficiently fuzzy and inaccurate polymorphic model (or sets of models, or meta-description of the requirements and properties for relevant models) of “the universe” that could be created and pointed at as “the laws”, which, if known fully and accurately, could be “computed” or simulated or something, and computing this algorithm perfectly would in principle let us predict all of the experimental results.
If this theoretical, not-perfectly-known sub-algorithm is a perfect description of all the experimental results ever, then I’m perfectly willing to slap the labels “fundamental” and “reality” on it and call it a day, even though I don’t see why this algorithm would be more “fundamentally existing” than the exact same algorithm with all parameters multiplied by two, or some other algorithm that produces the same experimental results in all possible cases.
The only reason I refer to it in the singular—“the sub-algorithm”—is because I suspect we’ll eventually have a way to write and express as “an algorithm” the whole space/set/field of possible algorithms that could perfectly predict inputs, if we knew the exact set that those are in. I’m led to believe it’s probably impossible to find this exact set.
I find this approach very limiting. There is no indication that you can construct anything like that algorithm. Yet by postulating its existence (ahem), you are forced into a mode of thinking where “there is this thing called reality with some fundamental laws which we can hopefully learn some day”. As opposed to “we can keep refining our models and explain more and more inputs, and discover new and previously unknown inputs and explain them too, and predict more and so on”. Without ever worrying if some day there is nothing more to discover, because we finally found the holy grail, the ultimate laws of reality. I don’t mind if it’s turtles all the way down.
In fact, in the spirit of QM and as often described in SF/F stories, the mere act of discovery may actually change the “laws”, if you are not careful. Or maybe we can some day do it intentionally, construct our own stack of turtles. Oh, the possibilities! And all it takes is to let go of one outdated idea, which is, like Aristotle’s impetus, ripe for discarding.
I’m not sure it’s as different as all that from shminux’s perspective.
By way of analogy, I know a lot of people who reject the linguistic habit of treating “atheism” as referring to a positive belief in the absence of a deity, and “agnosticism” as referring to the absence of a positive belief in the presence of a deity. They argue that no, both positions are atheist; in the absence of a positive belief in the presence of a deity, one does not believe in a deity, which is the defining characteristic of the set of atheist positions. (Agnosticism, on this view, is the position that the existence of a deity cannot be known, not merely the observation that one does not currently know it. And, as above, on this view that means agnosticism implies atheism.)
If I substitute (reality, non-realism, the claim that reality is unknowable) for (deity, atheism, agnosticism) I get the assertion that the claim that reality is unknowable is a non-realist position. (Which is not to say that it’s specifically an instrumentalist position, but we’re not currently concerned with choosing among different non-realist positions.)
All of that said, none of it addresses the question which has previously been raised, which is how instrumentalism accounts for the at-least-apparently-non-accidental relationship between past inputs, actions, models, and future inputs. That relationship still strikes me as strong evidence for a realist position.
I can’t see much evidence that the people who construe atheism and agnosticism in the way you describe are actually correct. I agree that the no-reality position and the unknowable-reality position could both be considered anti-realist, but they are still substantively different. Deriving no-reality from unknowable reality always seems like an error to me, but maybe someone has an impressive defense of it.
Well, I certainly don’t want to get into a dispute about what terms like “atheism”, “agnosticism”, “anti-realism”, etc. ought to mean. All I’ll say about that is if the words aren’t being used and interpreted in consistent ways, then using them does not facilitate communication. If the goal is communication, then it’s best not to use those words.
Leaving language aside, I accept that the difference between “there is no reality” and “whether there is a reality is systematically unknowable” is an important difference to you, and I agree that deriving the former from the latter is tricky.
I’m pretty sure it’s not an important difference to shminux. It certainly isn’t an important difference to me… I can’t imagine why I would ever care about which of those two statements is true if at least one of them is.
Well, I certainly don’t want to get into a dispute about what terms like “atheism”, “agnosticism”, “anti-realism”, etc. ought to mean.
I don’t see why not.
All I’ll say about that is if the words aren’t being used and interpreted in consistent ways, then using them does not facilitate communication. If the goal is communication, then it’s best not to use those words.
Or settle their correct meanings using a dictionary, or something.
Leaving language aside, I accept that the difference between “there is no reality” and “whether there is a reality is systematically unknowable” is an important difference to you, and I agree that deriving the former from the latter is tricky.
I’m pretty sure it’s not an important difference to shminux.
If shminux is using arguments for Unknowable Reality as arguments for No Reality, then shminux’s arguments are invalid whatever shminux cares about.
It certainly isn’t an important difference to me… I can’t imagine why I would ever care about which of those two statements is true if at least one of them is.
One seems a lot more far-fetched than the other to me.
Well, I certainly don’t want to get into a dispute about what terms like “atheism”, “agnosticism”, “anti-realism”, etc. ought to mean.
I don’t see why not.
If all goes well in a definitional dispute, at the end of it we have agreed on what meaning to assign to a word. I don’t really care; I’m usually perfectly happy to assign to it whatever meaning my interlocutor does. In most cases, there was some other more interesting question about the world I was trying to get at, which got derailed by a different discussion about the meanings of words. In most of the remaining cases, the discussion about the meanings of words was less valuable to me than silence would have been.
That’s not to say other people need to share my values, though; if you want to join definitional disputes (by referencing a dictionary or something) go right ahead. I’m just opting out.
If shminux is using arguments for Unknowable Reality as arguments for No Reality,
I don’t think he is, though I could be wrong about that.
Agnosticism = believing we can’t know if God exists
Atheism = believing God does not exist
Theism = believing God exists
turtles-all-the-way-down-ism = believing we can’t know what reality is (can’t reach the bottom turtle)
instrumentalism/anti-realism = believing reality does not exist
realism = believing reality exists
Thus anti-realism and realism map to atheism and theism, but agnosticism doesn’t map to infinite-turtle-ism because it says we can’t know if God exists, not what God is.
Agnosticism = believing we can’t know if God exists
Or believing that it’s not a meaningful or interesting question to ask
instrumentalism/anti-realism = believing reality does not exist
That’s quite an uncharitable conflation. Antirealism is believing that reality does not exist. Instrumentalism is believing that reality is a sometimes useful assumption.
Or believing that it’s not a meaningful or interesting question to ask
Those would be ignosticism and apatheism respectively.
That’s quite an uncharitable conflation. Antirealism is believing that reality does not exist. Instrumentalism is believing that reality is a sometimes useful assumption.
Yes, yes, we all know your idiosyncratic definition of “exist”, I was using the standard meaning because I was talking to a realist.
Yeah. The issue here, I gather, has a lot to do with domain-specific knowledge—you’re a physicist, you have a general idea of how physics does not distinguish between, for example, 0 and two worlds of opposite phase which cancel out from our perspective. Which is way different from the naive idea of some sort of computer simulation, where of course two simulations with opposite signs being summed are a very different thing ‘from the inside’ from plain 0. If we start attributing reality to components of the sum in Feynman’s path integral… that’s going to get weird.
You realize that, assuming Feynman’s path integral makes accurate predictions, shminux will attribute it as much reality as, say, the moon, or your inner experience.
Thanks for the clarification, it helps. An agnostic with respect to God (which is what “agnostic” has come to mean by default) would say both that we can’t know if God exists, and also that we can’t know the nature of God. So I think the analogy still holds.
Right. But! An agnostic with respect to the details of reality—an infinite-turtle-ist—need not be an agnostic with respect to reality, even if an agnostic with respect to reality is also an agnostic with respect to its details (although I’m not sure if that follows in any case.)
(shrug) Sure. So my analogy only holds between agnostics-about-God (who question the knowability of both the existence and nature of God) and agnostics-about-reality (who question the knowability of both the existence and nature of reality).
As you say, there may well be other people out there, for example those who question the knowability of the details, but not of the existence, of reality. (For a sufficiently broad understanding of “the details” I suspect I’m one of those people, as is almost everyone I know.) I wasn’t talking about them, but I don’t dispute their existence.
I have to admit, this has gotten rarefied enough that I’ve lost track both of your point and my own.
So, yeah, maybe I’m confusing knowing-X-exists with knowing-details-of-X for various Xes, or maybe I’ve tried to respond to a question about (one, the other, just one, both) with an answer about (the other, one, both, just one). I no longer have any clear notion, either of which is the case or why it should matter, and I recommend we let this particular strand of discourse die unless you’re willing to summarize it in its entirety for my benefit.
I predict that these discussions, even among smart, rational people will go nowhere conclusive until we have a proper theory of self-aware decision making, because that’s what this all hinges on. All the various positions people are taking in this are just packaging up the same underlying confusion, which is how not to go off the rails once your model includes yourself.
Not that I’m paying close attention to this particular thread.
And all it takes is to let go of one outdated idea, which is, like Aristotle’s impetus, ripe for discarding.
This is not at all important to your point, but the impetus theory of motion was developed by John Philoponus in the 6th century as an attack on Aristotle’s own theory of motion. It was part of a broadly Aristotelian programme, but it’s not something Aristotle developed. Aristotle himself has only traces of a dynamical theory (the theory being attacked by Philoponus is sort of an off-hand remark), and he concerned himself mostly with what we would probably call kinematics. The Aristotelian principle carried through in Philoponus’ theory is the principle that motion requires the simultaneous action of a mover, which is false with respect to motion but true with respect to acceleration. In fact, if you replace ‘velocity’ with ‘acceleration’ in a certain passage of the Physics, you get F=ma. So we didn’t exactly discard Aristotle’s (or Philoponus’) theory, important precursors as they were to the idea of inertia.
In fact, if you replace ‘velocity’ with ‘acceleration’ in a certain passage of the Physics, you get F=ma.
That kind of replacement seems like a serious type error—velocity is not really anything like acceleration. Like saying that if you replace P with zero, you can prove P = NP.
“we can keep refining our models and explain more and more inputs”
Hm.
On your account, “explaining an input” involves having a most-accurate-model (aka “real world”) which alters in response to that input in some fashion that makes the model even more accurate than it was (that is, better able to predict future inputs). Yes?
If so… does your account then not allow for entering a state where it is no longer possible to improve the predictive power of our most accurate model, such that there is no further input-explanation to be done? If it does… how is that any less limiting than the realist’s view allowing for entering a state where there is no further understanding of reality to be done?
I mean, I recognize that it’s possible to have an instrumentalist account in which no such limitative result applies, just as it’s possible to have a realist account in which no such limitative result applies. But you seem to be saying that there’s something systematically different between instrumentalist and realist accounts here, and I don’t quite see why that should be.
You make a reference a little later on to “mental blocks” that realism makes more likely, and I guess that’s another reference to the same thing, but I don’t quite see what it is that that mental block is blocking, or why an instrumentalist is not subject to equivalent mental blocks.
Does the question make sense? Is it something you can further clarify?
If so… does your account then not allow for entering a state where it is no longer possible to improve the predictive power of our most accurate model, such that there is no further input-explanation to be done?
Maybe you are reading too much into what I said. If your view is that what we try to understand is this external reality, it’s quite a small step to assuming that some day it will be understood in its entirety. This sentiment has been expressed over and over by very smart people, like the proverbial Lord Kelvin’s warning that “physics is almost done”, or Laplacian determinism. If you don’t assume that the road you travel leads to a certain destination, you can still decide that there are no more places to go as your last trail disappears, but it is by no means an obvious conclusion.
If your view is that what we try to understand is this external reality, it’s quite a small step to assuming that some day it will be understood in its entirety.
Well, OK. I certainly agree that this assumption has been made by realists historically. And while I’m not exactly sure it’s a bad thing, I’m willing to treat it as one for the sake of discussion.
That said… I still don’t quite get what the systematic value-difference is. I mean, if my view is instead that what we try to achieve is maximal model accuracy, with no reference to this external reality… then what? Is it somehow a longer step from there to assuming that some day we’ll achieve a perfectly accurate model? If so, why is that? If not, then what have I gained by switching from the goal of “understand external reality in its entirety” to the goal of “achieve a perfectly accurate model”?
If I’m following you at all, it seems you’re arguing in favor of a non-idealist position much more than a non-realist position. That is, if it’s a mistake to “assume that the road you travel leads to a certain destination”, it follows that I should detach from “ultimate”-type goals more generally, whether it’s a realist’s goal of ultimately understanding external reality, or an instrumentalist’s goal of ultimately achieving maximal model accuracy, or some other ontology’s goal of ultimately doing something else.
Have I missed a turn somewhere? Or is instrumentalism somehow better suited to discouraging me from idealism than realism is? Or something else?
Look, I don’t know if I can add much more. What started my deconversion from realism is watching smart people argue about interpretations of QM, Boltzmann brains and other untestable ontologies. After a while these debates started to seem silly to me, so I had to figure out why. Additionally, I wanted to distill the minimum ontology, something which needn’t be a subject of pointless argument, but only of experimental checking. Eventually I decided that external reality is just an assumption, like any other. This seems to work for me, and saves me a lot of worrying about untestables. Most physicists follow this pragmatic approach, except for a few tenured dudes who can afford to speculate on any topic they like. Max Tegmark and Don Page are more or less famous examples. But few physicists worry about formalizing their ontology of pragmatism. They follow the standard meaning of the terms exist, real, true, etc., and when these terms lead to untestable speculations, their pragmatism takes over and they lose interest, except maybe for some idle chat over a beer. A fine example of compartmentalization. I’ve been trying to decompartmentalize and see where the pragmatic approach leads, and my interpretation of instrumentalism is the current outcome. It lets me spot early many statements whose implications a pragmatist would eventually ignore, which is quite satisfying. I am not saying that I have finally worked out the One True Ontology, or that I have resolved every issue to my satisfaction, but it’s the best I’ve been able to cobble together. But I am not willing to trade it for a highly compartmentalized version of realism, or the Eliezerish version of many untestable worlds and timeless this or that. YMMV.
But the “turtles all the way down” or the method in which the act of discovery changes the law...
Why can’t that also be modeled? Even if the model is self-modifying meta-recursive turtle-stack infinite “nonsense”, there probably exists some way to describe it, model it, understand it, or at least point towards it.
This very “pointing towards it” is what I’m doing right now. I postulate that no matter the form it takes, even if it seems logically nonsensical, there’s a model which can explain the results proportionally to how much we understand about it (we may end up being never able to perfectly understand it).
Currently, the best fuzzy picture of that model, by my pinpointing of what-I’m-referring-to, is precisely what you’ve just described:
“we can keep refining our models and explain more and more inputs, and discover new and previously unknown inputs and explain them too, and predict more and so on”.
That’s what I’m pointing at. I don’t care either how many turtle stacks or infinities or regresses or recursions or polymorphic interfaces or variables or volatilities there are. The hypothetical description that a perfect agent with perfect information looking at our models and inputs from the outside would give of the program that we are part of is the “algorithm”.
Maybe the Turing tape never halts, and just keeps computing on and on more new “laws of physics” as we research on and on and do more exotic things, such that there are no “true final ultimate laws”. Of course that could happen. I have no solid evidence either way, so why would I restrict my thinking to the hypothesis that there are? I like flexibility in options like that.
So yeah, my definition of that formula is pretty much self-referential and perhaps not always coherently explained. It’s a bit like CEV in that regards, “whatever we would if …” and so on.
Once all reduced away, all I’m really postulating is the continuing ability of possible agents who make models and analyze their own models to point at and frame and describe mathematically and meta-modelize the patterns of experimental results, given sufficient intelligence and ability to model things. It’s not nearly as powerfully predictive or groundbreaking as I might have made it sound in earlier comments.
For more comparisons, it’s a bit like when I say “my utility function”. Clearly, there might not be a final utility function in my brain, it might be circular, or it might regress infinitely, or be infinitely self-modifying and self-referential, but by golly when I say that my best approximation of my utility function values having food much more highly than starving, I’m definitely pointing at and approximating something in there in that mess of patterns, even if I might not know exactly where I’m pointing at.
That “something” is my “true utility function”, even if it would have to be defined with fuzzy self-recursive meta-games and timeless self-determinance or some other crazy shenanigans.
So I guess that’s about also what I refer to when I say “reality”.
I’m not really disagreeing. I’m just pointing out that, as you list progressively more and more speculative models, looser and looser connected to the experiment, the idea of some objective reality becomes progressively less useful, and the questions like “but what if the Boltzmann Brains/mathematical universe/many worlds/super-mega crossover/post-utopian colonial alienation is real?” become progressively more nonsensical.
Yet people forget that and seriously discuss questions like that, effectively counting angels on the head of a pin. And, on the other hand, they get this mental block due to the idea of some static objective reality out there, limiting their model space.
These two fallacies are what started me on my way from realism to pragmatism/instrumentalism in the first place.
the idea of some objective reality becomes progressively less useful
Useful for what? Prediction? But realists aren’t using these models to answer the “what input should I expect” question; they are answering other questions, like “what is real” and “what should we value”.
And “nothing” is an answer to “what is real”. What does instrumentalism predict?
If it’s really better or more “true” on some level, I suppose you might predict a superintelligence would self-modify into an anti-realist? Seems unlikely from my realist perspective, at least, so I’d have to update in favour of something.
If it’s really better or more “true” on some level
But if that’s not a predictive level, then instrumentalism is inconsistent. It is saying that all other non-predictive theories should be rejected for being non-predictive, but that it is itself somehow an exception. This is of course parallel to the flaw in Logical Positivism.
If I had such a persuasive argument, naturally it would already have persuaded me, but my point is that it doesn’t need to persuade people who already agree with it—just the rest of us.
And once you’ve self-modified into an instrumentalist, I guess there are other arguments that will now persuade you—for example, that this hypothetical underlying layer of “reality” has no extra predictive power (at least, I think that’s what shminux finds persuasive.)
But your comment appears to strawman shminux by asserting that he doesn’t believe in external reality at all, when he clearly believes there is some cause of the regularity that allows his models to make accurate predictions.
I’m not sure. I have seen comments that contradict that interpretation. If shminux were the kind of irrealist who believes in an external world of an unknown nature, shminux would have no reason not to call it reality. But shminux insists reality is our current best model.
ETA:
another example
“I refuse to postulate an extra ‘thingy that determines my experimental results’.”
Of course there is something external to our minds, which we all experience. …
Experts in the field provided prescriptions, called laws, which let you predict some future inputs, with varying success.
I’m not sure I understand your point of view, given these two statements. If experts in the field are able to predict future inputs with a reasonably high degree of certainty; and if we agree that these inputs are external to our minds; is it not reasonable to conclude that such experts have built an approximate mental model of at least a small portion of whatever it is that causes the inputs? Or are you asserting that they just got lucky?
Sorry for the newbie question, I’m late to this discussion and am probably missing a lot of context...
I’m making similar queries here, since this intrigues me and I was similarly confused by the non-postulate. Maybe between all the cross-interrogations we’ll finally understand what shminux is saying ;)
The inputs appear to be highly repeatable and consistent with each other. This could be purely due to chance, of course, but IMO this is less likely than the inputs being interdependent in some way.
The inputs appear to be highly repeatable and consistent with each other.
Some are and some aren’t. When a certain subset of them is, I am happy to use a model that accurately predicts what happens next. If there is a choice, then the most accurate and simplest model. However, I am against extrapolating this approach into “there is this one universal thing that determines all inputs ever”.
What is the alternative, though? Over time, the trend in science has been to unify different groups of inputs; for example, electricity and magnetism were considered to be entirely separate phenomena at one point. So were chemistry and biology, or electricity and heat, etc. This happens all the time on smaller scales, as well; and every time it does, is it not logical to update your posterior probability of that “one universal thing” being out there to be a little bit higher?
And besides, what is more likely: that 10 different groups of inputs are consistent and repeatable due to N reasons, or due to a single reason?
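For what it’s worth, here is a toy way to put numbers on that question. The figures and the independence assumption are mine, not anything from the thread; the sketch only shows why, under such assumptions, a single shared reason wins by a huge margin:

```python
# Hypothetical numbers for the "one reason vs. N reasons" comparison.
# Assume any single postulated cause of a regularity has prior probability 0.1
# of holding up, and that under the "N separate reasons" hypothesis the 10
# regularities need 10 independent causes to all hold at once.
p_cause = 0.1

p_one_common_reason = p_cause            # one posit has to hold
p_ten_separate_reasons = p_cause ** 10   # ten independent posits all have to hold

print(p_one_common_reason)                            # 0.1
print(p_ten_separate_reasons)                         # about 1e-10
print(p_one_common_reason / p_ten_separate_reasons)   # odds ratio of about 1e9
```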
Intuitively, to me at least, it seems simpler to assume that everything has a cause, including the regularity of experimental results, and that a mathematical algorithm whose computed outputs we perceive as inputs / experimental results is a simpler cause than randomness, magic, or nothingness.
See also my other reply to your other reply (heh). I think I’m piecing together your description of things now. I find your consistency with it rather admirable (and very epistemologically hygienic, I might add).
Experts in the field provided prescriptions, called laws, which let you predict some future inputs, with varying success.
Experts in the field have said things that were very philosophically naive. The steel-manning of those types of statements is isomorphic to physical realism.
And you are using territory in a weird way. If I understood the purpose of your usage, I might be able to understand it better. In my usage, “territory” seems roughly like the thing you call “inputs + implication of some regularity in inputs.” That’s how I’ve interpreted Yudkowsky’s use of the word as well. Honestly, my perception was that the proper understanding of territory was not exactly central to your dispute with him.
In short, Yudkowsky says the map “corresponds” to the territory in sufficiently fine grain that sentences like “atoms exist” are meaningful. You seem to think that the metaphor of the map is hopelessly misleading. I’m somewhere between, in that I think the map metaphor is helpful, but the map is not fine-grained enough to think “atoms exist” is a meaningful sentence.
I think this philosophy-of-science entry in the SEP is helpful, if only by defining the terms of the debate. I mostly like Feyerabend’s thinking, Yudkowsky and most of this community do not, and your position seems to be trying to avoid the debate. Which you could do more easily if you would recognize what we mean by our words.
For outside observers: No, I haven’t defined map or corresponds. Also, meaningful != true. Newtonian physics is meaningful and false.
And you are using territory in a weird way. If I understood the purpose of your usage, I might be able to understand it better. In my usage, “territory” seems roughly like the thing you call “inputs + implication of some regularity in inputs.”
Well, almost the same thing. To me regularity is the first (well-tested) meta-model, not a separate assumption.
That’s how I’ve interpreted Yudkowsky’s use of the word as well.
I’m not so sure, see my reply to DaFranker.
Honestly, my perception was that the proper understanding of territory was not exactly central to your dispute with him.
I think it is absolutely central. Once you postulate external reality, a whole lot of previously meaningless questions become meaningful, including whether something “exists”, like ideas, numbers, Tegmark’s level 4, many untestable worlds and so on.
I think this philosophy-of-science entry in the SEP is helpful, if only by defining the terms of the debate.
Only marginally. My feeling is that this apparent incommensurability is due to people not realizing that their disagreements are due to some deeply buried implicit assumptions and the lack of desire to find these assumptions and discuss them.
I think it is absolutely central. Once you postulate external reality, a whole lot of previously meaningless questions become meaningful, including whether something “exists”, like ideas, numbers, Tegmark’s level 4, many untestable worlds and so on.
Not to mention questions like “If we send these colonists over the horizon, does that kill them or not?”
Which brings me to a question: I can never quite figure out how your instrumentalism interacts with preferences. Without assuming the existence of something you care about, on what basis do you make decisions?
In other words, instrumentalism is a fine epistemic position, but how to actually build an instrumental agent with good consequences is unclear. Doesn’t wireheading become an issue?
If I’m accidentally assuming something that is confusing me, please point it out.
Not to mention questions like “If we send these colonists over the horizon, does that kill them or not?”
This question is equally meaningful in both cases, and equally answerable. And the answer happens to be the same, too.
Which brings me to a question: I can never quite figure out how your instrumentalism interacts with preferences. Without assuming the existence of something you care about, on what basis do you make decisions?
Your argument reminds me of “Obviously morality comes from God, if you don’t believe in God, what’s to stop you from killing people if you can get away with it?” It is probably an uncharitable reading of it, though.
The “What I care about” thingie is currently one of those inputs. Like, what compels me to reply to your comment? It can partly be explained by the existing models in psychology, sociology and other natural sciences, and in part is still a mystery. Some day we will hopefully be able to analyze and simulate mind and brain better, and explain how this desire arises, and why one shminux decides to reply to and not ignore your comment. Maybe I feel good when smart people publicly agree with me. Maybe I’m satisfying some other preference I’m not aware of.
It’s not an argument; it’s an honest question. I’m sympathetic to instrumentalism, I just want to know how you frame the whole preferences issue, because I can’t figure out how to do it. It probably is like the God is Morality thing, but I can’t just accidentally find my way out of such a pickle without some help.
I frame it as “here’s all these possible worlds, some being better than others, and only one being ‘real’, and then here’s this evidence I see, which discriminates which possible worlds are probable, and here’s the things I can do that further affect which is the real world, and I want to steer towards the good ones.” As you know, this makes a lot of assumptions and is based pretty directly on the fact that that’s how human imagination works.
If there is a better way to do it, which you seem to think that there is, I’m interested. I don’t understand your answer above, either.
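For concreteness, here is a minimal sketch of that “steer towards the good possible worlds” framing, with entirely hypothetical worlds, credences, utilities, and action effects; it is only my illustration of the quoted description, not anyone’s actual decision procedure:

```python
# A toy agent in the "possible worlds" framing; all numbers are made up.
worlds = {"no_slavery": 0.4, "slavery": 0.6}      # credence over which world is "real"
utility = {"no_slavery": 10.0, "slavery": 0.0}    # how good each world is

def expected_utility(credence):
    return sum(credence[w] * utility[w] for w in credence)

def after_action(credence):
    # Hypothetical action that shifts probability mass toward the preferred world.
    p_good = min(1.0, credence["no_slavery"] + 0.2)
    return {"no_slavery": p_good, "slavery": 1.0 - p_good}

print(expected_utility(worlds))                # value of leaving things alone: 4.0
print(expected_utility(after_action(worlds)))  # value after acting: 6.0 -> take the action
```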
Well, I’ll give it another go, despite someone diligently downvoting all my related comments.
“here’s all these possible worlds, some being better than others, and only one being ‘real’, and then here’s this evidence I see, which discriminates which possible worlds are probable, and here’s the things I can do that further affect which is the real world, and I want to steer towards the good ones.”
Same here, with a marginally different dictionary. Although you are getting close to a point I’ve been waiting for people to bring up for some time now.
So, what are those possible worlds but models? And isn’t the “real world” just the most accurate model? Properly modeling your actions lets you affect the preferred “world” model’s accuracy, and such. The remaining issue is whether the definition of “good” or “preferred” depends on realist vs instrumentalist outlook, and I don’t see how. Maybe you can clarify.
First, let me apologize pre-emptively if I’m retreading old ground, I haven’t carefully read this whole discussion. Feel free to tell me to go reread the damned thread if I’m doing so. That said… my understanding of your account of existence is something like the following:
A model is a mental construct used (among other things) to map experiences to anticipated experiences. It may do other things along the way, such as represent propositions as beliefs, but it needn’t. Similarly, a model may include various hypothesized entities that represent certain consistent patterns of experience, such as this keyboard I’m typing on, my experiences of which consistently correlate with my experiences of text appearing on my monitor, responses to my text later appearing on my monitor, etc.
On your account, all it means to say “my keyboard exists” is that my experience consistently demonstrates patterns of that sort, and consequently I’m confident of the relevant predictions made by the set of models (M1) that have in the past predicted patterns of that sort, not-so-confident of relevant predictions made by the set of models (M2) that predict contradictory patterns, etc. etc. etc.
We can also say that M1 all share a common property K that allows such predictions. In common language, we are accustomed to referring to K as an “object” which “exists” (specifically, we refer to K as “my keyboard”), which is as good a way of talking as any, though sloppy in the way of all natural language.
We can consequently say that M1 all agree on the existence of K, though of course that may well elide over many important differences in the ways that various models in M1 instantiate K.
We can also say that M1 models are more “accurate” than M2 models with respect to those patterns of experience that led us to talk about K in the first place. That is, M1 models predict relevant experience more reliably/precisely/whatever.
And in this way we can gradually converge on a single model (MR1), which includes various objects, and which is more accurate than all the other models we’re aware of. We can call MR1 “the real world,” by which we mean the most accurate model.
Of course, this doesn’t preclude uncovering a new model MR2 tomorrow which is even more accurate, at which point we would call MR2 “the real world”. And MR2 might represent K in a completely different way, such that the real world would now, while still containing the existence of my keyboard, contain it in a completely different way. For example, MR1 might represent K as a collection of atoms, and MR2 might represent K as a set of parameters in a configuration space, and when I transition from MR1 to MR2 the real world goes from my keyboard being a collection of atoms to my keyboard being a set of parameters in a configuration space.
Similarly, it doesn’t preclude our experiences starting to systematically change such that the predictions made by MR1 are no longer reliable, in which case MR1 stops being the most accurate model, and some other model (MR3) is the most accurate model, at which point we would call MR3 “the real world”. For example, MR3 might not contain K at all, and I would suddenly “realize” that there never was a keyboard.
All of which is fine, but the difficulty arises when after identifying MR1 as the real world we make the error of reifying MRn, projecting its patterns onto some kind of presumed “reality” R to which we attribute a kind of pseudo-existence independent of all models. Then we misinterpret the accuracy of a model as referring, not to how well it predicts future experience, but to how well it corresponds to R.
Of course, none of this precludes being mistaken about the real world… that is, I might think that MR1 is the real world, when in fact I just haven’t fully evaluated the predictive value of the various models I’m aware of, and if I were to perform such an evaluation I’d realize that no, actually, MR4 is the real world. And, knowing this, I might have various degrees of confidence in various models, which I can describe as “possible worlds.”
And I might have preferences as to which of those worlds is real. For example, MP1 and MP2 might both be possible worlds, and I am happier in MP1 than MP2, so I prefer MP1 be the real world. Similarly, I might prefer MP1 to MP2 for various other reasons other than happiness.
Which, again, is fine, but again we can make the reification error by assigning to R various attributes which correspond, not only to the real world (that is, the most accurate model), but to the various possible worlds MRx..y. But this isn’t a novel error, it’s just the extension of the original error of reification of the real world onto possible worlds.
That said, talking about it gets extra-confusing now, because there’s now several different mistaken ideas about reality floating around… the original “naive realist” mistake of positing R that corresponds to MR, the “multiverse” mistake of positing R that corresponds to MRx..y, etc. When I say to a naive realist that treating R as something that exists outside of a model is just an error, for example, the naive realist might misunderstand me as trying to say something about the multiverse and the relationships between things that “exist in the world” (outside of a model) and “exist in possible worlds” (outside of a model), which in fact has nothing at all to do with my point, which is that the whole idea of existence outside of a model is confused in the first place.
As was the case once or twice before, you have explained what I meant better than I did in my earlier posts. Maybe you should teach your steelmanning skills, or make a post out of it.
The reification error you describe is indeed one of the fallacies a realist is prone to. Pretty benign initially, it eventually grows cancerously into the multitude of MRs whose accuracy is undefined, either by definition (QM interpretations) or through untestable ontologies, like “everything imaginable exists”. Once you fall for it, promoting any M->R, or a certain set {MP}->R, seems forever meaningful.
The unaddressed issue is the means of actualizing a specific model (that is, making it the most accurate). After all, if all you manipulate is models, how do you affect your future experiences?
Maybe you should teach your steelmanning skills, or make a post out of it.
I’ve thought about this, but on consideration the only part of it I understand explicitly enough to “teach” is Miller’s Law (the first one), and there’s really not much more to say about it than quoting it and then waiting for people to object. Which most people do, because approaching conversations that way seems to defeat the whole purpose of conversation for most people (convincing other people they’re wrong). My goal in discussions is instead usually to confirm that I understand what they believe in the first place. (Often, once I achieve that, I become convinced that they’re wrong… but rarely do I feel it useful to tell them so.)
The rest of it is just skill at articulating positions with care and precision, and exerting the effort to do so. A lot of people around here are already very good at that, some of them better than me.
The unaddressed issue is the means of actualizing a specific model (that is, making it the most accurate). After all, if all you manipulate is models, how do you affect your future experiences?
Yes. I’m not sure what to say about that on your account, and that was in fact where I was going to go next.
Actually, more generally, I’m not sure what distinguishes experiences we have from those we don’t have in the first place, on your account, even leaving aside how one can alter future experiences.
After all, we’ve said that models map experiences to anticipated experiences, and that models can be compared based on how reliably they do that, so that suggests that the experiences themselves aren’t properties of the individual models (though they can of course be represented by properties of models). But if they aren’t properties of models, well, what are they? On your account, it seems to follow that experiences don’t exist at all, and there simply is no distinction between experiences we have and those we don’t have.
I assume you reject that conclusion, but I’m not sure how. On a naive realist’s view, rejecting this is easy: reality constrains experiences, and if I want to affect future experiences I affect reality. Accurate models are useful for affecting future experiences in specific intentional ways, but not necessary for affecting reality more generally… indeed, systems incapable of constructing models at all are still capable of affecting reality. (For example, a supernova can destroy a planet.)
(On a multiverse realist’s view, this is significantly more complicated, but it seems to ultimately boil down to something similar, where reality constrains experiences and if I want to affect the measure of future experiences, I affect reality.)
Another unaddressed issue derives from your wording: “how do you affect your future experiences?” I may well ask whether there’s anything else I might prefer to affect other than my future experiences (for example, the contents of models, or the future experiences of other agents). But I suspect that’s roughly the same problem for an instrumentalist as it is for a realist… that is, the arguments for and against solipsism, hedonism, etc. are roughly the same, just couched in slightly different forms.
But if they aren’t properties of models, well, what are they? On your account, it seems to follow that experiences don’t exist at all, and there simply is no distinction between experiences we have and those we don’t have.
Somewhere way upstream I said that I postulate experiences (I called them inputs), so they “exist” in this sense. We certainly don’t experience “everything”, so that’s how you tell “between experiences we have and those we don’t have”. I did not postulate, however, that they have an invisible source called reality, pitfalls of assuming which we just discussed. Having written this, I suspect that this is an uncharitable interpretation of your point, i.e. that you mean something else and I’m failing to Millerize it.
So “existence” properly refers to a property of subsets of models (e.g., “my keyboard exists” asserts that M1 contain K), as discussed earlier, and “existence” also properly refers to a property of inputs (e.g., “my experience of my keyboard sitting on my desk exists” and “my experience of my keyboard dancing the Macarena doesn’t exist” are both coherent, if perhaps puzzling, things to say), as discussed here. Yes?
Which is not necessarily to say that “existence” refers to the same property of subsets of models and of inputs. It might, it might not, we haven’t yet encountered grounds to say one way or the other. Yes?
OK. So far, so good.
And, responding to your comment about solipsism elsewhere just to keep the discussion in one place:
Well, to a solipsist hers is the only mind that exists, to an instrumentalist, as we have agreed, the term “exist” does not have a useful meaning beyond measurability.
Well, I agree that when a realist solipsist says “Mine is the only mind that exists” they are using “exists” in a way that is meaningless to an instrumentalist.
That said, I don’t see what stops an instrumentalist solipsist from saying “Mine is the only mind that exists” while using “exists” in the ways that instrumentalists understand that term to have meaning.
That said, I still don’t quite understand how “exists” applies to minds on your account. You said here that “mind is also a model”, which I understand to mean that minds exist as subsets of models, just like keyboards do.
But you also agreed that a model is a “mental construct”… which I understand to refer to a construct created/maintained by a mind.
The only way I can reconcile these two statements is to conclude either that some minds exist outside of a model (and therefore have a kind of “existence” that is potentially distinct from the existence of models and of inputs, which might be distinct from one another) or that some models aren’t mental constructs.
My reasoning here is similar to how if you said “Red boxes are contained by blue boxes” and “Blue boxes are contained by red boxes” I would conclude that at least one of those statements had an implicit “some but not all” clause prepended to it… I don’t see how “For all X, X is contained by a Y” and “For all Y, Y is contained by an X” can both be true.
Does that make sense? If so, can you clarify which is the case? If not, can you say more about why not?
I don’t see how “For all X, X is contained by a Y” and “For all Y, Y is contained by an X” can both be true [implicitly assuming that X is not the same as Y, I am guessing].
And what do you mean here by “true”, in an instrumental sense? Do you mean the mathematical truth (i.e. a well-formed finite string, given some set of rules), or the measurable truth (i.e. a model giving accurate predictions)? If it’s the latter, how would you test for it?
Just to be clear, are you suggesting that on your account I have no grounds for treating “All red boxes are contained by blue boxes AND all blue boxes are contained by red boxes” differently from “All red boxes are contained by blue boxes AND some blue boxes are contained by red boxes” in the way I discussed?
If you are suggesting that, then I don’t quite know how to proceed. Suggestions welcomed.
If you are not suggesting that, then perhaps it would help to clarify what grounds I have for treating those statements differently, which might more generally clarify how to address logical contradiction in an instrumentalist framework.
Actually, thinking about this a little bit more, a “simpler” question might be whether it’s meaningful on this account to talk about minds existing. I think the answer is again that it isn’t, as I said about experiences above… models are aspects of a mind, and existence is an aspect of a subset of a model; to ask whether a mind exists is a category error.
If that’s the case, the question arises of whether (and how, if so) we can distinguish among logically possible minds, other than by reference to our own.
So perhaps I was too facile when I said above that the arguments for and against solipsism are the same for a realist and an instrumentalist. A realist rejects or embraces solipsism based on their position on the existence and moral value of other minds, but an instrumentalist (I think?) rejects a priori the claim that other minds can meaningfully be said to exist or not exist, so presumably can’t base anything on such (non)existence.
So I’m not sure what an instrumentalist’s argument rejecting solipsism looks like.
models are aspects of a mind, and existence is an aspect of a subset of a model; to ask whether a mind exists is a category error
Sort of, yes. Except mind is also a model.
So I’m not sure what an instrumentalist’s argument rejecting solipsism looks like.
Well, to a solipsist hers is the only mind that exists, to an instrumentalist, as we have agreed, the term “exist” does not have a useful meaning beyond measurability. For example, the near-solipsist idea of a Boltzmann brain is not an issue for an instrumentalist, since it changes nothing in their ontology. Same deal with dreams, hallucinations and simulation.
In addition, I would really like to address the fact that current models can be used to predict future inputs in areas that are thus far completely unobserved. IIRC, this is how positrons were discovered, for example. If all we have are disconnected inputs, how do we explain the fact that even those inputs which we haven’t even thought of observing thus far still correlate to our models? We would expect to see this if both sets of inputs were contingent upon some shared node higher up in the Bayesian network, but we wouldn’t expect to see this (except by chance, which is infinitesimally low) if the inputs were mutually independent.
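To make the “shared node higher up” intuition concrete, here is a minimal simulation; it is only my illustration, not anyone’s actual model of the inputs. Two observable streams driven by one hidden common cause come out strongly correlated, while two genuinely independent streams do not:

```python
# Toy common-cause demonstration: correlated vs. independent input streams.
import random

def correlated_pair(n=10000):
    xs, ys = [], []
    for _ in range(n):
        hidden = random.random()               # the shared node "higher up"
        xs.append(hidden + random.gauss(0, 0.1))
        ys.append(hidden + random.gauss(0, 0.1))
    return xs, ys

def independent_pair(n=10000):
    return ([random.random() for _ in range(n)],
            [random.random() for _ in range(n)])

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

print(corr(*correlated_pair()))    # roughly 0.9: a common cause shows up as correlation
print(corr(*independent_pair()))   # roughly 0.0: no common cause, no correlation
```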
FWIW, my understanding of shminux’s account does not assert that “all we have are disconnected inputs,” as inputs might well be connected.
That said, it doesn’t seem to have anything to say about how inputs can be connected, or indeed about how inputs arise at all, or about what they are inputs into. I’m still trying to wrap my brain around that part.
ETA: oops. I see shminux already replied to this. But my reply is subtly different, so I choose to leave it up.
I don’t see how someone could admit that their inputs are connected in the sense of being caused by a common source that orders them, without implicitly admitting to a real external world.
But I acknowledge that saying inputs are connected in the sense that they reliably recur in particular patterns, and saying that inputs are connected in the sense of being caused by a common source that orders them, are two distinct claims, and one might accept that the former is true (based on observation) without necessarily accepting that the latter is true.
I don’t have a clear sense of what such a one might then say about how inputs come to reliably recur in particular patterns in the first place, but often when I lack a clear sense of how X might come to be in the absence of Y, it’s useful to ask “How, then, does X come to be?” rather than to insist that Y must be present.
One can of course only say that inputs have occurred in patterns up till now. Realists can explain why they would continue to do so on the basis of the Common Source meta-model; anti-realists cannot.
At the risk of repeating myself: I agree that I don’t currently understand how an instrumentalist could conceivably explain how inputs come to reliably recur in particular patterns. You seem content to conclude thereby that they cannot explain such a thing, which may be true. I am not sufficiently confident in the significance of my lack of understanding to conclude that just yet.
This seems to me to be the question of origin “where do the inputs come from?” in yet another disguise. The meta-model is that it is possible to make accurate models, without specifying the general mechanism (e.g. “external reality”) responsible for it. I think this is close to subjective Bayesianism, though I’m not 100% sure.
The meta-model is that it is possible to make accurate models, without specifying the general mechanism (e.g. “external reality”) responsible for it.
I think it’s possible to do so without specifying the mechanism, but that’s not the same thing as saying that no mechanism at all exists. If you are saying that, then you need to explain why all these inputs are correlated with each other, and why our models can (on occasion) correctly predict inputs that have not been observed yet.
Let me set up an analogy. Let’s say you acquire a magically impenetrable box. The box has 10 lights on it, and a big dial-type switch with 10 positions. When you set the switch to position 1, the first light turns on, and the rest of them turn off. When you set it to position 2, the second light turns on, and the rest turn off. When you set it to position 3, the third light turns on, and the rest turn off. These are the only settings you’ve tried so far.
Does it make sense to ask the question, “what will happen when I set the switch to positions 4..10”? If so, can you make a reasonably confident prediction as to what will happen? What would your prediction be?
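One crude way to make “reasonably confident” quantitative is Laplace’s rule of succession, used here as a stand-in for a proper prior over box mechanisms; the only data it uses are the three trials described above:

```python
# After 3 of 3 trials followed the pattern "position k turns on light k",
# Laplace's rule of succession gives the chance that position 4 follows it too.
successes, trials = 3, 3
p_next_follows_pattern = (successes + 1) / (trials + 2)
print(p_next_follows_pattern)  # 0.8 -> a reasonably confident "light 4 turns on"
```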
The meta-model is that it is possible to make accurate models, without specifying the general mechanism (e.g. “external reality”) responsible for it.
In the sense that it is always possible to leave something just unexplained. But the posit of an external reality of some sort is not explanatorily idle, and not, therefore, ruled out by Occam’s razor. The posit of an external reality of some sort (it doesn’t need to be specific) explains, at the meta-level, the process of model-formulation, prediction, accuracy, etc.
Which is, in fact, the number of posits shminux advocates making, is it not? Adapt your models to be more accurate, sure, but don’t expect that to mean anything more than the model working.
Except I think he’s claimed to value things like “the most accurate model not containing slaves” (say) which implies there’s something special about the correct model beyond mere accuracy.
I suppose they are positing inputs, but they’re arguably not positing models as such—merely using them. Or at any rate, that’s how I’d ironman their position.
If I understand both your and shminux’s comments, this might express the same thing in different terms:
We have experiences (“inputs”.)
We wish to optimize these inputs according to whatever goal structure.
In order to do this, we need to construct models to predict how our actions affect future inputs, based on patterns in how inputs have behaved in the past.
Some of these models are more accurate than others. We might call accurate models “real”.
However, the term “real” holds no special ontological value, and they might later prove inaccurate or be replaced by better models.
Thus, we have a perfectly functioning agent with no conception (or need for) a territory—there is only the map and the inputs. Technically, you could say the inputs are the territory, but the metaphor isn’t very useful for such an agent.
Huh, looks like we are, while not in agreement, at least speaking the same language. Not sure how Dave managed to accomplish this particular near-magical feat.
As before, I mostly attribute it to the usefulness of trying to understand what other people are saying.
I find it’s much more difficult to express my own positions in ways that are easily understood, though. It’s harder to figure out what is salient and where the vastest inferential gulfs are.
You might find it correspondingly useful to try and articulate the realist position as though you were trying to explain it to a fellow instrumentalist who had no experience with realists.
You might find it correspondingly useful to try and articulate the realist position as though you were trying to explain it to a fellow instrumentalist who had no experience with realists.
I actually tried this a few times, even started a post draft titled “explain realism to a baby AI”. In fact, I keep fighting my own realist intuition every time I don the instrumentalist hat. But maybe I am not doing it well enough.
Ah. Yeah, if your intuitions are realist, I expect it suffers from the same problem as expressing my own positions. It may be a useful exercise in making your realist intuitions explicit, though.
Maybe we should organize a discussion where everyone has to take positions other than their own? If this really helps clarity (and I think it does) it could end up producing insights much more difficult (if not actually impossible) to reach with normal discussion.
(Plus it would be good practice at the Ideological Turing Test, generalized empathy skills, avoiding the antipattern of demonizing the other side, and avoiding steelmanning arguments into forms that don’t threaten your own arguments (since they would be threatening the other side’s arguments, as it were.))
Maybe we should organize a discussion where everyone has to take positions other than their own?
It seems to me to be one of the basic exercises in rationality, also known as “Devil’s advocate”. However, Eliezer dislikes it for some reason, probably because he thinks that it’s too easy to do poorly and then dismiss with a metaphorical self-congratulatory pat on one’s own back. Not sure how much of this is taught or practiced at CFAR camps.
Yup. In my experience, though, Devil’s Advocates are usually pitted against people genuinely arguing their cause, not other devil’s advocates.
However, Eliezer dislikes it for some reason, probably because he thinks that it’s too easy to do poorly and then dismiss with a metaphorical self-congratulatory pat on one’s own back.
Yeah, I remember being surprised by that reading the Sequences. He seemed to be describing acting as your own devil’s advocate, though, IIRC.
Well, if any nonrealists want to argue the realist position in response to my articulation of the instrumentalist position, they are certainly welcome to do so, and I can try to continue defending it… though I’m not sure how good a job of it I’ll do.
I was actually thinking of random topics, perhaps ones that are better understood by LW regulars, at least at first. Still …
if any nonrealists want to argue the realist position in response to my articulation of the instrumentalist position, they are certainly welcome to do so
Wait, there are nonrealists other than shminux here?
I think I got a cumulative total of some 100 downvotes on this thread, so somehow I don’t believe that a top-level post would be welcome. However, if TheOtherDave were to write one as a description of an interesting ontology he does not subscribe to, this would probably go over much better. I doubt he would be interested, though.
As it happens, I agree with your position. I was actually thinking of making a post that points to all the important comments here without taking a position, while asking the discussion to continue there. However, making an argumentative post is also possible, although I might not be willing to expend the effort.
Cool. If you are motivated at some point to articulate an anti-realist account of how non-accidental correlations between inputs come to arise (in whatever format you see fit), I’d appreciate that.
As I understand it, the word “how” is used to demand a model for an event. Since I already have models for the correlations of my inputs, I don’t feel the need for further explanation. More concretely, should you ask “How does closing your eyes lead to a blackout of your vision?” I would answer “After I close my eyes, my eyelids block all of the light from getting into my eye.”, and I consider this answer satisfying. Just because I don’t believe in an ontologically fundamental reality doesn’t mean I don’t believe in eyes and eyelids and light.
In M1, vision depends on light, which is blocked by eyelids. Therefore in M1, we predict that closing my eyes leads to a blackout of vision. In M2, vision depends on something else, which is not blocked by eyelids. Therefore in M2, we predict that closing my eyes does not lead to a blackout of vision.
At some later time, an event occurs in M1: specifically, I close my eyelids. At the same time, I have a blackout of vision. This increases my confidence in the predictive power of M1.
So far, so good.
At the same time, an identical event-pair occurs in M2: I close my eyes and my vision blacks out. This decreases my confidence in the predictive power of M2.
If I’ve understood you correctly, both the realist and the instrumentalist account of all of the above is “there are two models, M1 and M2, the same events occur in both, and as a consequence of those events we decide M1 is more accurate than M2.”
The realist account goes on to say “the reason the same events occur in both models is because they are both fed by the same set of externally realized events, which exist outside of either model.” The instrumentalist account, IIUC, says “the reason the same events occur in both models is not worth discussing; they just do.”
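For what it’s worth, the bookkeeping described above can be written out explicitly. The priors and likelihoods here are made-up illustrations, not anything from the thread; the point is only that the same observed event shifts confidence between the two models:

```python
# Toy Bayesian update over the two models of vision described above.
# M1: vision depends on light, which eyelids block    -> predicts blackout on eye-close.
# M2: vision depends on something eyelids don't block -> predicts no blackout.
prior = {"M1": 0.5, "M2": 0.5}
likelihood_blackout = {"M1": 0.95, "M2": 0.05}   # P(observed blackout | model)

evidence = sum(prior[m] * likelihood_blackout[m] for m in prior)
posterior = {m: prior[m] * likelihood_blackout[m] / evidence for m in prior}
print(posterior)   # roughly {'M1': 0.95, 'M2': 0.05}: confidence moves toward M1
```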
That’s still possible, for convenience purposes, even if shminux is unwilling to describe their beliefs—your beliefs, apparently, I think a lot of people will have some questions to ask you now—in a top-level post.
Ooh, excellent point. I’d do it myself, but unfortunately my reason for suggesting it is that I want to understand your position better—my puny argument would be torn to shreds, I have too many holes in my understanding :(
The actual world is also a possible world. Non-actual possible worlds are only accessible as models. Realists believe they can bring the actual world into line with desired models to some extent.
And isn’t the “real world” just the most accurate model?
Not for realists.
Properly modeling your actions lets you affect the preferred “world” model’s accuracy, and such. The remaining issue is whether the definition of “good” or “preferred” depends on realist vs instrumentalist outlook, and I don’t see how. Maybe you can clarify.
For realists, wireheading isn’t a good aim. For anti-realists, it is the only aim.
For realists, wireheading isn’t a good aim. For anti-realists, it is the only aim.
Realism doesn’t preclude ethical frameworks that endorse wireheading.
I’m less clear about the second part, though.
Rejecting (sufficiently well implemented) wireheading requires valuing things other than one’s own experience. I’m not yet clear on how one goes about valuing things other than one’s own experience in an instrumentalist framework, but then again I’m not sure I could explain to someone who didn’t already understand it how I go about valuing things other than my own experience in a realist framework, either.
but then again I’m not sure I could explain to someone who didn’t already understand it how I go about valuing things other than my own experience in a realist framework, either.
Realism doesn’t preclude ethical frameworks that endorse wireheading
No, but they are a minority interest.
I’m not yet clear on how one goes about valuing things other than one’s own experience in an instrumentalist framework, but then again I’m not sure I could explain to someone who didn’t already understand it how I go about valuing things other than my own experience in a realist framework, either.
If someone accepts that reality exists, you have a head start. Why do anti-realists care about accurate prediction? They don’t think predictive models represent an external reality, and they don’t think accurate models can be used as a basis to change anything external. Either prediction is an end in itself, or it’s for improving inputs.
they don’t think accurate models can be used as a basis to change anything external. Either prediction is an end in itself, or it’s for improving inputs.
My understanding of shminux’s position is that accurate models can be used, somehow, to improve inputs.
I don’t yet understand how that is even in principle possible on his model, though I hope to improve my understanding.
Your last statement shows that you have much to learn from TheOtherDave about the principle of charity. Specifically, don’t think the other person to be stupider than you are, without a valid reason. So, if you come up with a trivial objection to their point, consider that they might have come across it before and addressed it in some way. They might still be wrong, but likely not in the obvious ways.
Sorry, just realized I skipped over the first part of your comment.
It happens, but this should not be the initial assumption.
Doesn’t that depend on the prior? I think most holders of certain religious or political beliefs, for instance, hold them for trivially wrong reasons*. Perhaps you mean it should not be the default assumption here?
If I answer ‘yes’ to this, then I am confusing the map with the territory, surely? Yes, there may very well be a possible world that’s a perfect match for a given model, but how would I tell it apart from all the near-misses?
The “real world” is a good deal more accurate than the most accurate model of it that we have.
Well, I’ll give it another go, despite someone diligently downvoting all my related comments.
It’s not me, FWIW; I find the discussion interesting.
That said, I’m not sure what methodology you use to determine which actions to take, given your statement that the “real world” is just the most accurate model. If all you cared about was the accuracy of your model, would it not be easier to avoid taking any physical actions, and simply change your model on the fly as it suits you? This way, you could always make your model fit what you observe. Yes, you’d be grossly overfitting the data, but is that even a problem?
I didn’t say it’s all I care about. Given a choice of several models and an ability to make one of them more accurate than the rest, I would likely exercise this choice, depending on my preferences, the effort required and the odds of success, just like your garden variety realist would. As Eliezer used to emphasize, “it all adds up to normality”.
I am guessing that you, TimS and nyan_sandwich all seem to think that my version of instrumentalism is incompatible with having preferences over possible worlds. I have trouble understanding where this twist is coming from.
It’s not that I think that your version of instrumentalism is incompatible with preferences, it’s more like I’m not sure I understand what the word “preferences” even means in your context. You say “possible worlds”, but, as far as I can tell, you mean something like, “possible models that predict future inputs”.
Firstly, I’m not even sure how you account for our actions affecting these inputs, especially given that you do not believe that various sets of inputs are connected to each other in any way; and without actions, preferences are not terribly relevant. Secondly, you said that a “preference” for you means something like, “a desire to make one model more accurate than the rest”, but would it not be easier to simply instantiate a model that fits the inputs? Such a model would be 100% accurate, wouldn’t it?
Your having a preference for worlds without, eg, slavery can’t possibly translate into something like “I want to change the world external to me so that it no longer contains slaves”. I have trouble understanding what it would translate to. You could adopt models where things you don’t like don’t exist, but they wouldn’t be accurate.
Your having a preference for worlds without, eg, slavery can’t possibly translate into something like “I want to change the world external to me so that it no longer contains slaves”.
No, but it translates to its equivalent:
I prefer models which describe a society without slavery to be accurate (i.e. confirmed in a later testing).
I prefer models which describe a society without slavery to be accurate (i.e. confirmed in a later testing).
So you’re saying you have a preference over the map, as opposed to the territory (your experiences, in this case).
That sounds subject to some standard pitfalls, offhand, where you try to fool yourself into choosing the “no-slaves” map instead of trying to optimize, well, reality, such as the slaves—perhaps with an experience machine, through simple self-deception, or maybe some sort of exploit involving Occam’s Razor.
I agree that self-deception is a “real” possibility. Then again, it is also a possibility for a realist. Or a dualist. In fact, confusing map and territory is one of the most common pitfalls, as you well know. Would it be more likely for an instrumentalist to become instrumenta-lost? I don’t see why it would be the case. For example, from my point of view, you arbitrarily chose a comforting Christian map (is it an inverse of “some sort of exploit involving Occam’s Razor”?) instead of a cold hard uncaring one, even though you seem to be preferring realism over instrumentalism.
Ah, no, sorry, I meant that those options would satisfy your stated preferences, not that they were pitfalls on the road to it. I’m suggesting that since you don’t want to fall into those pitfalls, those aren’t actually your preferences, whether because you’ve made a mistake or I have (please tell me if I have.)
I propose a WW2 mechanical aiming computer as an example of a model. It is built out of the gears that can be easily and conveniently manufactured, and there’s very little doubt that the universe does not use anything even remotely similar to produce the movement of the projectile through the air, even if we assume that such a question is meaningful.
A case can be made that physics is not that much different from a WW2 aiming computer (built out of the mathematics that is available and can be conveniently used). And with regards to MWI, a case can be made that it is similar to removing the only ratchet in the mechanical computer and proclaiming the rest of the gears the reality, because somehow “from the inside” it would allegedly still feel the same, even though the mechanical computer, without this ratchet, no longer works for predicting anything.
Of course, it is not clear how close physics is to a mechanical aiming computer in terms of how the internals can correspond to the real world.
So, what are those possible worlds but models? And isn’t the “real world” just the most accurate model? Properly modeling your actions lets you affect the preferred “world” model’s accuracy, and such. The remaining issue is whether the definition of “good” or “preferred” depends on realist vs instrumentalist outlook, and I don’t see how. Maybe you can clarify.
Interesting. So we prefer that some models or others be accurate, and take actions that we expect to make that happen, in our current bag of models.
Ok I think I get it. I was confused about what the referent of your preferences would be if you did not have your models referring to something. I see that you have made the accuracy of various models the referent of preferences. This seems reasonable enough.
I can see now that I’m confused about this stuff a bit more than I thought I was. Will have to think about it a bit more.
It works fine—as long as you only care about optimizing inputs, in which case I invite you to go play in the holodeck while the rest of us optimize the real world.
If you can’t find a holodeck, I sure hope you don’t accidentally sacrifice your life to save somebody or further some noble cause. After all, you won’t be there to experience the resulting inputs, so what’s the point?
It’s not a utility function over inputs, it’s over the accuracy of models.
If I were a shminux-style rationalist, I would not choose to go to the holodeck because that does not actually make my current preferred models of the world more accurate. It makes the situation worse, actually, because in the me-in-holodeck model, I get misled and can’t affect the stuff outside the holodeck.
Just because someone frames things differently doesn’t mean they have to make the obvious mistakes and start killing babies.
For example, I could do what you just did to “maximize expected utility over possible worlds” by choosing to modify my brain to have erroneously high expected utility. It’s maximized now right? See the problem with this argument?
It all adds up to normality, which probably means we are confused and there is an even simpler underlying model of the situation.
It’s not a utility function over inputs, it’s over the accuracy of models.
Affecting the accuracy of a specified model—a term defined as “how well it predicts future inputs”—is a subset of optimizing future inputs.
If I were a shminux-style rationalist, I would not choose to go to the holodeck because that does not actually make my current preferred models of the world more accurate. It makes the situation worse, actually, because in the me-in-holodeck model, I get misled and can’t affect the stuff outside the holodeck.
You’re still thinking like a realist. A holodeck doesn’t prevent you from observing the real world—there is no “real world”. It prevents you testing how well certain models predict experiences when you take the action “leave the holodeck”, unless of course you leave the holodeck—it’s an opportunity cost and nothing more, and a minor one at that, since information holds only instrumental value.
Just because someone frames things differently doesn’t mean they have to make the obvious mistakes and start killing babies.
Pardon?
For example, I could do what you just did to “maximize expected utility over possible worlds” by choosing to modify my brain to have erroneously high expected utility. It’s maximized now right? See the problem with this argument?
Except that I (think that I) get my utility over the world, not over my experiences. Same reason I don’t win the lottery with quantum suicide.
It all adds up to normality
You know, not every belief adds up to normality—just the true ones. Imagine someone arguing you had misinterpreted happiness-maximization because “it all adds up to normality”.
Only marginally. My feeling is that this apparent incommensurability is due to people not realizing that their disagreements are due to some deeply buried implicit assumptions and the lack of desire to find these assumptions and discuss them.
That’s the standard physical realist response to Kuhn and Feyerabend. I find it confusing to hear it from you, because you certainly are not a standard physical realist.
In short, I think you are being a little too a la carte with your selection from various parts of philosophy of science.
Right, that’s fair, but it’s not really apparent from your reply which is A and which is ~A. I understand that physical realists say the same things as shminux, who professes not to be a physical realist—but then, I bet physical realists say that water is wet, too...
I don’t know that shminux has inadvertently endorsed A and ~A. I’m suspicious that this has occurred because he resists the standard physical realist definition of territory / reality, but responds to a quasi-anti-realist position with a physical realist answer that I suspect depends on the rejected definition of reality.
If I knew precisely where the contradiction was, I’d point it out explicitly. But I don’t, so I can’t.
“Given a model that predicts accurately, what would you do differently if the objects described in the model do or don’t exist at some ontological level? If there is no difference, what are we worrying about?”
If I recall correctly he abandons that particular rejection when he gets an actual answer to the first question. Specifically, he argues against belief in the implied invisible when said belief leads to making actual decisions that will result in outcomes that he will not personally be able to verify (eg. when considering Relativity and accelerated expansion of the universe).
(2) the ontological status of objects that, in principle, could never be observed (directly or indirectly)
I took shminux as trying to duck the first debate (by adopting physical pragmatism), but I think most answers to the first question do not necessarily imply particular answers to the second question.
I can imagine using a model that contains elements that are merely convenient pretenses, and don’t actually exist—like using simpler Newtonian models of gravity despite knowing GR is true (or at least more likely to be true than Newton.)
If some of these models featured things that I care about, it wouldn’t matter, as long as I didn’t think actual reality featured these things. For example, if an easy hack for predicting the movement of a simple robot was to imagine it being sentient (because I can easily calculate what humanlike minds would do using my own neural circuitry), I still wouldn’t care if it was crushed, because the sentient being described by the model doesn’t actually exist—the robot merely uses similar pathfinding.
Does that answer your question, TimS’s-model-of-shminux?
I don’t understand the paperclipping reference, but MugaSofer is a hard-core moral realist (I think). Physical pragmatism (your position) is a reasonable stance in the physical realism / anti-realism debate, but I’m not sure what the parallel position is in the moral realism / anti-realism debate.
(Edit: And for some moral realists, the justification for that position is the “obvious” truth of physical realism and the non-intuitiveness of physical facts and moral facts having a different ontological status.)
In short, “physical prediction” is a coherent concept in a way that “moral prediction” does not seem to be. A sentence of the form “I predict retaliation if I wrong someone” is a psychological prediction, not a moral prediction. Defining what “wrong” means in that sentence is the core of the moral realism / anti-realism debate.
In short, “physical prediction” is a coherent concept in a way that “moral prediction” does not seem to be.
I don’t see it.
A sentence of the form “I predict retaliation if I wrong someone” is a psychological prediction, not a moral prediction. Defining what “wrong” means in that sentence is the core of the moral realism / anti-realism debate.
Do we really have to define “wrong” here? It seems more useful to say “certain actions of mine may cause this person to experience a violation of their innate sense of fairness”, or something to that effect. Now we are doing cognitive science, not some vague philosophizing.
Do we really have to define “wrong” here? It seems more useful to say “certain actions of mine may cause this person to experience a violation of their innate sense of fairness”, or something to that effect.
At a minimum, we need an enforceable procedure for resolving disagreements between different people when each of their “innate senses of fairness” disagree. Negotiated settlement might be the gold-standard, but history shows this seldom has actually resolved major disputes.
Defining “wrong” helps because it provides a universal principled basis for others to intervene in the conflict. Alliance building also provides a basis, but is hardly universally principled (or fair, for most usages of “fair”).
Defining “wrong” helps because it provides a universal principled basis for others to intervene in the conflict.
Yes, it definitely helps to define “wrong” as a rough acceptable behavior boundary in a certain group. But promoting it from a convenient shortcut in your models into something bigger is hardly useful. Well, it is useful to you if you can convince others that your definition of “wrong” is the one true one and everyone else ought to abide by it or burn in hell. Again, we are out of philosophy and into psychology.
I’m glad we agree that defining “wrong” is useful, but I’m still confused how you think we go about defining “wrong.” One could assert:
Wrong is what society punishes.
But that doesn’t tell us how society figures out what to punish, or whether there are constraints on society’s classifications. Psychology doesn’t seem to answer these questions—there once were societies that practiced human sacrifice or human slavery.
In common usage, we’d like to be able to say those societies were doing wrong, and your usage seems inconsistent with using “wrong” in that way.
In common usage, we’d like to be able to say those societies were doing wrong, and your usage seems inconsistent with using “wrong” in that way.
No, they weren’t. Your model of objective wrongness is not a good one, it fails a number of tests.
“Human sacrifice and human slavery” is wrong now in Westernized society, because it fits under the agreed definition of wrong today. It was not wrong then. It might not be wrong again in the future, after some x-risk-type calamity.
The evolution of the agreed-upon concept of wrong is a fascinating subject in human psychology, sociology and whatever other natural science is relevant. I am guessing that more formerly acceptable behaviors get labeled as “wrong” as the overall standard of living rises and average suffering decreases. As someone mentioned before, torturing cats is no longer the good clean fun it used to be. But that’s just a guess; I would defer to the experts in the area, hopefully there are some around.
Some time in the future a perfectly normal activity of the day will be labeled as “wrong”. It might be eating animals, or eating plants, or having more than 1.0 children per person, or refusing sex when asked politely, or using anonymous nicks on a public forum, or any other activity we find perfectly innocuous.
Conversely, there were plenty of “wrong” behaviors which aren’t wrong anymore, at least not in the modern West, like proclaiming that Jesus is not the Son of God, or doing witchcraft, or marrying a person of the same sex, or...
The definition of wrong as an agreed upon boundary of acceptable behavior matches observations. The way people come to such an agreement is a topic eminently worth studying, but it should not be confused with studying the concept of wrong as if it were some universal truth.
Your position on moral realism has a respectable pedigree in moral philosophy, but I don’t think it is parallel to your position on physical realism.
As I understand it, your response to the question “Are there electrons?” is something like:
This is a wrong question. Trying to find the answer doesn’t resolve any actual decision you face.
By contrast, your response to “Is human sacrifice wrong?” is something like:
Not in the sense you mean, because “wrong” in that sense does not exist.
I don’t think there are philosophical reasons why your positions on those two issues should be in parallel, but you seem to think that your positions are in parallel, and it does not look that way to me.
I don’t think there are philosophical reasons why your positions on those two issues should be in parallel, but you seem to think that your positions are in parallel, and it does not look that way to me.
Without a notion of objective underlying reality, shminux had nothing to cash out any moral theory in.
As I understand it, your response to the question “Are there electrons?” is something like: This is a wrong question. Trying to find the answer doesn’t resolve any actual decision you face. By contrast, your response to “Is human sacrifice wrong?” is something like: Not in the sense you mean, because “wrong” in that sense does not exist.
Not quite.
“Are there electrons?” “Yes, the electron is an accurate model, though it has its issues.”
“Does light propagate in aether?” “Aether is not a good model, it fails a number of tests.”
“Is human sacrifice an unacceptable behavior in the US today?” “Yes, this model is quite accurate.”
“Is ‘wrong’ independent of the group that defines it?” “No, this model fails a number of tests.”
Seems pretty consistent to me, with all the parallels you want.
You are not using the word “tests” consistently in your examples. For luminiferous aether, “test” means something like “makes accurate predictions.” Substituting that into your answer about “wrong” yields:
No, this model fails to make accurate predictions.
Which I’m having trouble parsing as an answer to the question. If you don’t mean for that substitution to be sensible, then your parallelism does not seem to hold together.
But in deference to your statement here, I am happy to drop this topic if you’d like me to. It is not my intent to badger you, and you don’t have any obligation to continue a conversation you don’t find enjoyable or productive.
I suggest editing in additional line-breaks so that the quote is distinguished from your own contribution. (You need at least two ‘enters’ between the end of the quote and the start of your own words.)
I expected that this discussion would not achieve anything.
Simply put, the mistake both of you are making was already addressed by the meta-ethics sequence. But for a non-LW reference, see Speakers Use Their Actual Language. “Wrong” does not refer to “whatever ‘wrong’ means in our language at the time”. That would be circular. “Wrong” refers to some objective set of characteristics, that set being the same as those that we in reality disapprove of. Modulo logical uncertainty etc etc.
I expected this would not make sense to you since you can’t cash out objective characteristics in terms of predictive black boxes.
I expected that this discussion would not achieve anything.
Congratulations on a successful prediction. Of course, if you had made it before this conversation commenced, you could have saved us all the effort; next time you know something will fail, speaking up would be helpful.
Simply put, the mistake both of you are making was already addressed by the meta-ethics sequence. But for a non-LW reference, see Speakers Use Their Actual Language. “Wrong” does not refer to “whatever ‘wrong’ means in our language at the time”. That would be circular. “Wrong” refers to some objective set of characteristics, that set being the same as those that we in reality disapprove of. Modulo logical uncertainty etc etc.
I think shminux is claiming that this set of characteristics changes dynamically, and thus it is more useful to define “wrong” dynamically as well. I disagree, but then we already have a term for this (“unacceptable”) so why repurpose “wrong”?
I expected this would not make sense to you since you can’t cash out objective characteristics in terms of predictive black boxes.
Who does “you” refer to here? All participants in this discussion? Shminux only?
we already have a term for this (“unacceptable”) so why repurpose “wrong”?
Presumably shminux doesn’t consider it a repurposing, but rather an articulation of the word’s initial purpose.
next time you know something will fail, speaking up would be helpful.
Well, OK.
Using relative terms in absolute ways invites communication failure.
If I use “wrong” to denote a relationship between a particular act and a particular judge (as shminux does) but I only specify the act and leave the judge implicit (e.g., “murder is wrong”), I’m relying on my listener to have a shared model of the world in order for my meaning to get across. If I’m not comfortable relying on that, I do better to specify the judge I have in mind.
Presumably shminux doesn’t consider it a repurposing, but rather an articulation of the word’s initial purpose.
Is shminux a native English speaker? Because that’s certainly not how the term is usually used. Ah well, he’s tapped out anyway.
Well, OK.
Using relative terms in absolute ways invites communication failure.
If I use “wrong” to denote a relationship between a particular act and a particular judge (as shminux does) but I only specify the act and leave the judge implicit (e.g., “murder is wrong”), I’m relying on my listener to have a shared model of the world in order for my meaning to get across. If I’m not comfortable relying on that, I do better to specify the judge I have in mind.
Oh, I can see why it failed—they were using the same term in different ways, each insisting their meaning was “correct”—I just meant you could use this knowledge to help avoid this ahead of time.
I just meant you could use this knowledge to help avoid this ahead of time.
I understand. I’m suggesting it in that context.
That is, I’m asserting now that “if I find myself in a conversation where such terms are being used and I have reason to believe the participants might not share implicit arguments, make the arguments explicit” is a good rule to follow in my next conversation.
Congratulations on a successful prediction. Of course, if you had made it before this conversation commenced, you could have saved us all the effort; next time you know something will fail, speaking up would be helpful.
Sorry. I guess I was feeling too cynical and discouraged at the time to think that such a thing would be helpful.
Who does “you” refer to here? All participants in this discussion? Shminux only?
In this case I meant to refer to only shminux, who calls himself an instrumentalist and does not like to talk about the territory (as opposed to AIXI-style predictive models).
No, they weren’t. Your model of objective wrongness is not a good one, it fails a number of tests.
“Human sacrifice and human slavery” is wrong now in Westernized society, because it fits under the agreed definition of wrong today. It was not wrong then. It might not be wrong again in the future, after some x-risk-type calamity.
[...]
The definition of wrong as an agreed upon boundary of acceptable behavior matches observations. The way people come to such an agreement is a topic eminently worth studying, but it should not be confused with studying the concept of wrong as if it were some universal truth.
This concept of “wrong” is useful, but a) there is an existing term which people understand to mean what you describe—“acceptable”—and b) it does not serve the useful function people currently expect “wrong” to serve; that of describing our extrapolated desires—it is not prescriptive.
I would advise switching to the more common term, but if you must use it this way I would suggest warning people first, to prevent confusion.
You or TimS are the ones who introduced the term “wrong” into the conversation, I’m simply interpreting it in a way that makes sense to me. Tapping out due to lack of progress.
You or TimS are the ones who introduced the term “wrong” into the conversation
That would be TimS, because he’s the one discussing your views on moral realism with you.
I’m simply interpreting it in a way that makes sense to me.
And I’m simply warning you that using the term in a nonstandard way is predictably going to result in confusion, as it has in this case.
Tapping out due to lack of progress.
Well, that’s your prerogative, obviously, but please don’t tap out of your discussion with Tim on my account. And, um, if it’s not on my account, you might want to say it to him, not me.
Feeling or not, it’s a sense that exists in other primates, not just humans. You can certainly quantify the emotional reaction to real or perceived unfairness, which was my whole point: use cognitive science, not philosophy. And cognitive science is about building models and testing them, like any natural science.
Well, the trouble occurs when you start talking about the existence of things that, unlike electrons, you actually care about.
Say I value sentient life. If that life doesn’t factor into my predictions, does it somehow not exist? Should I stop caring about it? (The same goes for paperclips, if you happen to value those.)
EDIT: I assume you consider the least computationally complex model “better at predicting certain future inputs”?
Say I value sentient life. If that life doesn’t factor into my predictions, does it somehow not exist? Should I stop caring about it?
You have it backwards. You also use the term “exist” in a way I don’t. You don’t have to worry about refining models predicting inputs you don’t care about.
I assume you consider the least computationally complex model “better at predicting certain future inputs”?
If there is a luxury of choice of multiple models which give the same predictions, sure. Usually we are lucky if there is one good model.
Well, I am trying to get you to clarify what you mean.
You don’t have to worry about refining models predicting inputs you don’t care about.
But as I said, I don’t care about inputs, except instrumentally. I care about sentient minds (or paperclips.)
Usually we are lucky if there is one good model.
Ah … no. Invisible pink unicorns and Russell’s teapots abound. For example, what if any object passing over the cosmological horizon disappeared? Or the universe was created last Thursday, but perfectly designed to appear billions of years old? These hypotheses don’t do any worse at predicting; they just violate Occam’s Razor.
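To make that concrete, here is a minimal sketch (hypothetical Python, with made-up model strings and a deliberately crude length-based prior standing in for a Solomonoff-style prior) of why such hypotheses lose: they leave every prediction untouched while lengthening the description.

# Minimal sketch: two "models" that emit identical predictions, scored by a
# crude length prior ~ 2^(-length). The extra "created last Thursday" clause
# changes nothing observable but lengthens the description, so it is downweighted.

def predict(model_source):
    # Stand-in: both models predict exactly the same observations.
    return ["photon detected", "photon detected", "no photon"]

plain = "unitary quantum mechanics"
padded = "unitary quantum mechanics, and everything was created last Thursday"

def crude_prior(model_source):
    # Toy substitute for a description-length prior: longer text, smaller weight.
    return 2.0 ** (-len(model_source))

assert predict(plain) == predict(padded)           # same predictions
print(crude_prior(plain) > crude_prior(padded))    # True: Occam prefers the shorter model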
Well, I am trying to get you to clarify what you mean.
Believe me, I have tried many times in our discussions over the last several months. Unfortunately we seem to be speaking different languages which happen to use the same English syntax.
Invisible pink unicorns and Russell’s teapots abound.
Fine, I’ll clarify. You can always complicate an existing model in a trivial way, which is what all your examples are doing; that’s just silly. I was talking about models where one is not a trivial extension of the other with no new predictive power.
Fine, I’ll clarify. You can always complicate an existing model in a trivial way, which is what all your examples are doing; that’s just silly. I was talking about models where one is not a trivial extension of the other with no new predictive power.
Well, considering how many people seem to think that interpretations of QM other than their own are just “trivial extensions with no new predictive power”, it’s an important point.
Believe me, I have tried many times in our discussions over the last several months. Unfortunately we seem to be speaking different languages which happen to use the same English syntax.
Well, it’s pretty obvious we use different definitions of “existence”. Not sure if that qualifies as a different language, as such.
That said, you seem to be having serious trouble parsing my question, so maybe there are other differences too.
Look, you understand the concept of a paperclip maximizer, yes? How would a paperclip maximizer that used your criteria for existence act differently?
EDIT: incidentally, we haven’t been discussing this “over the last several months”. We’ve been discussing it since the fifth.
Well, considering how many people seem to think that interpretations of QM other than their own are just “trivial extensions with no new predictive power”, it’s an important point.
The interpretations are usually far from trivial and most aspire to provide an inspiration for building a testable model some day. Some even have, and have been falsified. That’s quite different from last Thursdayism.
How would a paperclip maximizer that used your criteria for existence act differently?
Why would it? A paperclip maximizer is already instrumental, it has one goal in mind, maximizing the number of paperclips in the universe (which it presumably can measure with some sensors). It may have to develop advanced scientific concepts, like General Relativity, to be assured that the paperclips disappearing behind the cosmological horizon can still be counted toward the total, given some mild assumptions, like the Copernican principle.
Anyway, I’m quite skeptical that we are getting anywhere in this discussion.
it has one goal in mind, maximizing the number of paperclips in the universe
In which universe? It doesn’t know. And it may have uncertainty about the true number. There are going to be hypothetical universes that produce the same observations but have ridiculously huge numbers of invisible paperclips at stake, which are influenced by the paperclipper’s actions. (It may even be that the simplest extra addition that makes the agent’s actions influence invisible paperclips utterly dominates all theories starting from some length, as it leaves most of the length for a busy-beaver-like construction that makes the number of invisible paperclips ridiculously huge. One extra bit for a busy beaver is seriously a lot more paperclips.) So given some sort of length prior that ignores the size of the hypothetical universe (the kind that won’t discriminate against MWI just because it’s big), those aren’t assigned a low enough prior, and they dominate its expected utility calculations.
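One way to make the arithmetic of that worry explicit (a sketch only, assuming a pure length prior proportional to 2^(-length) over hypotheses and a utility that just counts paperclips):

\[
  \mathbb{E}[U] \;=\; \sum_h 2^{-\ell(h)}\,U(h),
  \qquad
  2^{-(\ell+1)}\,BB(\ell+1) \;\gg\; 2^{-\ell}\,BB(\ell)
  \ \text{ whenever }\ BB(\ell+1) \gg 2\,BB(\ell).
\]

Since busy-beaver-style growth outruns every computable function, the extra bit costs only a factor of two in prior weight but can buy an uncomputably larger number of hypothetical paperclips, so ever-longer “invisible paperclip” hypotheses can dominate the sum unless the prior or the utility is bounded in some other way.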
The interpretations are usually far from trivial and most aspire to provide an inspiration for building a testable model some day. Some even have, and have been falsified. That’s quite different from last Thursdayism.
Well, I probably don’t know enough about QM to judge if they’re correct; but it’s certainly a claim made fairly regularly.
Why would it? A paperclip maximizer is already instrumental, it has one goal in mind, maximizing the number of paperclips in the universe (which it presumably can measure with some sensors). It may have to develop advanced scientific concepts, like General Relativity, to be assured that the paperclips disappearing behind the cosmological horizon can still be counted toward the total, given some mild assumptions, like the Copernican principle.
Let’s say it simplifies the equations not to model the paperclips as paperclips—it might be sufficient to treat them as a homogeneous mass of metal, for example. Does this mean that they do not, in fact, exist? Should a paperclipper avoid this at all costs, because it’s equivalent to them disappearing?
Removing the territory/map distinction means something that wants to change the territory could end up changing the map … doesn’t it?
I’m wondering because I care about people, but it’s often simpler to model people without treating them as, well, sentient.
Anyway, I’m quite skeptical that we are getting anywhere in this discussion.
Well, I’ve been optimistic that I’d clarified myself pretty much every comment now, so I have to admit I’m updating downwards on that.
I’m convinced you could construct parallel physics with completely different mechanics (maybe the narrow trails aren’t as narrow as you’d think?) and get exactly the same results.
Depends on what you mean by ‘different mechanics.’ Weinberg’s field theory textbook develops the argument that only quantum field theory, as a structure, allows for certain phenomenologically important characteristics (mostly cluster decomposition).
However, there IS an enormous amount of leeway within the field theory: you can make a theory where electric monopoles exist as explicit degrees of freedom and magnetic monopoles are topological gauge-field configurations, and it’s dual to a theory where magnetic monopoles are the degrees of freedom and electric monopoles exist as field configurations. While these theories SEEM very different, they make identical predictions.
Similarly, if you can only make finite numbers of measurements, adding extra dimensions is equivalent to adding lots of additional forces (the dimensional deconstruction idea), etc. Some 5d theories with gravity make the same predictions as some 4d theories without.
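The simplest toy version of that electric-magnetic duality (only the free-field case, nothing like the full statement about monopoles as solitons) is the duality rotation of the vacuum Maxwell equations:

\[
  \nabla\cdot\mathbf{E}=0,\quad \nabla\cdot\mathbf{B}=0,\quad
  \nabla\times\mathbf{E}=-\partial_t\mathbf{B},\quad
  \nabla\times\mathbf{B}=\partial_t\mathbf{E}
  \qquad\text{(units with } c=1\text{)}
\]

are mapped into themselves by

\[
  \mathbf{E}\to\mathbf{B},\qquad \mathbf{B}\to-\mathbf{E},
\]

so two descriptions that look different make identical predictions; with sources the swap must also exchange electric and magnetic charges.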
Seriously, I’ve tried explaining just the proof that electrons exist, and in the end the best argument is that all the math we’ve built assuming their existence has really good predictive value. Which sounds like great evidence until you start confronting all the strange loops (the best experiments assume electromagnetic fields...) in that evidence, and I don’t even know how to -begin- untangling those.
The same is more-or-less true if you replace ‘electrons’ with ‘temperature’.
The more I learn about the whole thing, the more I realize that all of Quantum Physics is basically a collection of miraculously working hacks, like narrow trails in a forest full of unknown deadly wildlife. This is markedly different from the classical physics, including relativity, where most of the territory is mapped, but there are still occasional dangers, most of which are clearly marked with orange cones.
Yes. While I’m not terribly up-to-date with the ‘state-of-the-art’ in theoretical physics, I feel like the situation today with renormalization and stuff is like it was until 1905 for the Lorentz-FitzGerald contraction or the black-body radiation, when people were mystified by the fact that the equations worked because they didn’t know (or, at least, didn’t want to admit) what the hell they meant. A new Einstein clearing this stuff up is perhaps overdue now. (The most obvious candidate is “something to do with quantum gravity”, but I’m prepared to be surprised.)
You guys are making possible sources of confusion between the map and the territory sound like they’re specific to QFT while they actually aren’t. “Oh, I know what a ball is. It’s an object where all the points on the surface are at the same distance from the centre.” “How can there be such a thing? The positions of atoms on the surface would fluctuate due to thermal motion. Then what is it, exactly, that you play billiards with?” (Can you find another example of this in a different recent LW thread?)
Your ball point is very different. My driving point is that there isn’t even a nice, platonic-ideal type definition of particle IN THE MAP, let alone something that connects to the territory. I understand how my above post may lead you to misunderstand what I was trying to get at.
To rephrase my above comment, I might say: some of the features a MAP of a particle needs are that it’s detectable in some way, and that it can be described in a non-relativistic limit by a Schroedinger equation. The standard QFT definitions for a particle lack both these features. They’re also not fully consistent in the case of charged particles.
In QFT there is lots of confusion about how the map works, unlike classical mechanics.
There is no ‘rigid’ in special relativity; the best you can do is Born-rigid. Even so, it’s trivial to define a ball in special relativity: just define it in the frame of a corotating observer and use four-vectors to move to the same collection of events in other frames. You learn that a ‘ball’ in special relativity has some observer-dependent properties, but that’s because length and time are observer-dependent in special relativity. So ‘radius’ isn’t a good concept, but ‘the radius so-and-so measures’ IS a good concept.
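As a minimal worked example of ‘the radius so-and-so measures’ (standard Lorentz contraction, stated here only as an illustration): a ball of rest radius R moving at speed v past an observer is measured, by simultaneous position measurements in that observer’s frame, to have

\[
  \text{extent along the motion} = 2R\sqrt{1-v^2/c^2},
  \qquad
  \text{transverse extent} = 2R,
\]

so “the radius” is frame-dependent, while “the radius observer O measures” is perfectly well defined.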
The Unruh effect is a specific instance of my general point (particle definition is observer dependent). All you’ve done is give a name to a sub-class of my point (not all observers see the same particles).
So should we expect ontology to be observer independent? If we should, what happens to particles?
And yet it proclaims the issue settled in favour of MWI and argues about how wrong science is for not settling on MWI, and so on. The connection—that this deficiency is why MWI can’t be settled on—sure does not come up here. Speaking of which, under any formal metric that he loves to allude to (e.g. Kolmogorov complexity), MWI as it is is not even a valid code, for (among other things) this reason.
It doesn’t matter how much simpler MWI is if we don’t even know that it isn’t too simple, merely guess that it might not be too simple.
edit: ohh, and lack of derivation of Born’s rules is not the kind of thing I meant by an argument in favour of non-realism. You can be non-realist with or without having derived Born’s rules. How QFT deals with relativistic issues, as outlined by e.g. Mitchell Porter, is quite a good reason to doubt the reality of what goes on mathematically in between input and output. There’s a view that (current QM) internals are an artefact of the set of mathematical tricks which we like / can use effectively. The view that internal mathematics is to the world as rods and cogs and gears inside a WW2 aiming computer are to a projectile flying through the air.
What one can learn is that the allegedly ‘settled’ and ‘solved’ is far from settled and solved and is a matter of opinion as of now. This also goes for qualia and the like; we haven’t reduced them to anything, merely asserted.
The relevance of Porter’s physics beliefs is that any reader who disagrees with Porter’s premises but agrees with the premises used in an article can gain little additional information about the quality of the article by learning that Porter is not convinced by it. ie. Whatever degree of authority Mitchell Porter’s status grants goes (approximately) in the direction of persuading the reader to adopt those different premises.
In this way mentioning Porter’s beliefs is distinctly different from mentioning the people that you now bring up:
What one can learn is that the allegedly ‘settled’ and ‘solved’ is far from settled and solved and is a matter of opinion as of now. This also goes for qualia and the like; we haven’t reduced them to anything, merely asserted.
It extends all the way up, competence wise—see Roger Penrose.
It’s fine to believe in MWI if that’s where your philosophy falls, its another thing entirely to argue that belief in MWI is independent of priors and a philosophical stance, and yet another to argue that people fail to be swayed by a very biased presentation of the issue which omits every single point that goes in favour of e.g. non-realism, because they are too irrational or too stupid.
No, that set of posts goes on at some length about how MWI has not yet provided a good derivation of the Born probabilities.
But I think it does not do justice to what a huge deal the Born probabilities are. The Born probabilities are the way we use quantum mechanics to make predictions, so saying “MWI has not yet provided a good derivation of the Born probabilities” is equivalent to “MWI does not yet make accurate predictions,” I’m not sure thats clear to people who read the sequences but don’t use quantum mechanics regularly.
Also, by omitting the wide variety of non-Copenhagen interpretations (consistent histories, transactional, Bohm, stochastic-modifications to Schroedinger,etc) the reader is lead to believe that the alternative to Copenhagen-collapse is many worlds, so they won’t use the absence of Born probabilities in many worlds to update towards one of the many non-Copenhagen alternatives.
Note that the Born probabilities really obviously have something to do with the unitarity of QM, while no single-world interpretation is going to have this be anything but a random contingent fact. The unitarity of QM means that integral-squared-modulus quantifies the “amount of causal potency” or “amount of causal fluid” or “amount of conserved real stuff” in a blob of the wavefunction. It would be like discovering that your probability of ending up in a computer corresponded to how large the computer was. You could imagine that God arbitrarily looked over the universe and destroyed all but one computer with probability proportional to its size, but this would be unlikely. It would be much more likely (under circumstances analogous to ours) to guess that the size of the computer had something to do with the amount of person in it.
The problems with Copenhagen are fundamentally one-world problems and they go along with any one-world theory. If I honestly believed that the only reason the QM sequence wasn’t convincing was that I didn’t go through every single one-world theory to refute them separately, I could try to write separate posts for RQM, Bohm, and so on, but I’m not convinced that this is the case. Any single-world theory needs either spooky action at a distance, or really awful amateur epistemology plus spooky action at a distance, and there’s just no reason to even hypothesize single-world theories in the first place.
(I’m not sure I have time to write the post about Relational Special Relativity in which length and time just aren’t the same for all observers and so we don’t have to suppose that Minkowskian spacetime is objectively real, and anyway the purpose of a theory is to tell us how long things are so there’s no point in a theory which doesn’t say that, and those silly Minkowskians can’t explain how much subjective time things seem to take except by waving their hands about how the brain contains some sort of hypothetical computer in which computing elements complete cycles in Minkowskian intervals, in contrast to the proper ether theory in which the amount of conscious time that passes clearly corresponds to the Lorentzian rule for how much time is real relative to a given vantage point...)
It is not worth writing separate posts for each interpretation. However, it is becoming increasingly apparent that, to the extent that the QM sequence matters at all, it may be worth writing a single post which outlines how your arguments apply to the other interpretations, i.e.:
A brief summary of and a link to your arguments in favor of locality then an explicit mention of how this leads to rejecting “Ensemble, Copenhagen, de Broglie–Bohm theory, von Neumann, Stochastic, Objective collapse and Transactional” interpretations and theories.
A brief summary of and a link to your arguments about realism in general and quantum realism in particular and why the wavefunction not being considered ‘real’ counts against “Ensemble, Copenhagen, Stochastic and Relational” interpretations.
Some outright mockery of the notion that observation and observers have some kind of intrinsic or causal role (Copenhagen, von Neumann and Relational).
Mention hidden variables and the complexity burden thereof (de Broglie–Bohm, Popper).
Having such a post as part of the sequence would make it trivial to dismiss claims like:
… as straw men. As it stands, however, this kind of claim (evidently, by its reception) persuades many readers, despite being significantly different from the reasoning that you intended to convey.
If it is worth you maintaining active endorsement of your QM posts, it may be worth ensuring both that it is somewhat difficult to actively misrepresent them and also that the meanings of your claims are as clear as they can conveniently be made. If there are Mihaly Baraszes out there whom you can recruit via the sanity of your physics epistemology, there are also quite possibly IMO gold medalists out there who could be turned off by seeing negative caricatures of your QM work so readily accepted, and then not bother looking further.
Not so. If we insist that our predictions need to be probabilities (take the Born probabilities as fundamental/necessary), then unitarity becomes equivalent to the statement that probabilities have to sum to 1, and we can then try to piece together what our update equation should look like. This is the approach taken by the ‘minimalist’/‘ensemble’ interpretation that Ballentine’s textbook champions: he uses the requirement that probabilities sum to 1 and some group theory (related to the Galilean symmetry group) to motivate the form of the Schroedinger equation. Edit to clarify: in some sense, it’s the reverse of many worlds: instead of taking the Schroedinger axioms as fundamental and attempting to derive Born, take the operator/probability axioms seriously and try to derive Schroedinger.
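A minimal sketch of that flavor of argument (this is not Ballentine’s actual Galilei-group derivation, just the probability-conservation step): assume some linear dynamics generated by an operator G, and demand that total probability stay equal to 1 for every state. Then

\[
  i\hbar\,\partial_t|\psi\rangle = G|\psi\rangle,
  \qquad
  \frac{d}{dt}\langle\psi|\psi\rangle
  = \frac{1}{i\hbar}\bigl(\langle\psi|G|\psi\rangle - \langle G\psi|\psi\rangle\bigr)
  = 0 \ \text{ for all } |\psi\rangle
  \quad\Longleftrightarrow\quad G = G^\dagger,
\]

so conservation of probability already forces a Hermitian generator and hence unitary, Schroedinger-type evolution; the remaining work is pinning down what that generator has to look like.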
I believe the same consideration could be said of the consistent histories approach, but I’d have to think about it before I’d fully commit.
Edit to add: Also, what about “non-spooky” action at a distance? Something like the transactional interpretation, where we take relativity seriously and use both the forward and backward Green’s function of the Dirac/Klein-Gordon equation? This integrates very nicely with Barbour’s timeless physics, properly derives the Born rule, has a single world, BUT requires some stochastic modifications to the Schroedinger equation.
What surprises me in the QM interpretational world is that the interaction process itself is clearly more than just a unitary evolution of some wave function, given how the number of particles is not conserved, requiring the full QFT approach, and probably more, yet (nearly?) all interpretations stop at the QM level, without any attempt at some sort of second quantization. Am I missing something here?
Mostly just that QFT is very difficult and not rigorously formulated. Haag’s theorem (and Wightman’s extension) tell us that an interacting quantum field theory can’t live in a nice Hilbert space, so there is a very real sense in which realistic QFTs only exist perturbatively. This makes interpretation something of a nightmare.
Basically, we ignore a bunch of messy complications (and potential inconsistency) just to shut up and calculate; no one wants to dig up all that ‘just’ to get to the messy business of interpretation.
Are you saying that people knowingly look where it’s light, instead of where they lost the keys?
More or less. If the axiomatic field theory guys ever make serious progress, expect a flurry of me-too type interpretation papers to immediately follow. Until then, good luck interpreting a theory that isn’t even fully formulated yet.
If you ever are in a bar after a particle phenomenology conference lets out, ask the general room what, exactly, a particle is, and what it means that the definition is NOT observer independent.
Oh, I know what a particle is. It’s a flat-space interaction-free limit of a field. But I see your point about observer dependence.
Then what is it, exactly, that particle detectors detect? Because it surely can’t be interaction-free limits of fields. Also, when we go to the Schroedinger equation with a potential, what are we modeling? It can’t be a particle; there is a non-perturbative potential! Also, for any charged particle, the IR divergence prevents the limit, so you have to be careful: ‘real’ electrons are a linear combination of ‘bare’ electrons and photons.
What I meant was that if you think of field excitations propagating “between interactions”, they can be identified with particles. And you are right, I was neglecting those pesky massless virtual photons in the IR limit. As for the SE with a potential, this is clearly a semi-classical setup; there are no external classical potentials, they all come as mean-field pictures of a reasonably stable many-particle interaction (a contradiction in terms though it might be). I think I pointed that out earlier in some thread.
The more I learn about the whole thing, the more I realize that all of Quantum Physics is basically a collection of miraculously working hacks, like narrow trails in a forest full of unknown deadly wildlife. This is markedly different from the classical physics, including relativity, where most of the territory is mapped, but there are still occasional dangers, most of which are clearly marked with orange cones.
Somebody: Virtual photons don’t actually exist: they’re just a bookkeeping device to help you do the maths.
Someone else, in a different context: Real photons don’t actually exist: each photon is emitted somewhere and absorbed somewhere else a possibly long but still finite amount of time later, making that a virtual photon. Real photons are just a mathematical construct approximating virtual photons that live long enough.
Me (in yet a different context, jokingly): [quotes the two people above] So, virtual photons don’t exist, and real photons don’t exist. Therefore, no photons exist at all.
This is less joking than you think: it’s more or less correct. If you change the final conclusion to “there isn’t a good definition of photon” you’d be there. It’s worse for QCD, where the theory has an SU(3) symmetry you pretty much have to sever in order to treat the theory perturbatively.
It really is. When you look at the experiments they’re performing, it’s kind of a miracle they get any kind of usable data at all. And explaining it to intelligent people is this near-infinite recursion of “But how do they know that experiment says what they say it does” going back more than a century with more than one strange loop.
Seriously, I’ve tried explaining just the proof that electrons exist, and in the end the best argument is that all the math we’ve built assuming their existence has really good predictive value. Which sounds like great evidence until you start confronting all the strange loops (the best experiments assume electromagnetic fields...) in that evidence, and I don’t even know how to -begin- untangling those. I’m convinced you could construct parallel physics with completely different mechanics (maybe the narrow trails aren’t as narrow as you’d think?) and get exactly the same results. And quantum field theory’s history of parallel physics doesn’t exactly help my paranoia there, even if they did eventually clean -most- of it up.
I fail to see the difference between this and “electrons exist”. But then my definition of existence only talks about models, anyway.
I am also not sure what strange loops you are referring to, feel free to give a couple of examples.
Most likely. It happens quite often (like Heisenberg’s matrix mechanics vs Schrodinger’s wave mechanics). Again, I have no problem with multiple models giving the same predictions, so I fail to see the source of your paranoia...
My beef with quantum physics is that there are many straightforward questions within its own framework it does not have answers to.
Imagine there’s a different, as-yet-unknown [ETA: simpler] model that doesn’t have electrons but makes the same experimental predictions as ours.
Then it’s equivalent to “electrons exist”. This is quite a common occurrence in physics, especially these days, holography and all. It also happens in condensed matter a lot, where quasi-particles like holes and phonons are a standard approximation. Do holes “exist” in a doped semiconductor? Certainly as much as electrons exist, unless you are a hard reductionist insisting that it makes sense to talk about simulating a Boeing 747 from quarks.
I meant for the as-yet-unknown model to be simpler than ours. (Do epicycles exist? After all, they do predict the motion of planets.)
One example is mentioned: the proofs of electrons assume the existence of (electrically charged) electromagnetic fields (Thomson’s experiment), while the proof of electromagnetic fields -as- electrically charged comes from electron scattering and similar experiments.
(I’m fine with “electrons exist as a phenomenon, even if they’re not the phenomenon we expect them to be”, but that tends to put people in an even more skeptical frame of mind than before I started “explaining”. I’ve generally given up such explanations; it appears I’m hopelessly bad at it.)
Another strange loop is in the quantization of energy (which requires electrical fields to be quantized, the evidence for which comes from the quantization of energy to begin with). Strange loops are -fine-, taken as a whole—taken as a whole the evidence can be pretty good—but when you’re stepping a skeptical person through it step by step, it’s hard to justify the next step when the previous step depends on it. The Big Bang Theory is another—the theory requires something to plug the gap in expected versus received background radiation, and the evidence for the plug (dark energy, for example) pretty much requires BBT to be true to be meaningful.
(Although it may be that a large part of the problem with the strange loops is that only the earliest experiments tend to be easily found in textbooks and on the Internet, and later less loop-prone experiments don’t get much attention.)
The existence of electromagnetic fields is just the existence of light. You can build up the whole theory of electricity and magnetism without mentioning electrons. Charge is just a definition that tells us that some types of matter attract some other types of matter.
Once you have electromagnetic fields understood well, you can ask questions like “well, what is this piece of metal made up of, what is this piece of plastic made up of”, etc, and you can measure charges and masses of the various constituents. Its not actually self-referential in the way you propose.
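For concreteness, the textbook version of that program is a Thomson-style charge-to-mass measurement (stated schematically; the historical details don’t matter here): crossed fields are tuned until the beam is undeflected, which fixes the velocity, and the magnetic deflection radius then gives the ratio,

\[
  qE = qvB \;\Rightarrow\; v = \frac{E}{B},
  \qquad
  qvB = \frac{mv^2}{r} \;\Rightarrow\; \frac{q}{m} = \frac{v}{Br} = \frac{E}{B^2 r},
\]

using only the already-established field theory plus kinematics, with no prior commitment about what the beam is made of.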
Light isn’t electrically charged.
You’re correct that you can build up the theory without electrons—exactly this happened. That history produced linearly stepwise theories isn’t the same as the evidence being linearly stepwise, however.
Light IS electromagnetic fields. The phrase “electrically charged electromagnetic fields” is a contradiction: the fields aren’t charged. Charges react to the field.
If the fields WERE charged in some way, the theory would be non-linear.
In this case there is no loop: you can develop the electromagnetic theory around light, and from there proceed to electrons if you like.
Light, in the theory you’re indirectly referencing, is a disturbance in the electromagnetic field, not the field itself.
The fields are charged, hence all the formulas involving them reflecting charge in one form or another (charge density is pretty common); the amplitude of the field is defined as the force exerted on positively charged matter in the field. (The reason for this definition is that most electromagnetic fields we interact with are negatively charged, or have negative charge density, on account of electrons being more easily manipulated than cations, protons, plasma, or antimatter.)
With some creative use of relativity you can render the charge irrelevant for the purposes of (a carefully chosen) calculation. This is not the same as the charge not existing, however.
You are using charge in some non-standard way. Charges are sources or sinks of the field.
An electromagnetic field does not sink or source more field; if it did, Maxwell’s equations would be non-linear. There is no such thing as a ‘negatively charged electromagnetic field’; there are just electromagnetic fields. Now, the electromagnetic field can have a negative (or positive) amplitude, but this is not the same as saying it’s negatively charged.
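For reference, the standard statement of the point: in Maxwell’s equations the sources are the charge and current densities of matter, never the fields themselves, and the fields act back on matter only through the Lorentz force,

\[
  \nabla\cdot\mathbf{E} = \rho/\varepsilon_0,\quad
  \nabla\cdot\mathbf{B} = 0,\quad
  \nabla\times\mathbf{E} = -\partial_t\mathbf{B},\quad
  \nabla\times\mathbf{B} = \mu_0\mathbf{J} + \mu_0\varepsilon_0\,\partial_t\mathbf{E},
  \qquad
  \mathbf{F} = q\,(\mathbf{E} + \mathbf{v}\times\mathbf{B}).
\]

Because the equations are linear in the fields, fields superpose and never act as sources for more field.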
Really? How does that work if, say, there’s a human in Schrodinger’s Box?
How does what work?
How does a model-based definition of existence interact with morality? Or paperclipping, for that matter?
Still not clear what you are having trouble with. I interpret “electrons exist” as “I have this model I call the electron which is better at predicting certain future inputs than any competing model”. Not sure what it has to do with morality or paperclipping.
How do you interpret “such-and-such an entity is required by such-and-such a theory, which seems to work, but turns out not to exist”? Do things wink in and out of existence as one theory replaces another?
I think shminux’s response is something like:
“Given a model that predicts accurately, what would you do differently if the objects described in the model do or don’t exist at some ontological level? If there is no difference, what are we worrying about?”
Why worry about prediction if it doesn’t relate to a real world?
I think you overread shminux. My attempted steelman of his position would be:
I happen to disagree with him because I think resolving that dispute has the potential to help us make better predictions in the future. But your comment appears to strawman shminux by asserting that he doesn’t believe in external reality at all, when he clearly believes there is some cause of the regularity that allows his models to make accurate predictions.
Saying “there is regularity” is different from saying “regularity occurs because quarks are real.”
If this steelman is correct, my support for shminux’s position has risen considerably, but so has my posterior belief that shminux and Eliezer actually have the same substantial beliefs once you get past the naming and modeling and wording differences.
Given shminux and Eliezer’s long-standing disagreement and both affirming that they have different beliefs, this makes it seem more likely that there’s either a fundamental miscommunication, that I misunderstand the implications of the steel-manning or of Eliezer’s descriptions of his beliefs, or that this steel-manning is incorrect. Which in turn, given that they are both much more experienced in explicit rationality and reduction than I am, makes the first of the above three less likely, and thus makes it back less-than-it-would-first-seem still-slightly-more-likely that they actually agree, but also more likely that this steelman strawmans shminux in some relevant way.
Argh. I think I might need to maintain a bayes belief network for this if I want to think about it any more than that.
The disagreement starts here:
I refuse to postulate an extra “thingy that determines my experimental results”. Occam’s razor and such.
So uhm. How do the experimental results, y’know, happen?
I think I understand everything else. Your position makes perfect sense. Except for that last non-postulate. Perhaps I’m just being obstinate, but there needs to be something to the pattern / regularity.
If I look at a set of models, a set of predictions, a set of experiments, and the corresponding set of experimental results, all as one big blob:
The models led to predictions—predictions about the experimental results, which are part of the model. The experiments were made according to the model that describes how to test those predictions (I might be wording this a bit confusingly?). But the experimental results… just “are”. They magically are like they are, for no reason, and they are ontologically basic in the sense that nothing at all ever determines them.
To me, it defies any reasonable logical description, and to my knowledge there does not exist a possible program that would generate this (i.e. if the program “randomly” generates the experimental results, then the randomness generator is the cause of the results, and thus is that thingy, and for any observable regularity the algorithm that causes that regularity in the resulting program output is the thingy). Since as far as I can tell there is no possible logical construct that could ever result in a causeless ontologically basic “experimental result set” that displays regularity and can be predicted and tested, I don’t see how it’s even possible to consistently form a system where there are even models and experiences.
In short, if there is nothing at all whatsoever from which the experimental results arise, not even just a mathematical formula that can be pointed at and called ‘reality’, then this doesn’t even seem like a well-formed mathematically-expressible program, let alone one that is Occam/Solomonoff “simpler” than a well-formed program that implicitly contains a formula for experimental results.
No matter what kind of program you create, no matter how cleverly you spin it or complexify or simplify or reduce it, there will always, by logical necessity, be some subset of it that you can point at and say “Look here! This is what ‘determines’ what experimental results I see and restricts the possible futures! Let’s call this thingy/subset/formula ‘reality’!”
I don’t see any possibility of getting around that requirement unless I assume magic, supernatural entities, wishful thinking, ontologically basic nonlogical entities, or worse.
As far as I can tell, those two paragraphs are pretty much Eliezer’s position on this, and he’s just putting that subset as an arbitrary variable, saying something like “Sure, we might not know said subset of the program or where exactly it is or what computational form it takes, but let’s just have a name for it anyway so we can talk about things more easily”.
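A toy illustration of the claim above (all names hypothetical and the ‘dynamics’ deliberately trivial): any program that generates the stream of experimental results contains some sub-part that determines them, and that sub-part is the thing being labeled ‘reality’ here.

# Toy world-program (hypothetical, not anyone's actual model): whatever rule
# sits inside hidden_dynamics is the sub-part one could point at and call "reality".

def hidden_dynamics(state):
    # The part that actually fixes what gets observed next; here a trivial
    # linear-congruential rule, but it could be any algorithm at all.
    return (state * 1103515245 + 12345) % (2**31)

def observe(state):
    # Whatever gets reported as an "experimental result" / input.
    return state % 2  # e.g. "detector clicked" vs. "didn't"

state = 42
results = []
for _ in range(5):
    state = hidden_dynamics(state)   # <- the "thingy that determines the results"
    results.append(observe(state))

print(results)  # any regularity in these outputs traces back to hidden_dynamics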
Are you trying to solve the question of origin? How did the external reality, that thing that determines the experimental results, in the realist model, y’know, happen?
I discount your musings about “ontological basis”, perhaps uncharitably. Instrumentally, all I care about is making accurate predictions, and the concept of external reality is sometimes useful in that sense, and sometimes it gets in the way.
Uh, not necessarily. I call this clever program, like everything else I think up, a model. If it happens to make accurate predictions I might even call it a good model. Often it is a meta-model, or a meta-meta-model, but a model nonetheless.
I fail to see a requirement you think I would have to get around. Just some less-than-useful logical construct.
I think it all just finally clicked. Strawman test (hopefully this is a good enough approximation):
You do imagine patterns and formulas, and your model does (or can) contain a (meta^x)-model that we could use and call “reality” and do whatever other realist-like shenanigans, and does describe the experimental results in some way that we could say “this formula, if it ‘really existed’ and the concept of existence is coherent at all, is the cause of my experimental results and the thingy that determines them”.
You just naturally exclude going from there to assuming that the meta-model is “real”, “exists”, or is itself what is external to the models and causes everything; something which for other people requires extra mental effort and does relate to the problem of origin.
Sure. What I was attempting to say is that if I look at your model of the world, and within this model find a sub-part that happens to be a meta-model of the world like that program, I could also point at a smaller sub-part of that meta-model and say “Within this meta-model that you have in your model of the world, this is the modeled ‘cause’ of your experimental results, they all happen according to this algorithm”.
So now, given that the above is at least a reasonable approximation of your beliefs, the hypotheses for one of us misinterpreting Eliezer have risen quite considerably.
Personally, I tend to mentally “simplify” my model by saying that the program in question “is” (reality), for purposes of not having to redefine and debate things with people. Sometimes, though, when I encounter people who think “quarks are really real out there and have a real position in a really existing space”, I just get utterly confused. Quarks are just useful models of the interactions in the world. What’s “actually” doing the quark-ing is irrelevant.
Natural language is so bad at metaphysics, IME =\
So your logic is that there is some fundamental subalgorithm somewhere deep down in the stack of models, and this is what you think makes sense to call external reality? I have at least two issues with this formulation. One is that every model supposedly contains this algorithm. Lots of high-level models are polymorphic: you can replace quarks with bits or wooden blocks and they still hold. The other is that, once you put this algorithm outside the model space, you are tempted to consider other similar algorithms which have no connection with the rest of the models whatsoever, like the mathematical universe. The term “exist” gains a meaning not present in its original instrumental Latin definition: to appear or to stand out. And then we are off the firm ground of what can be tested and into the pure unconnected ideas, like the “post-utopian” Eliezer so despises, yet apparently implicitly adopts. Or maybe I’m being uncharitable here. He never engaged me on this point.
I think both you and DaFranker might be going a bit too deep down the meta-model rabbit-hole. As far as I understand, when a scientist says “electrons exist”, he does not mean,
Rather, he’s saying something like,
As far as I understand, you would disagree with the second statement. But, if so, how do you explain the fact that our experimental results are so reliable and consistent? Is this just an ineffable mystery?
I don’t disagree with the second statement, I find parts of it meaningless or tautological. For example:
The part in bold is redundant. You would normally say “of Higgs decay” or something to that effect.
The part in bold is tautological. Making accurate predictions is the definition of not being wrong (within the domain of applicability). In that sense Newtonian physics is not wrong, it’s just not as accurate.
That is the instrumentalist definition. For realists, an accurate theory can still be wrong because it fails to correspond to reality, or posits non-existent entities. For instance, an epicyclic theory of the solar system can be made as accurate as you like.
I meant to make a more further-reaching statement than that. If we believe that our model approximates that (postulated) thing that is causing our experiments to come out a certain way, then we can use this model to devise novel experiments, which are seemingly unrelated to the experiments we are doing now; and we could expect these novel experiments to come out the way we expected, at least on occasion.
For example, we could say, “I have observed this dot of light moving across the sky in a certain way. According to my model, this means that if I were to point my telescope at some other part of sky, we would find a much dimmer dot there, moving in a specific yet different way”.
This is a statement that can only be made if you believe that different patches of the sky are connected, somehow, and if you have a model that describes the entire sky, even the pieces that you haven’t looked at yet.
If different patches of the sky are completely unrelated to each other, the likelihood of you observing what you’d expect is virtually zero, because there are too many possible observations (an infinite number of them, in fact), all equally likely. I would argue that the history of science so far contradicts this assumption of total independence.
This may be off-topic, but I would agree with this statement. Similarly, the statement “the Earth is flat” is not, strictly speaking, wrong. It works perfectly well if you’re trying to lob rocks over a castle wall. Its inaccuracy is too great, however, to launch satellites into orbit.
Sort-of.
I’m saying that there’s a sufficiently fuzzy and inaccurate polymorphic model (or sets of models, or a meta-description of the requirements and properties for relevant models) of “the universe” that could be created and pointed at as “the laws”, which, if known fully and accurately, could be “computed” or simulated or something, and computing this algorithm perfectly would in principle let us predict all of the experimental results.
If this theoretical, not-perfectly-known sub-algorithm is a perfect description of all the experimental results ever, then I’m perfectly willing to slap the labels “fundamental” and “reality” on it and call it a day, even though I don’t see why this algorithm would be more “fundamentally existing” than the exact same algorithm with all parameters multiplied by two, or some other algorithm that produces the same experimental results in all possible cases.
The only reason I refer to it in the singular—“the sub-algorithm”—is because I suspect we’ll eventually have a way to write and express as “an algorithm” the whole space/set/field of possible algorithms that could perfectly predict inputs, if we knew the exact set that those are in. I’m led to believe it’s probably impossible to find this exact set.
I find this approach very limiting. There is no indication that you can construct anything like that algorithm. Yet by postulating its existence (ahem), you are forced into a mode of thinking where “there is this thing called reality with some fundamental laws which we can hopefully learn some day”. As opposed to “we can keep refining our models and explain more and more inputs, and discover new and previously unknown inputs and explain them too, and predict more and so on”. Without ever worrying if some day there is nothing more to discover, because we finally found the holy grail, the ultimate laws of reality. I don’t mind if it’s turtles all the way down.
In fact, in the spirit of QM and as often described in SF/F stories, the mere act of discovery may actually change the “laws”, if you are not careful. Or maybe we can some day do it intentionally, construct our own stack of turtles. Oh, the possibilities! And all it takes is to let go of one outdated idea, which is, like Aristotle’s impetus, ripe for discarding.
The claim that reality may be ultimately unknowable or non-algorithmic is different to the claim you have made elsewhere, that there is no reality.
I’m not sure it’s as different as all that from shminux’s perspective.
By way of analogy, I know a lot of people who reject the linguistic habit of treating “atheism” as referring to a positive belief in the absence of a deity, and “agnosticism” as referring to the absence of a positive belief in the presence of a deity. They argue that no, both positions are atheist; in the absence of a positive belief in the presence of a deity, one does not believe in a deity, which is the defining characteristic of the set of atheist positions. (Agnosticism, on this view, is the position that the existence of a deity cannot be known, not merely the observation that one does not currently know it. And, as above, on this view that means agnosticism implies atheism.)
If I substitute (reality, non-realism, the claim that reality is unknowable) for (deity, atheism, agnosticism) I get the assertion that the claim that reality is unknowable is a non-realist position. (Which is not to say that it’s specifically an instrumentalist position, but we’re not currently concerned with choosing among different non-realist positions.)
All of that said, none of it addresses the question which has previously been raised, which is how instrumentalism accounts for the at-least-apparently-non-accidental relationship between past inputs, actions, models, and future inputs. That relationship still strikes me as strong evidence for a realist position.
I can’t see much evidence that the people who construe atheism and agnosticism in the way you describe are actually correct. I agree that the no-reality position and the unknowable-reality position could both be considered anti-realist, but they are still substantively different. Deriving no-reality from unknowable-reality always seems like an error to me, but maybe someone has an impressive defense of it.
Well, I certainly don’t want to get into a dispute about what terms like “atheism”, “agnosticism”, “anti-realism”, etc. ought to mean. All I’ll say about that is if the words aren’t being used and interpreted in consistent ways, then using them does not facilitate communication. If the goal is communication, then it’s best not to use those words.
Leaving language aside, I accept that the difference between “there is no reality” and “whether there is a reality is systematically unknowable” is an important difference to you, and I agree that deriving the former from the latter is tricky.
I’m pretty sure it’s not an important difference to shminux. It certainly isn’t an important difference to me… I can’t imagine why I would ever care about which of those two statements is true if at least one of them is.
I don’t see why not.
Or settle their correct meanings using a dictionary, or something.
If shminux is using arguments for Unknowable Reality as arguments for No Reality, then shminux’s arguments are invalid whatever shminux cares about.
One seems a lot more far-fetched than the other to me.
If all goes well in a definitional dispute, at the end of it we have agreed on what meaning to assign to a word. I don’t really care; I’m usually perfectly happy to assign to it whatever meaning my interlocutor does. In most cases, there was some other more interesting question about the world I was trying to get at, which got derailed by a different discussion about the meanings of words. In most of the remaining cases, the discussion about the meanings of words was less valuable to me than silence would have been.
That’s not to say other people need to share my values, though; if you want to join definitional disputes (by referencing a dictionary or something) go right ahead. I’m just opting out.
I don’t think he is, though I could be wrong about that.
Pretty sure you mixed up “we can’t know the details of reality” with “we can’t know if reality exists”.
That would be interesting, if true.
I have no coherent idea how you conclude that from what I said, though.
Can you unpack your reasoning a little?
Sure.
Agnosticism = believing we can’t know if God exists
Atheism = believing God does not exist
Theism = believing God exists
turtles-all-the-way-down-ism = believing we can’t know what reality is (can’t reach the bottom turtle)
instrumentalism/anti-realism = believing reality does not exist
realism = believing reality exists
Thus anti-realism and realism map to atheism and theism, but agnosticism doesn’t map to infinite-turtle-ism, because it says we can’t know if God exists, not what God is.
Or believing that it’s not a meaningful or interesting question to ask
That’s quite an uncharitable conflation. Antirealism is believing that reality does not exist. Instrumentalism is believing that reality is a sometimes useful assumption.
Those would be ignosticism and apatheism respectively.
Yes, yes, we all know your idiosyncratic definition of “exist”, I was using the standard meaning because I was talking to a realist.
Yeah. The issue here, I gather, has a lot to do with domain-specific knowledge—you’re a physicist, so you have a general idea of how physics does not distinguish between, for example, 0 and two worlds of opposite phases which cancel out from our perspective. Which is way different from the naive idea of some sort of computer simulation, where of course two simulations with opposite signs being summed are a very different thing ‘from the inside’ from plain 0. If we start attributing reality to components of the sum in Feynman’s path integral… that’s going to get weird.
You realize that, assuming Feynman’s path integral makes accurate predictions, shminux will attribute to it as much reality as, say, the moon, or your inner experience.
The issue is with all the parts of it, which include your great grandfather’s ghost, twice, with opposite phases, looking over your shoulder.
Since I am not a quantum physicist, I can’t really respond to your objections, and in any case I don’t subscribe to shminux’s peculiar philosophy.
Thanks for the clarification, it helps.
An agnostic with respect to God (which is what “agnostic” has come to mean by default) would say both that we can’t know if God exists, and also that we can’t know the nature of God. So I think the analogy still holds.
Right. But! An agnostic with respect to the details of reality—an infinite-turtle-ist—need not be an agnostic with respect to reality, even if an agnostic with respect to reality is also an agnostic with respect to its details (although I’m not sure if that follows in any case).
(shrug) Sure. So my analogy only holds between agnostics-about-God (who question the knowability of both the existence and nature of God) and agnostics-about-reality (who question the knowability of both the existence and nature of reality).
As you say, there may well be other people out there, for example those who question the knowability of the details, but not of the existence, of reality. (For a sufficiently broad understanding of “the details” I suspect I’m one of those people, as is almost everyone I know.) I wasn’t talking about them, but I don’t dispute their existence.
Absolutely, but that’s not what shminux and PrawnOfFate were talking about, is it?
I have to admit, this has gotten rarefied enough that I’ve lost track both of your point and my own.
So, yeah, maybe I’m confusing knowing-X-exists with knowing-details-of-X for various Xes, or maybe I’ve tried to respond to a question about (one, the other, just one, both) with an answer about (the other, one, both, just one). I no longer have any clear notion, either of which is the case or why it should matter, and I recommend we let this particular strand of discourse die unless you’re willing to summarize it in its entirety for my benefit.
I predict that these discussions, even among smart, rational people will go nowhere conclusive until we have a proper theory of self-aware decision making, because that’s what this all hinges on. All the various positions people are taking in this are just packaging up the same underlying confusion, which is how not to go off the rails once your model includes yourself.
Not that I’m paying close attention to this particular thread.
This is not at all important to your point, but the impetus theory of motion was developed by John Philoponus in the 6th century as an attack on Aristotle’s own theory of motion. It was part of a broadly Aristotelian programme, but it’s not something Aristotle developed. Aristotle himself has only traces of a dynamical theory (the theory being attacked by Philoponus is sort of an off-hand remark), and he concerned himself mostly with what we would probably call kinematics. The Aristotelian principle carried through in Philoponus’ theory is the principle that motion requires the simultaneous action of a mover, which is false with respect to motion but true with respect to acceleration. In fact, if you replace ‘velocity’ with ‘acceleration’ in a certain passage of the Physics, you get F=ma. So we didn’t exactly discard Aristotle’s (or Philoponus’) theory, important precursors as they were to the idea of inertia.
That kind of replacement seems like a serious type error—velocity is not really anything like acceleration. Like saying that if you replace P with zero, you can prove P = NP.
That it’s a type error is clear enough (I don’t know if it’s a serious one under an atmosphere). But what follows from that?
Hm.
On your account, “explaining an input” involves having a most-accurate-model (aka “real world”) which alters in response to that input in some fashion that makes the model even more accurate than it was (that is, better able to predict future inputs). Yes?
If so… does your account then not allow for entering a state where it is no longer possible to improve the predictive power of our most accurate model, such that there is no further input-explanation to be done? If it does… how is that any less limiting than the realist’s view allowing for entering a state where there is no further understanding of reality to be done?
I mean, I recognize that it’s possible to have an instrumentalist account in which no such limitative result applies, just as it’s possible to have a realist account in which no such limitative result applies. But you seem to be saying that there’s something systematically different between instrumentalist and realist accounts here, and I don’t quite see why that should be.
You make a reference a little later on to “mental blocks” that realism makes more likely, and I guess that’s another reference to the same thing, but I don’t quite see what it is that that mental block is blocking, or why an instrumentalist is not subject to equivalent mental blocks.
Does the question make sense? Is it something you can further clarify?
Maybe you are reading too much into what I said. If your view is that what we try to understand is this external reality, it’s quite a small step to assuming that some day it will be understood in its entirety. This sentiment has been expressed over and over by very smart people, like the proverbial Lord Kelvin’s warning that “physics is almost done”, or Laplacian determinism. If you don’t assume that the road you travel leads to a certain destination, you can still decide that there are no more places to go as your last trail disappears, but it is by no means an obvious conclusion.
Well, OK.
I certainly agree that this assumption has been made by realists historically.
And while I’m not exactly sure it’s a bad thing, I’m willing to treat it as one for the sake of discussion.
That said… I still don’t quite get what the systematic value-difference is.
I mean, if my view is instead that what we try to achieve is maximal model accuracy, with no reference to this external reality… then what? Is it somehow a longer step from there to assuming that some day we’ll achieve a perfectly accurate model?
If so, why is that?
If not, then what have I gained by switching from the goal of “understand external reality in its entirety” to the goal of “achieve a perfectly accurate model”?
If I’m following you at all, it seems you’re arguing in favor of a non-idealist position much more than a non-realist position. That is, if it’s a mistake to “assume that the road you travel leads to a certain destination”, it follows that I should detach from “ultimate”-type goals more generally, whether it’s a realist’s goal of ultimately understanding external reality, or an instrumentalist’s goal of ultimately achieving maximal model accuracy, or some other ontology’s goal of ultimately doing something else.
Have I missed a turn somewhere?
Or is instrumentalism somehow better suited to discouraging me from idealism than realism is?
Or something else?
Look, I don’t know if I can add much more. What started my deconversion from realism is watching smart people argue about interpretations of QM, Boltzmann brains and other untestable ontologies. After a while these debates started to seem silly to me, so I had to figure out why. Additionally, I wanted to distill the minimum ontology, something which needn’t be a subject of pointless argument, but only of experimental checking. Eventually I decided that external reality is just an assumption, like any other. This seems to work for me, and saves me a lot of worrying about untestables.
Most physicists follow this pragmatic approach, except for a few tenured dudes who can afford to speculate on any topic they like. Max Tegmark and Don Page are more or less famous examples. But few physicists worry about formalizing their ontology of pragmatism. They follow the standard meaning of the terms exist, real, true, etc., and when these terms lead to untestable speculations, their pragmatism takes over and they lose interest, except maybe for some idle chat over a beer. A fine example of compartmentalization.
I’ve been trying to decompartmentalize and see where the pragmatic approach leads, and my interpretation of instrumentalism is the current outcome. It lets me spot early on many statements whose implications a pragmatist would eventually ignore, which is quite satisfying. I am not saying that I have finally worked out the One True Ontology, or that I have resolved every issue to my satisfaction, but it’s the best I’ve been able to cobble together. But I am not willing to trade it for a highly compartmentalized version of realism, or the Eliezerish version of many untestable worlds and timeless this or that. YMMV.
(shrug) OK, I’m content to leave this here, then. Thanks for your time.
So...what is the point of caring about prediction?
But the “turtles all the way down” or the method in which the act of discovery changes the law...
Why can’t that also be modeled? Even if the model is self-modifying meta-recursive turtle-stack infinite “nonsense”, there probably exists some way to describe it, model it, understand it, or at least point towards it.
This very “pointing towards it” is what I’m doing right now. I postulate that no matter the form it takes, even if it seems logically nonsensical, there’s a model which can explain the results proportionally to how much we understand about it (we may end up never being able to perfectly understand it).
Currently, the best fuzzy picture of that model, by my pinpointing of what-I’m-referring-to, is precisely what you’ve just described:
That’s what I’m pointing at. I don’t care either how many turtle stacks or infinities or regresses or recursions or polymorphic interfaces or variables or volatilities there are. The hypothetical description that a perfect agent with perfect information looking at our models and inputs from the outside would give of the program that we are part of is the “algorithm”.
Maybe the Turing tape never halts, and just keeps computing on and on more new “laws of physics” as we research on and on and do more exotic things, such that there are no “true final ultimate laws”. Of course that could happen. I have no solid evidence either way, so why would I restrict my thinking to the hypothesis that there are? I like flexibility in options like that.
So yeah, my definition of that formula is pretty much self-referential and perhaps not always coherently explained. It’s a bit like CEV in that regard, “whatever we would if …” and so on.
Once all reduced away, all I’m really postulating is the continuing ability of possible agents who make models and analyze their own models to point at and frame and describe mathematically and meta-modelize the patterns of experimental results, given sufficient intelligence and ability to model things. It’s not nearly as powerfully predictive or groundbreaking as I might have made it sound in earlier comments.
For more comparisons, it’s a bit like when I say “my utility function”. Clearly, there might not be a final utility function in my brain, it might be circular, or it might regress infinitely, or be infinitely self-modifying and self-referential, but by golly when I say that my best approximation of my utility function values having food much more highly than starving, I’m definitely pointing at and approximating something in there in that mess of patterns, even if I might not know exactly what I’m pointing at.
That “something” is my “true utility function”, even if it would have to be defined with fuzzy self-recursive meta-games and timeless self-determinance or some other crazy shenanigans.
So I guess that’s about also what I refer to when I say “reality”.
I’m not really disagreeing. I’m just pointing out that, as you list progressively more and more speculative models, ever more loosely connected to experiment, the idea of some objective reality becomes progressively less useful, and the questions like “but what if the Boltzmann Brains/mathematical universe/many worlds/super-mega crossover/post-utopian colonial alienation is real?” become progressively more nonsensical.
Yet people forget that and seriously discuss questions like that, effectively counting angels on the head of a pin. And, on the other hand, they get this mental block due to the idea of some static objective reality out there, limiting their model space.
These two fallacies are what started me on my way from realism to pragmatism/instrumentalism in the first place.
Useful for what? Prediction? But realists aren’t using these models to answer the “what input should I expect” question; they are answering other questions, like “what is real” and “what should we value”.
And “nothing” is an answer to “what is real”. What does instrumentalism predict?
If it’s really better or more “true” on some level, I suppose you might predict a superintelligence would self-modify into an anti-realist? Seems unlikely from my realist perspective, at least, so I’d have to update in favour of something.
But if that’s not a predictive level, then instrumentalism is inconsistent. It is saying that all other non-predictive theories should be rejected for being non-predictive, but that it is itself somehow an exception. This is of course parallel to the flaw in Logical Positivism.
Well, I suppose all it would need to persuade is people who don’t already believe it …
More seriously, you’ll have to ask shminux, because I, as a realist, anticipate this test failing, so naturally I can’t explain why it would succeed.
Huh? I don’t see why the ability to convince people who don’t care about consistency is something that should sway me.
If I had such a persuasive argument, naturally it would already have persuaded me, but my point is that it doesn’t need to persuade people who already agree with it—just the rest of us.
And once you’ve self-modified into an instrumentalist, I guess there are other arguments that will now persuade you—for example, that this hypothetical underlying layer of “reality” has no extra predictive power (at least, I think that’s what shminux finds persuasive.)
I’m not sure. I have seen comments that contradict that interpretation. If shminux were the kind of irrealist who believes in an external world of an unknown nature, shminux would have no reason not to call it reality. But shminux insists reality is our current best model.
ETA:
another example
“I refuse to postulate an extra “thingy that determines my experimental results”.
Thank you for your steelmanning (well, your second or third one, people keep reading what I write extremely uncharitably). I really appreciate it!
Most certainly. I call these experiences inputs.
Don’t, just call it inputs.
No, reality is a (meta-)model which basically states that these inputs are somewhat predictable, and little else.
The question is meaningless if you don’t postulate territory.
Experts in the field provided prescriptions, called laws, which let you predict some future inputs, with varying success.
I see the cited link as research in cognitive science (what is thinkable and in what situations), not any statement about some mythical territory.
But understanding how and why people think what they think is likely very helpful in constructing models which make better predictions.
I’d love to be convinced of that… But first I’d have to be convinced that the dispute is meaningful to begin with.
Indeed. Mainly because I don’t use the term “real”, at least not in the same way realists do.
Again, thank you for being charitable. That’s a first from someone who disagrees.
I’m not sure I understand your point of view, given these two statements. If experts in the field are able to predict future inputs with a reasonably high degree of certainty, and if we agree that these inputs are external to our minds, is it not reasonable to conclude that such experts have built an approximate mental model of at least a small portion of whatever it is that causes the inputs? Or are you asserting that they just got lucky?
Sorry for the newbie question, I’m late to this discussion and am probably missing a lot of context...
I’m making similar queries here, since this intrigues me and I was similarly confused by the non-postulate. Maybe between all the cross-interrogations we’ll finally understand what shminux is saying ;)
why assume that something does, unless it’s an accurate assumption (i.e. testable, tested and confirmed)?
Because there are stable relationships between outputs (actions) and inputs. We all test that hypothesis multiple times a day.
The inputs appear to be highly repeatable and consistent with each other. This could be purely due to chance, of course, but IMO this is less likely than the inputs being interdependent in some way.
Some are and some aren’t. When a certain subset of them is, I am happy to use a model that accurately predicts what happens next. If there is a choice, then the most accurate and simplest model. However, I am against extrapolating this approach into “there is this one universal thing that determines all inputs ever”.
What is the alternative, though? Over time, the trend in science has been to unify different groups of inputs; for example, electricity and magnetism were considered to be entirely separate phenomena at one point. So were chemistry and biology, or electricity and heat, etc. This happens all the time on smaller scales, as well; and every time it does, is it not logical to update your posterior probability of that “one universal thing” being out there to be a little bit higher?
And besides, what is more likely: that 10 different groups of inputs are consistent and repeatable due to N reasons, or due to a single reason?
Intuitively, to me at least, it seems simpler to assume that everything has a cause, including the regularity of experimental results, and that a mathematical algorithm being computed with the outputs resulting in what we perceive as inputs / experimental results is simpler as a cause than randomness, magic, or nothingness.
See also my other reply to your other reply (heh). I think I’m piecing together your description of things now. I find your consistency with it rather admirable (and very epistemologically hygienic, I might add).
Experts in the field have said things that were very philosophically naive. The steel-manning of those types of statements is isomorphic to physical realism.
And you are using territory in a weird way. If I understood the purpose of your usage, I might be able to understand it better. In my usage, “territory” seems roughly like the thing you call “inputs + implication of some regularity in inputs.” That’s how I’ve interpreted Yudkowsky’s use of the word as well. Honestly, my perception was that the proper understanding of territory was not exactly central to your dispute with him.
In short, Yudkowsky says the map “corresponds” to the territory in sufficiently fine grain that sentences like “atoms exist” are meaningful. You seem to think that the metaphor of the map is hopelessly misleading. I’m somewhere in between, in that I think the map metaphor is helpful, but the map is not fine-grained enough to think “atoms exist” is a meaningful sentence.
I think this philosophy-of-science entry in the SEP is helpful, if only by defining the terms of the debate. I mostly like Feyerabend’s thinking, Yudkowsky and most of this community do not, and your position seems to be trying to avoid the debate. Which you could do more easily if you would recognize what we mean with our words.
For outside observers:
No, I haven’t defined map or corresponds. Also, meaningful != true. Newtonian physics is meaningful and false.
Well, almost the same thing. To me regularity is the first (well-tested) meta-model, not a separate assumption.
I’m not so sure, see my reply to DaFranker.
I think it is absolutely central. Once you postulate external reality, a whole lot of previously meaningless questions become meaningful, including whether something “exists”, like ideas, numbers, Tegmark’s level 4, many untestable worlds and so on.
Only marginally. My feeling is that this apparent incommensurability is due to people not realizing that their disagreements are due to some deeply buried implicit assumptions and the lack of desire to find these assumptions and discuss them.
Not to mention question like “If we send these colonists over the horizon, does that kill them or not?”
Which brings me to a question: I can never quite figure out how your instrumentalism interacts with preferences. Without assuming the existence of something you care about, on what basis do you make decisions?
In other words, instrumentalism is a fine epistemic position, but how to actually build an instrumental agent with good consequences is unclear. Doesn’t wireheading become an issue?
If I’m accidentally assuming something that is confusing me, please point it out.
This question is equally meaningful in both cases, and equally answerable. And the answer happens to be the same, too.
Your argument reminds me of “Obviously morality comes from God, if you don’t believe in God, what’s to stop you from killing people if you can get away with it?” It is probably an uncharitable reading of it, though.
The “What I care about” thingie is currently one of those inputs. Like, what compels me to reply to your comment? It can partly be explained by the existing models in psychology, sociology and other natural sciences, and in part is still a mystery. Some day we will hopefully be able to analyze and simulate mind and brain better, and explain how this desire arises, and why one shminux decides to reply to and not ignore your comment. Maybe I feel good when smart people publicly agree with me. Maybe I’m satisfying some other preference I’m not aware of.
It’s not an argument; it’s an honest question. I’m sympathetic to instrumentalism, I just want to know how you frame the whole preferences issue, because I can’t figure out how to do it. It probably is like the God is Morality thing, but I can’t just accidentally find my way out of such a pickle without some help.
I frame it as “here’s all these possible worlds, some being better than others, and only one being ‘real’, and then here’s this evidence I see, which discriminates which possible worlds are probable, and here’s the things I can do that further affect which is the real world, and I want to steer towards the good ones.” As you know, this makes a lot of assumptions and is based pretty directly on the fact that that’s how human imagination works.
If there is a better way to do it, which you seem to think that there is, I’m interested. I don’t understand your answer above, either.
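My framing above, written out as a toy calculation—the worlds, probabilities and values are all made up, and it’s only meant to pin down the structure, not to claim this is how anyone actually computes it:

```python
# Weigh each possible world by how probable it is given an action,
# then pick the action that steers toward the worlds we prefer.
worlds = ["good", "bad"]
value = {"good": 1.0, "bad": 0.0}

# P(world | action): how each available action shifts which world ends up "real".
p_world = {
    "act":        {"good": 0.7, "bad": 0.3},
    "do_nothing": {"good": 0.4, "bad": 0.6},
}

def expected_value(action):
    return sum(p_world[action][w] * value[w] for w in worlds)

print(max(p_world, key=expected_value))  # "act"
```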
Well, I’ll give it another go, despite someone diligently downvoting all my related comments.
Same here, with a marginally different dictionary. Although you are getting close to a point I’ve been waiting for people to bring up for some time now.
So, what are those possible worlds but models? And isn’t the “real world” just the most accurate model? Properly modeling your actions lets you affect the preferred “world” model’s accuracy, and such. The remaining issue is whether the definition of “good” or “preferred” depends on realist vs instrumentalist outlook, and I don’t see how. Maybe you can clarify.
Hrm.
First, let me apologize pre-emptively if I’m retreading old ground, I haven’t carefully read this whole discussion. Feel free to tell me to go reread the damned thread if I’m doing so. That said… my understanding of your account of existence is something like the following:
A model is a mental construct used (among other things) to map experiences to anticipated experiences. It may do other things along the way, such as represent propositions as beliefs, but it needn’t. Similarly, a model may include various hypothesized entities that represent certain consistent patterns of experience, such as this keyboard I’m typing on, my experiences of which consistently correlate with my experiences of text appearing on my monitor, responses to my text later appearing on my monitor, etc.
On your account, all it means to say “my keyboard exists” is that my experience consistently demonstrates patterns of that sort, and consequently I’m confident of the relevant predictions made by the set of models (M1) that have in the past predicted patterns of that sort, not-so-confident of relevant predictions made by the set of models (M2) that predict contradictory patterns, etc. etc. etc.
We can also say that M1 all share a common property K that allows such predictions. In common language, we are accustomed to referring to K as an “object” which “exists” (specifically, we refer to K as “my keyboard”), which is as good a way of talking as any, though sloppy in the way of all natural language.
We can consequently say that M1 all agree on the existence of K, though of course that may well elide over many important differences in the ways that various models in M1 instantiate K.
We can also say that M1 models are more “accurate” than M2 models with respect to those patterns of experience that led us to talk about K in the first place. That is, M1 models predict relevant experience more reliably/precisely/whatever.
And in this way we can gradually converge on a single model (MR1), which includes various objects, and which is more accurate than all the other models we’re aware of. We can call MR1 “the real world,” by which we mean the most accurate model.
Of course, this doesn’t preclude uncovering a new model MR2 tomorrow which is even more accurate, at which point we would call MR2 “the real world”. And MR2 might represent K in a completely different way, such that the real world would now, while still containing the existence of my keyboard, contain it in a completely different way. For example, MR1 might represent K as a collection of atoms, and MR2 might represent K as a set of parameters in a configuration space, and when I transition from MR1 to MR2 the real world goes from my keyboard being a collection of atoms to my keyboard being a set of parameters in a configuration space.
Similarly, it doesn’t preclude our experiences starting to systematically change such that the predictions made by MR1 are no longer reliable, in which case MR1 stops being the most accurate model, and some other model (MR3) is the most accurate model, at which point we would call MR3 “the real world”. For example, MR3 might not contain K at all, and I would suddenly “realize” that there never was a keyboard.
All of which is fine, but the difficulty arises when after identifying MR1 as the real world we make the error of reifying MRn, projecting its patterns onto some kind of presumed “reality” R to which we attribute a kind of pseudo-existence independent of all models. Then we misinterpret the accuracy of a model as referring, not to how well it predicts future experience, but to how well it corresponds to R.
Of course, none of this precludes being mistaken about the real world… that is, I might think that MR1 is the real world, when in fact I just haven’t fully evaluated the predictive value of the various models I’m aware of, and if I were to perform such an evaluation I’d realize that no, actually, MR4 is the real world. And, knowing this, I might have various degrees of confidence in various models, which I can describe as “possible worlds.”
And I might have preferences as to which of those worlds is real. For example, MP1 and MP2 might both be possible worlds, and I am happier in MP1 than MP2, so I prefer MP1 be the real world. Similarly, I might prefer MP1 to MP2 for various other reasons other than happiness.
Which, again, is fine, but again we can make the reification error by assigning to R various attributes which correspond, not only to the real world (that is, the most accurate model), but to the various possible worlds MRx..y. But this isn’t a novel error, it’s just the extension of the original error of reification of the real world onto possible worlds.
That said, talking about it gets extra-confusing now, because there’s now several different mistaken ideas about reality floating around… the original “naive realist” mistake of positing R that corresponds to MR, the “multiverse” mistake of positing R that corresponds to MRx..y, etc. When I say to a naive realist that treating R as something that exists outside of a model is just an error, for example, the naive realist might misunderstand me as trying to say something about the multiverse and the relationships between things that “exist in the world” (outside of a model) and “exist in possible worlds” (outside of a model), which in fact has nothing at all to do with my point, which is that the whole idea of existence outside of a model is confused in the first place.
Have I understood your position?
As was the case once or twice before, you have explained what I meant better than I did in my earlier posts. Maybe you should teach your steelmanning skills, or make a post out of it.
The reification error you describe is indeed one of the fallacies a realist is prone to. Pretty benign initially, it eventually grows cancerously into the multitude of MRs whose accuracy is undefined, either by definition (QM interpretations) or through untestable ontologies, like “everything imaginable exists”. Promoting any M->R, or a certain set {MP}->R, seems forever meaningful once you have fallen for it.
The unaddressed issue is the means of actualizing a specific model (that is, making it the most accurate). After all, if all you manipulate is models, how do you affect your future experiences?
I’ve thought about this, but on consideration the only part of it I understand explicitly enough to “teach” is Miller’s Law (the first one), and there’s really not much more to say about it than quoting it and then waiting for people to object. Which most people do, because approaching conversations that way seems to defeat the whole purpose of conversation for most people (convincing other people they’re wrong). My goal in discussions is instead usually to confirm that I understand what they believe in the first place. (Often, once I achieve that, I become convinced that they’re wrong… but rarely do I feel it useful to tell them so.)
The rest of it is just skill at articulating positions with care and precision, and exerting the effort to do so. A lot of people around here are already very good at that, some of them better than me.
Yes. I’m not sure what to say about that on your account, and that was in fact where I was going to go next.
Actually, more generally, I’m not sure what distinguishes experiences we have from those we don’t have in the first place, on your account, even leaving aside how one can alter future experiences.
After all, we’ve said that models map experiences to anticipated experiences, and that models can be compared based on how reliably they do that, so that suggests that the experiences themselves aren’t properties of the individual models (though they can of course be represented by properties of models). But if they aren’t properties of models, well, what are they? On your account, it seems to follow that experiences don’t exist at all, and there simply is no distinction between experiences we have and those we don’t have.
I assume you reject that conclusion, but I’m not sure how. On a naive realist’s view, rejecting this is easy: reality constrains experiences, and if I want to affect future experiences I affect reality. Accurate models are useful for affecting future experiences in specific intentional ways, but not necessary for affecting reality more generally… indeed, systems incapable of constructing models at all are still capable of affecting reality. (For example, a supernova can destroy a planet.)
(On a multiverse realist’s view, this is significantly more complicated, but it seems to ultimately boil down to something similar, where reality constrains experiences and if I want to affect the measure of future experiences, I affect reality.)
Another unaddressed issue derives from your wording: “how do you affect your future experiences?” I may well ask whether there’s anything else I might prefer to affect other than my future experiences (for example, the contents of models, or the future experiences of other agents). But I suspect that’s roughly the same problem for an instrumentalist as it is for a realist… that is, the arguments for and against solipsism, hedonism, etc. are roughly the same, just couched in slightly different forms.
Somewhere way upstream I said that I postulate experiences (I called them inputs), so they “exist” in this sense. We certainly don’t experience “everything”, so that’s how you tell “between experiences we have and those we don’t have”. I did not postulate, however, that they have an invisible source called reality, pitfalls of assuming which we just discussed. Having written this, I suspect that this is an uncharitable interpretation of your point, i.e. that you mean something else and I’m failing to Millerize it.
OK.
So “existence” properly refers to a property of subsets of models (e.g., “my keyboard exists” asserts that the models in M1 contain K), as discussed earlier, and “existence” also properly refers to a property of inputs (e.g., “my experience of my keyboard sitting on my desk exists” and “my experience of my keyboard dancing the Macarena doesn’t exist” are both coherent, if perhaps puzzling, things to say), as discussed here.
Yes?
Which is not necessarily to say that “existence” refers to the same property of subsets of models and of inputs. It might, it might not, we haven’t yet encountered grounds to say one way or the other.
Yes?
OK. So far, so good.
And, responding to your comment about solipsism elsewhere just to keep the discussion in one place:
Well, I agree that when a realist solipsist says “Mine is the only mind that exists” they are using “exists” in a way that is meaningless to an instrumentalist.
That said, I don’t see what stops an instrumentalist solipsist from saying “Mine is the only mind that exists” while using “exists” in the ways that instrumentalists understand that term to have meaning.
That said, I still don’t quite understand how “exists” applies to minds on your account. You said here that “mind is also a model”, which I understand to mean that minds exist as subsets of models, just like keyboards do.
But you also agreed that a model is a “mental construct”… which I understand to refer to a construct created/maintained by a mind.
The only way I can reconcile these two statements is to conclude either that some minds exist outside of a model (and therefore have a kind of “existence” that is potentially distinct from the existence of models and of inputs, which might be distinct from one another) or that some models aren’t mental constructs.
My reasoning here is similar to how if you said “Red boxes are contained by blue boxes” and “Blue boxes are contained by red boxes” I would conclude that at least one of those statements had an implicit “some but not all” clause prepended to it… I don’t see how “For all X, X is contained by a Y” and “For all Y, Y is contained by an X” can both be true.
Does that make sense?
If so, can you clarify which is the case?
If not, can you say more about why not?
And what do you mean here by “true”, in an instrumental sense? Do you mean the mathematical truth (i.e. a well-formed finite string, given some set of rules), or the measurable truth (i.e. a model giving accurate predictions)? If it’s the latter, how would you test for it?
Beats me.
Just to be clear, are you suggesting that on your account I have no grounds for treating “All red boxes are contained by blue boxes AND all blue boxes are contained by red boxes” differently from “All red boxes are contained by blue boxes AND some blue boxes are contained by red boxes” in the way I discussed?
If you are suggesting that, then I don’t quite know how to proceed. Suggestions welcomed.
If you are not suggesting that, then perhaps it would help to clarify what grounds I have for treating those statements differently, which might more generally clarify how to address logical contradiction in an instrumentalist framework.
Actually, thinking about this a little bit more, a “simpler” question might be whether it’s meaningful on this account to talk about minds existing. I think the answer is again that it isn’t, as I said about experiences above… models are aspects of a mind, and existence is an aspect of a subset of a model; to ask whether a mind exists is a category error.
If that’s the case, the question arises of whether (and how, if so) we can distinguish among logically possible minds, other than by reference to our own.
So perhaps I was too facile when I said above that the arguments for and against solipsism are the same for a realist and an instrumentalist. A realist rejects or embraces solipsism based on their position on the existence and moral value of other minds, but an instrumentalist (I think?) rejects a priori the claim that other minds can meaningfully be said to exist or not exist, so presumably can’t base anything on such (non)existence.
So I’m not sure what an instrumentalist’s argument rejecting solipsism looks like.
Sort of, yes. Except mind is also a model.
Well, to a solipsist hers is the only mind that exists, to an instrumentalist, as we have agreed, the term exist does not have a useful meaning beyond measurability. For example, the near-solipsist idea of a Boltzmann brain is not an issue for an instrumentalist, since it changes nothing in their ontology. Same deal with dreams, hallucinations and simulation.
In addition, I would really like to address the fact that current models can be used to predict future inputs in areas that are thus far completely unobserved. IIRC, this is how positrons were discovered, for example. If all we have are disconnected inputs, how do we explain the fact that even those inputs which we haven’t yet thought of observing still correlate with our models? We would expect to see this if both sets of inputs were contingent upon some shared node higher up in the Bayesian network, but we wouldn’t expect to see this (except by chance, which is infinitesimally low) if the inputs were mutually independent.
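To make the shared-node point concrete, here’s a minimal sketch—the distributions and numbers are invented purely for illustration, not a claim about any actual physics: two input streams fed by a common upstream node come out correlated, while two independently generated streams don’t.

```python
import random

random.seed(0)

def sample_common_cause(n):
    # Hypothetical setup: a shared hidden node drives both observed input streams.
    pairs = []
    for _ in range(n):
        cause = random.random()            # the shared upstream node
        a = cause + random.gauss(0, 0.1)   # input stream A
        b = cause + random.gauss(0, 0.1)   # input stream B
        pairs.append((a, b))
    return pairs

def sample_independent(n):
    # Same kind of inputs, but with no shared node.
    return [(random.random(), random.random()) for _ in range(n)]

def correlation(pairs):
    n = len(pairs)
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

print(correlation(sample_common_cause(10000)))   # strongly positive (around 0.9)
print(correlation(sample_independent(10000)))    # near 0
```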
FWIW, my understanding of shminux’s account does not assert that “all we have are disconnected inputs,” as inputs might well be connected.
That said, it doesn’t seem to have anything to say about how inputs can be connected, or indeed about how inputs arise at all, or about what they are inputs into. I’m still trying to wrap my brain around that part.
ETA: oops. I see shminux already replied to this. But my reply is subtly different, so I choose to leave it up.
I don’t see how someone could admit that their inputs are connected in the sense of being caused by a common source that orders them without implicitly admitting to a real external world.
Nor do I.
But I acknowledge that saying inputs are connected in the sense that they reliably recur in particular patterns, and saying that inputs are connected in the sense of being caused by a common source that orders them, are two distinct claims, and one might accept that the former is true (based on observation) without necessarily accepting that the latter is true.
I don’t have a clear sense of what such a one might then say about how inputs come to reliably recur in particular patterns in the first place, but often when I lack a clear sense of how X might come to be in the absence of Y, it’s useful to ask “How, then, does X come to be?” rather than to insist that Y must be present.
One can of course only say that inputs have occurred in patterns up till now. Realists can explain why they would continue to do so on the basis of the Common Source meta-model; anti-realists cannot.
At the risk of repeating myself: I agree that I don’t currently understand how an instrumentalist could conceivably explain how inputs come to reliably recur in particular patterns. You seem content to conclude thereby that they cannot explain such a thing, which may be true. I am not sufficiently confident in the significance of my lack of understanding to conclude that just yet.
I.e., realism explains how you can predict at all.
This seems to me to be the question of origin “where do the inputs come from?” in yet another disguise. The meta-model is that it is possible to make accurate models, without specifying the general mechanism (e.g. “external reality”) responsible for it. I think this is close to subjective Bayesianism, though I’m not 100% sure.
I think it’s possible to do so without specifying the mechanism, but that’s not the same thing as saying that no mechanism at all exists. If you are saying that, then you need to explain why all these inputs are correlated with each other, and why our models can (on occasion) correctly predict inputs that have not been observed yet.
Let me set up an analogy. Let’s say you acquire a magically impenetrable box. The box has 10 lights on it, and a big dial-type switch with 10 positions. When you set the switch to position 1, the first light turns on, and the rest of them turn off. When you set it to position 2, the second light turns on, and the rest turn off. When you set it to position 3, the third light turns on, and the rest turn off. These are the only settings you’ve tried so far.
Does it make sense to ask the question, “what will happen when I set the switch to positions 4..10”? If so, can you make a reasonably confident prediction as to what will happen? What would your prediction be?
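For what it’s worth, here is one toy way to formalize the obvious inductive guess—the hypothesis space, priors and likelihoods below are all invented for illustration:

```python
# Two toy hypotheses about the box, updated on the three observed settings.
observations = [(1, 1), (2, 2), (3, 3)]  # (switch position, light that turned on)

def likelihood_position_k(obs):
    # H1: setting the switch to position k always turns on light k.
    return 1.0 if all(light == pos for pos, light in obs) else 0.0

def likelihood_random_light(obs):
    # H2: each setting turns on one of the 10 lights uniformly at random.
    return (1 / 10) ** len(obs)

prior_h1, prior_h2 = 0.5, 0.5
unnorm_h1 = prior_h1 * likelihood_position_k(observations)
unnorm_h2 = prior_h2 * likelihood_random_light(observations)
norm = unnorm_h1 + unnorm_h2

print("P(H1 | data) =", unnorm_h1 / norm)  # ~0.999
print("P(H2 | data) =", unnorm_h2 / norm)  # ~0.001

# Under H1 the prediction for position 4 is simply "light 4 turns on";
# the posterior says that prediction deserves high, but not certain, confidence.
```

None of this requires saying what, if anything, is inside the box—only that the pattern so far favours one predictive hypothesis over the other.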
In the sense that it is always impossible to leave something just unexplained. But the posit of an external reality of some sort is not explanatorily idle, and not, therefore, ruled out by Occam’s razor. The posit of an external reality of some sort (it doesn’t need to be specific) explains, at the meta-level, the process of model-formulation, prediction, accuracy, etc.
Fixed that for you.
I suppose shminux would claim that, explanatory or not, it complicates the model and thus makes it more costly, computationally speaking.
But that’s a terrible argument. If you can’t justify a posit by the explanatory work it does, then the optimum number of posits to make is zero.
Which is, in fact, the number of posits shminux advocates making, is it not? Adapt your models to be more accurate, sure, but don’t expect that to mean anything more than the model working.
Except I think he’s claimed to value things like “the most accurate model not containing slaves” (say) which implies there’s something special about the correct model beyond mere accuracy.
Shminux seems to be positing inputs and models at the least.
I think you quoted the wrong thing there, BTW.
I suppose they are positing inputs, but they’re arguably not positing models as such—merely using them. Or at any rate, that’s how I’d ironman their position.
And inverted stupidity is...?
If I understand both your and shminux’s comments, this might express the same thing in different terms:
We have experiences (“inputs”.)
We wish to optimize these inputs according to whatever goal structure.
In order to do this, we need to construct models to predict how our actions affect future inputs, based on patterns in how inputs have behaved in the past.
Some of these models are more accurate than others. We might call accurate models “real”.
However, the term “real” holds no special ontological value, and they might later prove inaccurate or be replaced by better models.
Thus, we have a perfectly functioning agent with no conception (or need for) a territory—there is only the map and the inputs. Technically, you could say the inputs are the territory, but the metaphor isn’t very useful for such an agent.
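A minimal sketch of the agent described above, just to pin down its structure—the model class, preference function and action set are placeholders, not a claim about how shminux would actually implement any of this:

```python
def choose_action(models, history, actions, preference):
    """history: list of (action_taken, input_that_followed) pairs."""
    def accuracy(model):
        # Score a model by how often its prediction matched the input that followed.
        hits = sum(model.predict(act) == inp for act, inp in history)
        return hits / max(len(history), 1)

    best = max(models, key=accuracy)  # the most accurate model: the "real world"
    # Pick the action whose predicted input we prefer under that model.
    return max(actions, key=lambda act: preference(best.predict(act)))
```

Nothing in the loop refers to a territory; the only ingredients are past inputs, candidate models, and preferences over predicted inputs.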
Huh, looks like we are, while not in agreement, at least speaking the same language. Not sure how Dave managed to accomplish this particular near-magical feat.
As before, I mostly attribute it to the usefulness of trying to understand what other people are saying.
I find it’s much more difficult to express my own positions in ways that are easily understood, though. It’s harder to figure out what is salient and where the vastest inferential gulfs are.
You might find it correspondingly useful to try and articulate the realist position as though you were trying to explain it to a fellow instrumentalist who had no experience with realists.
I actually tried this a few times, even started a post draft titled “explain realism to a baby AI”. In fact, I keep fighting my own realist intuition every time I don the instrumentalist hat. But maybe I am not doing it well enough.
Ah. Yeah, if your intuitions are realist, I expect it suffers from the same problem as expressing my own positions. It may be a useful exercise in making your realist intuitions explicit, though.
You are right. I will give it a go. Just because it’s obvious doesn’t mean it should not be explicit.
Maybe we should organize a discussion where everyone has to take positions other than their own? If this really helps clarity (and I think it does) it could end up producing insights much more difficult (if not actually impossible) to reach with normal discussion.
(Plus it would be good practice at the Ideological Turing Test, generalized empathy skills, avoiding the antipattern of demonizing the other side, and avoiding steelmanning arguments into forms that don’t threaten your own arguments (since they would be threatening the other side’s arguments, as it were.))
It seems to me to be one of the basic exercises in rationality, also known as “Devil’s advocate”. However, Eliezer dislikes it for some reason, probably because he thinks that it’s too easy to do poorly and then dismiss with a metaphorical self-congratulatory pat on one’s own back. Not sure how much of this is taught or practiced at CFAR camps.
Yup. In my experience, though, Devil’s Advocates are usually pitted against people genuinely arguing their cause, not other devil’s advocates.
Yeah, I remember being surprised by that reading the sequences. He seemed to be describing acting as your own devil’s advocate, though, IIRC.
Well, if any nonrealists want to argue the realist position in response to my articulation of the instrumentalist position, they are certainly welcome to do so, and I can try to continue defending it… though I’m not sure how good a job of it I’ll do.
I was actually thinking of random topics, perhaps ones that are better understood by LW regulars, at least at first. Still …
Wait, there are nonrealists other than shminux here?
Beats me.
Actually, that’s just the model I was already using. I noticed it was shorter than Dave’s, so I figured it might be useful.
I suggest we move the discussion to a top-level discussion thread. The comment tree here is huge and hard to navigate.
If shminux could write an actual post on his beliefs, that might help a great deal, actually.
I think I got a cumulative total of some 100 downvotes on this thread, so somehow I don’t believe that a top-level post would be welcome. However, if TheOtherDave were to write one as a description of an interesting ontology he does not subscribe to, this would probably go over much better. I doubt he would be interested, though.
As it happens, I agree with your position. I was actually thinking of making a post that pinpoints all the important comments here without taking a position, while asking the discussion to continue there. However, making an argumentative post is also possible, although I might not be willing to expend the effort.
Cool.
If you are motivated at some point to articulate an anti-realist account of how non-accidental correlations between inputs come to arise (in whatever format you see fit), I’d appreciate that.
As I understand it, the word “how” is used to demand a model for an event. Since I already have models for the correlations of my inputs, I don’t feel the need for further explanation. More concretely, should you ask “How does closing your eyes lead to a blackout of your vision?” I would answer “After I close my eyes, my eyelids block all of the light from getting into my eye.”, and I consider this answer satisfying. Just because I don’t believe in a ontologically fundamental reality, doesn’t mean I don’t believe in eyes and eyelids and light.
OK. So, say I have two models, M1 and M2.
In M1, vision depends on light, which is blocked by eyelids. Therefore in M1, we predict that closing my eyes leads to a blackout of vision. In M2, vision depends on something else, which is not blocked by eyelids. Therefore in M2, we predict that closing my eyes does not lead to a blackout of vision.
At some later time, an event occurs in M1: specifically, I close my eyelids. At the same time, I have a blackout of vision. This increases my confidence in the predictive power of M1.
So far, so good.
At the same time, an identical event-pair occurs in M2: I close my eyes and my vision blacks out. This decreases my confidence in the predictive power of M2.
If I’ve understood you correctly, both the realist and the instrumentalist account of all of the above is “there are two models, M1 and M2, the same events occur in both, and as a consequence of those events we decide M1 is more accurate than M2.”
The realist account goes on to say “the reason the same events occur in both models is because they are both fed by the same set of externally realized events, which exist outside of either model.” The instrumentalist account, IIUC, says “the reason the same events occur in both models is not worth discussing; they just do.”
Is that right?
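For concreteness, the update described above can be written out as a single Bayesian step—the priors and likelihoods here are arbitrary illustrative numbers, not anything either account commits to:

```python
prior = {"M1": 0.5, "M2": 0.5}

# Probability each model assigns to "vision blacks out when the eyes close".
likelihood = {"M1": 0.99, "M2": 0.05}

observed_blackout = True
unnormalized = {m: prior[m] * (likelihood[m] if observed_blackout else 1 - likelihood[m])
                for m in prior}
total = sum(unnormalized.values())
posterior = {m: p / total for m, p in unnormalized.items()}

print(posterior)  # confidence shifts toward M1 (~0.95) and away from M2 (~0.05)
```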
That’s still possible, for convenience purposes, even if shminux is unwilling to describe their beliefs—your beliefs, apparently, I think a lot of people will have some questions to ask you now—in a top-level post.
Ooh, excellent point. I’d do it myself, but unfortunately my reason for suggesting it is that I want to understand your position better—my puny argument would be torn to shreds, I have too many holes in my understanding :(
The actual world is also a possible world. Non-actual possible worlds are only accessible as models. Realists believe they can bring the actual world into line with desired models to some extent.
Not for realists.
For realists, wireheading isn’t a good aim. For anti-realists, it is the only aim.
Realism doesn’t preclude ethical frameworks that endorse wireheading.
I’m less clear about the second part, though.
Rejecting (sufficiently well implemented) wireheading requires valuing things other than one’s own experience. I’m not yet clear on how one goes about valuing things other than one’s own experience in an instrumentalist framework, but then again I’m not sure I could explain to someone who didn’t already understand it how I go about valuing things other than my own experience in a realist framework, either.
See The Domain of Your Utility Function.
No, but they are a minority interest.
If someone accepts that reality exists, you have a head start. Why do anti-realists care about accurate prediction? They don’t think predictive models represent an external reality, and they don’t think accurate models can be used as a basis to change anything external. Either prediction is an end in itself, or it’s for improving inputs.
My understanding of shminux’s position is that accurate models can be used, somehow, to improve inputs.
I don’t yet understand how that is even in principle possible on his model, though I hope to improve my understanding.
Your last statement shows that you have much to learn from TheOtherDave about the principle of charity. Specifically, don’t assume the other person to be stupider than you are, without a valid reason. So, if you come up with a trivial objection to their point, consider that they might have come across it before and addressed it in some way. They might still be wrong, but likely not in the obvious ways.
So where did you address it?
The trouble, of course, is that sometimes people really are wrong in “obvious” ways. Probably not high-status LWers, I guess.
It happens, but this should not be the initial assumption. And I’m not sure who you mean by “high-status LWers”.
Sorry, just realized I skipped over the first part of your comment.
Doesn’t that depend on the prior? I think most holders of certain religious or political beliefs, for instance, hold them for trivially wrong reasons*. Perhaps you mean it should not be the default assumption here?
*Most conspiracy theories, for example.
I was referring to you. PrawnOfFate should not have expected you to make such a mistake, given the evidence.
If I answer ‘yes’ to this, then I am confusing the map with the territory, surely? Yes, there may very well be a possible world that’s a perfect match for a given model, but how would I tell it apart from all the near-misses?
The “real world” is a good deal more accurate than the most accurate model we have of it.
It’s not me, FWIW; I find the discussion interesting.
That said, I’m not sure what methodology you use to determine which actions to take, given your statement that “the ‘real world’ is just the most accurate model”. If all you cared about was the accuracy of your model, would it not be easier to avoid taking any physical actions, and simply change your model on the fly as it suits you? This way, you could always make your model fit what you observe. Yes, you’d be grossly overfitting the data, but is that even a problem?
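To make the overfitting worry concrete, here is a minimal toy sketch (the polynomial setup and numbers are my own illustration, not a claim about anyone’s actual position): a model flexible enough to be adjusted to fit every past observation exactly can still do a poor job of predicting new ones.

```python
import numpy as np

rng = np.random.default_rng(0)
x_past = np.linspace(0.0, 1.0, 10)
y_past = np.sin(2 * np.pi * x_past) + 0.3 * rng.standard_normal(10)

# "Change the model on the fly": a degree-9 polynomial passes through all
# ten past observations, so it looks perfectly accurate on everything seen.
exact_fit = np.polyfit(x_past, y_past, 9)
# A more constrained degree-3 model tolerates some error on the past.
modest_fit = np.polyfit(x_past, y_past, 3)

# Compare how each predicts inputs that have not been observed yet.
x_new = np.linspace(0.0, 1.0, 200)
y_new = np.sin(2 * np.pi * x_new)
for name, coeffs in [("exact past fit", exact_fit), ("modest fit", modest_fit)]:
    mse = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(name, mse)  # the exact past fit typically generalizes worse here
```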
I didn’t say it’s all I care about. Given a choice of several models and an ability to make one of them more accurate than the rest, I would likely exercise this choice, depending on my preferences, the effort required and the odds of success, just like your garden variety realist would. As Eliezer used to emphasize, “it all adds up to normality”.
Would you do so if picking another model required less effort? I’m not sure how you can justify doing that.
I am guessing that you, TimS and nyan_sandwich all seem to think that my version of instrumentalism is incompatible with having preferences over possible worlds. I have trouble understanding where this twist is coming from.
It’s not that I think that your version of instrumentalism is incompatible with preferences, it’s more like I’m not sure I understand what the word “preferences” even means in your context. You say “possible worlds”, but, as far as I can tell, you mean something like, “possible models that predict future inputs”.
Firstly, I’m not even sure how you account for our actions affecting these inputs, especially given that you do not believe that various sets of inputs are connected to each other in any way; and without actions, preferences are not terribly relevant. Secondly, you said that a “preference” for you means something like, “a desire to make one model more accurate than the rest”, but would it not be easier to simply instantiate a model that fits the inputs? Such a model would be 100% accurate, wouldn’t it?
Your having a preference for worlds without, e.g., slavery can’t possibly translate into something like “I want to change the world external to me so that it no longer contains slaves”. I have trouble understanding what it would translate to. You could adopt models where things you don’t like don’t exist, but they wouldn’t be accurate.
No, but it translates to its equivalent:
And how do you arrange that?
So you’re saying you have a preference over the map, as opposed to the territory (your experiences, in this case).
That sounds subject to some standard pitfalls, offhand, where you try to fool yourself into choosing the “no-slaves” map instead of trying to optimize, well, reality, such as the slaves—perhaps with an experience machine, through simple self-deception, or maybe some sort of exploit involving Occam’s Razor.
I agree that self-deception is a “real” possibility. Then again, it is also a possibility for a realist. Or a dualist. In fact, confusing map and territory is one of the most common pitfalls, as you well know. Would it be more likely for an instrumentalist to become instrumenta-lost? I don’t see why it would be the case. For example, from my point of view, you arbitrarily chose a comforting Christian map (is it an inverse of “some sort of exploit involving Occam’s Razor”?) instead of a cold hard uncaring one, even though you seem to prefer realism over instrumentalism.
Ah, no, sorry, I meant that those options would satisfy your stated preferences, not that they were pitfalls on the road to it. I’m suggesting that since you don’t want to fall into those pitfalls, those aren’t actually your preferences, whether because you’ve made a mistake or I have (please tell me if I have.)
I propose a WW2 mechanical aiming computer as an example of a model. It was built from whatever gears could be easily and conveniently manufactured, and there’s very little doubt that the universe does not use anything even remotely similar to produce the movement of the projectile through the air, even if we assume that such a question is meaningful.
A case can be made that physics is not that much different from a WW2 aiming computer (built out of the mathematics that is available and can be conveniently used). And with regard to MWI, a case can be made that it is similar to removing the only ratchet in the mechanical computer and proclaiming the rest of the gears to be the reality, because somehow “from the inside” it would allegedly still feel the same, even though the mechanical computer, without this ratchet, no longer works for predicting anything.
Of course, it is not clear how close physics is to a mechanical aiming computer in terms of how the internals can correspond to the real world.
Interesting. So we prefer that some models or others be accurate, and take actions that we expect to make that happen, in our current bag of models.
Ok I think I get it. I was confused about what the referent of your preferences would be if you did not have your models referring to something. I see that you have made the accuracy of various models the referent of preferences. This seems reasonable enough.
I can see now that I’m confused about this stuff a bit more than I thought I was. Will have to think about it a bit more.
I like how you put it into some fancy language, and now it sounds almost profound.
It is entirely possible that I’m talking out of my ass here, and you will find a killer argument against this approach.
Likewise the converse. I reckon both will get killed by a proper approach.
It works fine—as long as you only care about optimizing inputs, in which case I invite you to go play in the holodeck while the rest of us optimize the real world.
If you can’t find a holodeck, I sure hope you don’t accidentally sacrifice your life to save somebody or further some noble cause. After all, you won’t be there to experience the resulting inputs, so what’s the point?
You are arguing with a strawman.
It’s not a utility function over inputs, it’s over the accuracy of models.
If I were a shminux-style rationalist, I would not choose to go to the holodeck because that does not actually make my current preferred models of the world more accurate. It makes the situation worse, actually, because in the me-in-holodeck model, I get misled and can’t affect the stuff outside the holodeck.
Just because someone frames things differently doesn’t mean they have to make the obvious mistakes and start killing babies.
For example, I could do what you just did to “maximize expected utility over possible worlds” by choosing to modify my brain to have erroneously high expected utility. It’s maximized now, right? See the problem with this argument?
It all adds up to normality, which probably means we are confused and there is an even simpler underlying model of the situation.
You know, I’m actually not.
Affecting the accuracy of a specified model—a term defined as “how well it predicts future inputs”—is a subset of optimizing future inputs.
You’re still thinking like a realist. A holodeck doesn’t prevent you from observing the real world—there is no “real world”. It prevents you testing how well certain models predict experiences when you take the action “leave the holodeck”, unless of course you leave the holodeck—it’s an opportunity cost and nothing more, and a minor one at that, since information holds only instrumental value.
Pardon?
Except that I (think that I) get my utility over the world, not over my experiences. Same reason I don’t win the lottery with quantum suicide.
You know, not every belief adds up to normality—just the true ones. Imagine someone arguing you had misinterpreted happiness-maximization because “it all adds up to normality”.
That’s the standard physical realist response to Kuhn and Feyerabend. I find it confusing to hear it from you, because you certainly are not a standard physical realist.
In short, I think you are being a little too a la carte with your selection from various parts of philosophy of science.
Is there something wrong with doing that ? As long as the end result is internally consistent, I don’t see the problem.
Sure, my criticism has an implied “And I’m concerned you’ve managed to endorse A and ~A by accident.”
Right, that’s fair, but it’s not really apparent from your reply which is A and which is ~A. I understand that physical realists say the same things as shminux, who professes not to be a physical realist—but then, I bet physical realists say that water is wet, too...
I don’t know that shminux has inadvertently endorsed A and ~A. I’m suspicious that this has occurred because he resists the standard physical realist definition of territory / reality, but responds to a quasi-anti-realist position with a physical realist answer that I suspect depends on the rejected definition of reality.
If I knew precisely where the contradiction was, I’d point it out explicitly. But I don’t, so I can’t.
Yeah, fair enough, I don’t think I understand his position myself at this point…
Sorry if this is a stupid question, but what do you call the thingy that makes these inputs behave regularly?
If I recall correctly, he abandons that particular rejection when he gets an actual answer to the first question. Specifically, he argues against belief in the implied invisible when said belief leads to making actual decisions that will result in outcomes that he will not personally be able to verify (e.g. when considering Relativity and the accelerated expansion of the universe).
I think you are conflating two related, but distinct questions. Physical realism faces challenges from:
(1) the sociological analysis represented by works like The Structure of Scientific Revolutions
(2) the ontological status of objects that, in principle, could never be observed (directly or indirectly)
I took shminux as trying to duck the first debate (by adopting physical pragmatism), but I think most answers to the first question do not necessarily imply particular answers to the second question.
I am almost certain I am saying a different thing to what you think.
I can imagine using a model that contains elements that are merely convenient pretenses, and don’t actually exist—like using simpler Newtonian models of gravity despite knowing GR is true (or at least more likely to be true than Newton.)
If some of these models featured things that I care about, it wouldn’t matter, as long as I didn’t think actual reality featured these things. For example, if an easy hack for predicting the movement of a simple robot was to imagine it being sentient (because I can easily calculate what humanlike minds would do using my own neural circuitry), I still wouldn’t care if it was crushed, because the sentient being described by the model doesn’t actually exist—the robot merely uses similar pathfinding.
Does that answer your question, TimS’s-model-of-shminux?
I don’t understand the paperclipping reference, but MugaSofer is a hard-core moral realist (I think). Physical pragmatism (your position) is a reasonable stance in the physical realism / anti-realism debate, but I’m not sure what the parallel position is in the moral realism / anti-realism debate.
(Edit: And for some moral realists, the justification for that position is the “obvious” truth of physical realism and the non-intuitiveness of physical facts and moral facts having a different ontological status.)
In short, “physical prediction” is a coherent concept in a way that “moral prediction” does not seem to be. A sentence of the form “I predict retaliation if I wrong someone” is a psychological prediction, not a moral prediction. Defining what “wrong” means in that sentence is the core of the moral realism / anti-realism debate.
I don’t see it.
Do we really have to define “wrong” here? It seems more useful to say “certain actions of mine may cause this person to experience a violation of their innate sense of fairness”, or something to that effect. Now we are doing cognitive science, not some vague philosophizing.
At a minimum, we need an enforceable procedure for resolving disagreements between different people when their “innate senses of fairness” disagree. Negotiated settlement might be the gold standard, but history shows it has seldom actually resolved major disputes.
Defining “wrong” helps because it provides a universal principled basis for others to intervene in the conflict. Alliance building also provides a basis, but is hardly universally principled (or fair, for most usages of “fair”).
Yes, it definitely helps to define “wrong” as a rough acceptable behavior boundary in a certain group. But promoting it from a convenient shortcut in your models into something bigger is hardly useful. Well, it is useful to you if you can convince others that your definition of “wrong” is the one true one and everyone else ought to abide by it or burn in hell. Again, we are out of philosophy and into psychology.
I’m glad we agree that defining “wrong” is useful, but I’m still confused how you think we go about defining “wrong.” One could assert:
But that doesn’t tell us how society figures out what to punish, or whether there are constraints on society’s classifications. Psychology doesn’t seem to answer these questions—there once were societies that practiced human sacrifice or human slavery.
In common usage, we’d like to be able say those societies were doing wrong, and your usage seems inconsistent with using “wrong” in that way.
No, they weren’t. Your model of objective wrongness is not a good one; it fails a number of tests.
“Human sacrifice and human slavery” is wrong now in Westernized society, because it fits under the agreed definition of wrong today. It was not wrong then. It might not be wrong again in the future, after some x-risk-type calamity.
The evolution of the agreed-upon concept of wrong is a fascinating subject in human psychology, sociology and whatever other natural science is relevant. I am guessing that more formerly acceptable behaviors get labeled as “wrong” as the overall standard of living rises and average suffering decreases. As someone mentioned before, torturing cats is no longer the good clean fun it used to be. But that’s just a guess; I would defer to experts in the area, hopefully there are some around.
Some time in the future a perfectly normal activity of the day will be labeled as “wrong”. It might be eating animals, or eating plants, or having more than 1.0 children per person, or refusing sex when asked politely, or using anonymous nicks on a public forum, or any other activity we find perfectly innocuous.
Conversely, there were plenty of “wrong” behaviors which aren’t wrong anymore, at least not in the modern West, like proclaiming that Jesus is not the Son of God, or doing witchcraft, or marrying a person of the same sex, or...
The definition of wrong as an agreed upon boundary of acceptable behavior matches observations. The way people come to such an agreement is a topic eminently worth studying, but it should not be confused with studying the concept of wrong as if it were some universal truth.
Your position on moral realism has a respectable pedigree in moral philosophy, but I don’t think it is parallel to your position on physical realism.
As I understand it, your response to the question “Are there electrons?” is something like:
This is a wrong question. Trying to find the answer doesn’t resolve any actual decision you face.
By contrast, your response to “Is human sacrifice wrong?” is something like:
Not in the sense you mean, because “wrong” in that sense does not exist.
I don’t think there are philosophical reasons why your positions on those two issues should be in parallel, but you seem to think that your positions are in parallel, and it does not look that way to me.
Without a notion of objective underlying reality, shminux had nothing to cash out any moral theory in.
Not quite.
“Are there electrons?” “Yes, the electron is an accurate model, though it has its issues.”
“Does light propagate in aether?” “No, aether is not a good model; it fails a number of tests.”
“Is human sacrifice an unacceptable behavior in the US today?” “Yes, this model is quite accurate.”
“Is ‘wrong’ independent of the group that defines it?” “No, this model fails a number of tests.”
Seems pretty consistent to me, with all the parallels you want.
You are not using the word “tests” consistently in your examples. For the luminiferous aether, a “test” means something like “makes accurate predictions.” Substituting that into your answer about “wrong” yields:
Which I’m having trouble parsing as an answer to the question. If you don’t mean for that substitution to be sensible, then your parallelism does not seem to hold together.
But in deference to your statement here, I am happy to drop this topic if you’d like me to. It is not my intent to badger you, and you don’t have any obligation to continue a conversation you don’t find enjoyable or productive.
It’s worth noting that most people who make that claim are using a different definition of “wrong” to you.
I suggest editing in additional line-breaks so that the quote is distinguished from your own contribution. (You need at least two ‘enters’ between the end of the quote and the start of your own words.)
Whoops, thanks.
I expected that this discussion would not achieve anything.
Simply put, the mistake both of you are making was already addressed by the meta-ethics sequence. But for a non-LW reference, see Speakers Use Their Actual Language. “Wrong” does not refer to “whatever ‘wrong’ means in our language at the time”; that would be circular. “Wrong” refers to some objective set of characteristics, that set being the same as those that we in reality disapprove of. Modulo logical uncertainty, etc.
I expected this would not make sense to you since you can’t cash out objective characteristics in terms of predictive black boxes.
Congratulations on a successful prediction. Of course, if you had made it before this conversation commenced, you could have saved us all the effort; next time you know something will fail, speaking up would be helpful.
I think shminux is claiming that this set of characteristics changes dynamically, and thus it is more useful to define “wrong” dynamically as well. I disagree, but then we already have a term for this (“unacceptable”), so why repurpose “wrong”?
Who does “you” refer to here? All participants in this discussion? Shminux only?
Presumably shminux doesn’t consider it a repurposing, but rather an articulation of the word’s initial purpose.
Well, OK.
Using relative terms in absolute ways invites communication failure.
If I use “wrong” to denote a relationship between a particular act and a particular judge (as shminux does) but I only specify the act and leave the judge implicit (e.g., “murder is wrong”), I’m relying on my listener to have a shared model of the world in order for my meaning to get across. If I’m not comfortable relying on that, I do better to specify the judge I have in mind.
Is shminux a native English speaker? Because that’s certainly not how the term is usually used. Ah well, he’s tapped out anyway.
Oh, I can see why it failed—they were using the same term in different ways, each insisting their meaning was “correct”—I just meant you could use this knowledge to help avoid this ahead of time.
I understand. I’m suggesting it in that context.
That is, I’m asserting now that “if I find myself in a conversation where such terms are being used and I have reason to believe the participants might not share implicit arguments, make the arguments explicit” is a good rule to follow in my next conversation.
Makes sense. Upvoted.
Sorry. I guess I was feeling too cynical and discouraged at the time to think that such a thing would be helpful.
In this case I meant to refer to only shminux, who calls himself an instrumentalist and does not like to talk about the territory (as opposed to AIXI-style predictive models).
You might have been right, at that. My prior for success here was clearly far too high.
This concept of “wrong” is useful, but a) there is an existing term which people understand to mean what you describe—“acceptable”—and b) it does not serve the useful function people currently expect “wrong” to serve; that of describing our extrapolated desires—it is not prescriptive.
I would advise switching to the more common term, but if you must use it this way I would suggest warning people first, to prevent confusion.
You or TimS are the ones who introduced the term “wrong” into the conversation; I’m simply interpreting it in a way that makes sense to me. Tapping out due to lack of progress.
That would be TimS, because he’s the one discussing your views on moral realism with you.
And I’m simply warning you that using the term in a nonstandard way is predictably going to result in confusion, as it has in this case.
Well, that’s your prerogative, obviously, but please don’t tap out of your discussion with Tim on my account. And, um, if it’s not on my account, you might want to say it to him, not me.
Fairness is not about feelings of fairness.
Feeling or not, it’s a sense that exists in other primates, not just humans. You can certainly quantify the emotional reaction to real or perceived unfairness, which was my whole point: use cognitive science, not philosophy. And cognitive science is about building models and testing them, like any natural science.
Well, the trouble occurs when you start talking about the existence of things that, unlike electrons, you actually care about.
Say I value sentient life. If that life doesn’t factor into my predictions, does it somehow not exist? Should I stop caring about it? (The same goes for paperclips, if you happen to value those.)
EDIT: I assume you consider the least computationally complex model “better at predicting certain future inputs”?
You have it backwards. You also use the term “exist” in a way I don’t. You don’t have to worry about refining models that predict inputs you don’t care about.
If there is a luxury of choice of multiple models which give the same predictions, sure. Usually we are lucky if there is one good model.
Well, I am trying to get you to clarify what you mean.
But as I said, I don’t care about inputs, except instrumentally. I care about sentient minds (or paperclips.)
Ah … no. Invisible pink unicorns and Russell’s Teapots abound. For example, what if any object passing over the cosmological horizon disappeared? Or the universe was created last Thursday, but perfectly designed to appear billions of years old? These hypotheses don’t do any worse at predicting; they just violate Occam’s Razor.
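To spell out the Occam point in the usual minimum-description-length form (my notation, not a claim about anyone’s exact framework): if hypotheses $h$ and $h'$ assign identical probabilities to every possible observation, then under a length prior $P(h) \propto 2^{-\ell(h)}$ the observations never separate them, and

$$\frac{P(h' \mid \text{data})}{P(h \mid \text{data})} \;=\; 2^{-\left(\ell(h') - \ell(h)\right)},$$

so the padded hypothesis (“…and everything vanishes past the horizon”) is penalized only for its extra bits; no experiment ever refutes it directly.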
Believe me, I have tried many times in our discussions over last several months. Unfortunately we seem to be speaking different languages which happen to use the same English syntax.
Fine, I’ll clarify. You can always complicate an existing model in a trivial way, which is what all your examples are doing. I was talking about models where one is not a trivial extension of the other with no new predictive power. Trivial padding like that is just silly.
Well, considering how many people seem to think that interpretations of QM other than their own are just “trivial extensions with no new predictive power”, it’s an important point.
Well, it’s pretty obvious we use different definitions of “existence”. Not sure if that qualifies as a different language, as such.
That said, you seem to be having serious trouble parsing my question, so maybe there are other differences too.
Look, you understand the concept of a paperclip maximizer, yes? How would a paperclip maximizer that used your criteria for existence act differently?
EDIT: incidentally, we haven’t been discussing this “over the last several months”. We’ve been discussing it since the fifth.
The interpretations are usually far from trivial, and most aspire to provide an inspiration for building a testable model some day. Some even have, and have been falsified. That’s quite different from Last Thursdayism.
Why would it? A paperclip maximizer is already instrumental, it has one goal in mind, maximizing the number of paperclips in the universe (which it presumably can measure with some sensors). It may have to develop advanced scientific concepts, like General Relativity, to be assured that the paperclips disappearing behind the cosmological horizon can still be counted toward the total, given some mild assumptions, like the Copernican principle.
Anyway, I’m quite skeptical that we are getting anywhere in this discussion.
In which universe? It doesn’t know. And it may have uncertainty with regard to the true number. There are going to be hypothetical universes that produce the same observations but have ridiculously huge numbers of invisible paperclips at stake, which are influenced by the paperclipper’s actions (it may even be that the simplest extra addition that makes the agent’s actions influence invisible paperclips would utterly dominate all theories starting from some length, as it leaves most of the length for a busy-beaver-like construction that makes the number of invisible paperclips ridiculously huge; one extra bit for a busy beaver is seriously a lot more paperclips). So given some sort of length prior that ignores the size of the hypothetical universe (the kind that won’t discriminate against MWI just because it’s big), those aren’t assigned a low enough prior, and they dominate its expected utility calculations.
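One way to compress that worry (my notation, not a quote): suppose the plain hypothesis has length $\ell$ bits and puts $N$ paperclips at stake, while a rival hypothesis of length $L$ spends a small constant number of bits $c$ on “the agent’s actions affect invisible paperclips” and the remaining $L-c$ bits on a busy-beaver-style construction, so it can put roughly $BB(L-c)$ paperclips at stake. Under a length prior of about $2^{-L}$ per hypothesis,

$$2^{-L} \cdot BB(L-c) \;\gg\; 2^{-\ell} \cdot N$$

already for modest $L$, since $BB$ grows faster than any computable function: each extra bit buys far more than the factor-of-two prior penalty can take away.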
Well, I probably don’t know enough about QM to judge if they’re correct; but it’s certainly a claim made fairly regularly.
Let’s say it simplifies the equations not to model the paperclips as paperclips—it might be sufficient to treat them as a homogeneous mass of metal, for example. Does this mean that they do not, in fact, exist? Should a paperclipper avoid this at all costs, because it’s equivalent to them disappearing?
Removing the territory/map distinction means something that wants to change the territory could end up changing the map … doesn’t it?
I’m wondering because I care about people, but it’s often simpler to model people without treating them as, well, sentient.
Well, I’ve been optimistic that I’d clarified myself pretty much every comment now, so I have to admit I’m updating downwards on that.
Depends on what you mean by ‘different mechanics.’ Weinberg’s field theory textbook develops the argument that only quantum field theory, as a structure, allows for certain phenomenologically important characteristics (mostly cluster decomposition).
However, there IS an enormous amount of leeway within the field theory: you can make a theory where electric monopoles exist as explicit degrees of freedom and magnetic monopoles are topological gauge-field configurations, and it’s dual to a theory where magnetic monopoles are the degrees of freedom and electric monopoles exist as field configurations. While these theories SEEM very different, they make identical predictions.
Similarly, if you can only make finite numbers of measurements, adding extra dimensions is equivalent to adding lots of additional forces (the dimensional deconstruction idea), etc. Some 5d theories with gravity make the same predictions as some 4d theories without.
The same is more-or-less true if you replace ‘electrons’ with ‘temperature’.
Yes. While I’m not terribly up-to-date with the ‘state-of-the-art’ in theoretical physics, I feel like the situation today with renormalization and stuff is like it was until 1905 for the Lorentz-FitzGerald contraction or the black-body radiation, when people were mystified by the fact that the equations worked because they didn’t know (or, at least, didn’t want to admit) what the hell they meant. A new Einstein clearing this stuff up is perhaps overdue now. (The most obvious candidate is “something to do with quantum gravity”, but I’m prepared to be surprised.)
You guys are making possible sources of confusion between the map and the territory sound like they’re specific to QFT while they actually aren’t. “Oh, I know what a ball is. It’s an object where all the points on the surface are at the same distance from the centre.” “How can there be such a thing? The positions of atoms on the surface would fluctuate due to thermal motion. Then what is it, exactly, that you play billiards with?” (Can you find another example of this in a different recent LW thread?)
Your ball point is very different. My driving point is that there isn’t even a nice, platonic-ideal-type definition of a particle IN THE MAP, let alone something that connects to the territory. I understand how my above post may lead you to misunderstand what I was trying to get at.
To rephrase my above comment, I might say: some of the features a MAP of a particle needs are that it’s detectable in some way, and that it can be described in a non-relativistic limit by a Schroedinger equation. The standard QFT definitions of a particle lack both these features. It’s also not fully consistent in the case of charged particles.
In QFT there is lots of confusion about how the map works, unlike classical mechanics.
This reminds me of the recent conjecture that the black hole horizon is a firewall, which seems like one of those confusions about the map.
Why, is there a nice, platonic-ideal type definition of a rigid ball in the map (compatible with special relativity)? What happens to its radius when you spin it?
There is no ‘rigid’ in special relativity; the best you can do is Born-rigid. Even so, it’s trivial to define a ball in special relativity: just define it in the frame of a corotating observer and use four-vectors to move to the same collection of events in other frames. You learn that a ‘ball’ in special relativity has some observer-dependent properties, but that’s because length and time are observer-dependent in special relativity. So ‘radius’ isn’t a good concept, but ‘the radius so-and-so measures’ IS a good concept.
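For the simpler non-rotating case, the observer-dependence is just the usual Lorentz contraction (a standard result; the example is my own choice): a ball of rest radius $R$, with surface $x^2+y^2+z^2=R^2$ in its own frame, is assigned, on the simultaneity slice of an observer moving at speed $v$ along $x$, the ellipsoid

$$\gamma^2 x'^2 + y'^2 + z'^2 = R^2, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},$$

i.e. a measured semi-axis of $R/\gamma$ along the motion and $R$ transverse to it. “The radius so-and-so measures” is perfectly well defined even though “the radius” alone is not.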
[puts logical positivism hat on]
Why, it means this, of course.
[while taking the hat off:] Oh, that wasn’t what you meant, was it?
The Unruh effect is a specific instance of my general point (particle definition is observer-dependent). All you’ve done is give a name to a sub-class of my point (not all observers see the same particles).
So should we expect ontology to be observer independent? If we should, what happens to particles?
And yet it proclaims the issue settled in favour of MWI and argues about how wrong science is for not settling on MWI, and so on. The connection (that this deficiency is why MWI can’t be settled on) sure does not come up here. Speaking of which, under any formal metric that he loves to allude to (e.g. Kolmogorov complexity), MWI as it stands is not even a valid code, for this reason among others.
It doesn’t matter how much simpler MWI is if we don’t even know that it isn’t too simple, merely guess that it might not be too simple.
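To make the “not even a valid code” complaint concrete, here is a minimal toy sketch in my own terms (the function and numbers are hypothetical illustrations, not anyone’s formalism): in a Solomonoff/Kolmogorov-style weighting, a hypothesis has to be a program that actually outputs a probability for the observations; a description with no rule for producing those numbers has nothing to weigh, however short it is.

```python
def weigh(hypotheses, data):
    """Normalized 2^-length * P(data | h) over hypotheses that can be scored."""
    raw = {}
    for name, (length_bits, likelihood) in hypotheses.items():
        if likelihood is None:
            # No rule for turning the formalism into P(data): not a valid
            # code for the observations, so it never enters the sum.
            continue
        raw[name] = 2.0 ** (-length_bits) * likelihood(data)
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

# Hypothetical example: a shorter dynamics-only story with no probability
# rule, versus a longer one that does output P(data).
print(weigh({"short, no probability rule": (100, None),
             "longer, with probability rule": (150, lambda d: 0.5 ** len(d))},
            "0110"))
```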
edit: ohh, and the lack of a derivation of the Born rules is not the kind of thing I meant by an argument in favour of non-realism. You can be a non-realist with or without having derived the Born rules. How QFT deals with relativistic issues, as outlined by e.g. Mitchell Porter, is quite a good reason to doubt the reality of what goes on mathematically in between input and output. There’s a view that the internals of current QM are an artefact of the set of mathematical tricks which we like / can use effectively: the view that the internal mathematics is to the world as the rods, cogs and gears inside a WW2 aiming computer are to a projectile flying through the air.
Are they, though? Irrational or stupid?
coughcreationistscough