Not to mention questions like “If we send these colonists over the horizon, does that kill them or not?”
This question is equally meaningful in both cases, and equally answerable. And the answer happens to be the same, too.
Which brings me to a question: I can never quite figure out how your instrumentalism interacts with preferences. Without assuming the existence of something you care about, on what basis do you make decisions?
Your argument reminds me of “Obviously morality comes from God, if you don’t believe in God, what’s to stop you from killing people if you can get away with it?” It is probably an uncharitable reading of it, though.
The “What I care about” thingie is currently one of those inputs. Like, what compels me to reply to your comment? It can partly be explained by the existing models in psychology, sociology and other natural sciences, and in part is still a mystery. Some day these sciences will hopefully be able to analyze and simulate mind and brain better, and explain how this desire arises, and why one shminux decides to reply to your comment rather than ignore it. Maybe I feel good when smart people publicly agree with me. Maybe I’m satisfying some other preference I’m not aware of.
It’s not an argument; it’s an honest question. I’m sympathetic to instrumentalism, I just want to know how you frame the whole preferences issue, because I can’t figure out how to do it. It probably is like the God is Morality thing, but I can’t just accidentally find my way out of such a pickle without some help.
I frame it as “here’s all these possible worlds, some being better than others, and only one being ‘real’, and then here’s this evidence I see, which discriminates which possible worlds are probable, and here’s the things I can do that further affect which is the real world, and I want to steer towards the good ones.” As you know, this makes a lot of assumptions and is based pretty directly on the fact that that’s how human imagination works.
If there is a better way to do it, which you seem to think that there is, I’m interested. I don’t understand your answer above, either.
Well, I’ll give it another go, despite someone diligently downvoting all my related comments.
“here’s all these possible worlds, some being better than others, and only one being ‘real’, and then here’s this evidence I see, which discriminates which possible worlds are probable, and here’s the things I can do that further affect which is the real world, and I want to steer towards the good ones.”
Same here, with a marginally different dictionary. Although you are getting close to a point I’ve been waiting for people to bring up for some time now.
So, what are those possible worlds but models? And isn’t the “real world” just the most accurate model? Properly modeling your actions lets you affect the preferred “world” model’s accuracy, and such. The remaining issue is whether the definition of “good” or “preferred” depends on realist vs instrumentalist outlook, and I don’t see how. Maybe you can clarify.
First, let me apologize pre-emptively if I’m retreading old ground, I haven’t carefully read this whole discussion. Feel free to tell me to go reread the damned thread if I’m doing so. That said… my understanding of your account of existence is something like the following:
A model is a mental construct used (among other things) to map experiences to anticipated experiences. It may do other things along the way, such as represent propositions as beliefs, but it needn’t. Similarly, a model may include various hypothesized entities that represent certain consistent patterns of experience, such as this keyboard I’m typing on, my experiences of which consistently correlate with my experiences of text appearing on my monitor, responses to my text later appearing on my monitor, etc.
On your account, all it means to say “my keyboard exists” is that my experience consistently demonstrates patterns of that sort, and consequently I’m confident of the relevant predictions made by the set of models (M1) that have in the past predicted patterns of that sort, not-so-confident of relevant predictions made by the set of models (M2) that predict contradictory patterns, etc. etc. etc.
We can also say that M1 all share a common property K that allows such predictions. In common language, we are accustomed to referring to K as an “object” which “exists” (specifically, we refer to K as “my keyboard”) which is as good a way of talking as any though sloppy in the way of all natural language.
We can consequently say that M1 all agree on the existence of K, though of course that may well elide over many important differences in the ways that various models in M1 instantiate K.
We can also say that M1 models are more “accurate” than M2 models with respect to those patterns of experience that led us to talk about K in the first place. That is, M1 models predict relevant experience more reliably/precisely/whatever.
And in this way we can gradually converge on a single model (MR1), which includes various objects, and which is more accurate than all the other models we’re aware of. We can call MR1 “the real world,” by which we mean the most accurate model.
Of course, this doesn’t preclude uncovering a new model MR2 tomorrow which is even more accurate, at which point we would call MR2 “the real world”. And MR2 might represent K in a completely different way, such that the real world would now, while still containing the existence of my keyboard, contain it in a completely different way. For example, MR1 might represent K as a collection of atoms, and MR2 might represent K as a set of parameters in a configuration space, and when I transition from MR1 to MR2 the real world goes from my keyboard being a collection of atoms to my keyboard being a set of parameters in a configuration space.
Similarly, it doesn’t preclude our experiences starting to systematically change such that the predictions made by MR1 are no longer reliable, in which case MR1 stops being the most accurate model, and some other model (MR3) is the most accurate model, at which point we would call MR3 “the real world”. For example, MR3 might not contain K at all, and I would suddenly “realize” that there never was a keyboard.
All of which is fine, but the difficulty arises when after identifying MR1 as the real world we make the error of reifying MRn, projecting its patterns onto some kind of presumed “reality” R to which we attribute a kind of pseudo-existence independent of all models. Then we misinterpret the accuracy of a model as referring, not to how well it predicts future experience, but to how well it corresponds to R.
Of course, none of this precludes being mistaken about the real world… that is, I might think that MR1 is the real world, when in fact I just haven’t fully evaluated the predictive value of the various models I’m aware of, and if I were to perform such an evaluation I’d realize that no, actually, MR4 is the real world. And, knowing this, I might have various degrees of confidence in various models, which I can describe as “possible worlds.”
And I might have preferences as to which of those worlds is real. For example, MP1 and MP2 might both be possible worlds, and I am happier in MP1 than MP2, so I prefer MP1 be the real world. Similarly, I might prefer MP1 to MP2 for various other reasons other than happiness.
Which, again, is fine, but again we can make the reification error by assigning to R various attributes which correspond, not only to the real world (that is, the most accurate model), but to the various possible worlds MRx..y. But this isn’t a novel error, it’s just the extension of the original error of reification of the real world onto possible worlds.
That said, talking about it gets extra-confusing now, because there are now several different mistaken ideas about reality floating around… the original “naive realist” mistake of positing R that corresponds to MR, the “multiverse” mistake of positing R that corresponds to MRx..y, etc. When I say to a naive realist that treating R as something that exists outside of a model is just an error, for example, the naive realist might misunderstand me as trying to say something about the multiverse and the relationships between things that “exist in the world” (outside of a model) and “exist in possible worlds” (outside of a model), which in fact has nothing at all to do with my point, which is that the whole idea of existence outside of a model is confused in the first place.
Have I understood your position?
As was the case once or twice before, you have explained what I meant better than I did in my earlier posts. Maybe you should teach your steelmanning skills, or make a post out of it.
The reification error you describe is indeed one of the fallacies a realist is prone to. Pretty benign initially, it eventually grows cancerously into the multitude of MRs whose accuracy is undefined, either by definition (QM interpretations) or through untestable ontologies, like “everything imaginable exists”. This promotion of any M->R, or of a certain set {MP}->R, seems forever meaningful if you fall for it once.
The unaddressed issue is the means of actualizing a specific model (that is, making it the most accurate). After all, if all you manipulate is models, how do you affect your future experiences?
Maybe you should teach your steelmanning skills, or make a post out of it.
I’ve thought about this, but on consideration the only part of it I understand explicitly enough to “teach” is Miller’s Law (the first one), and there’s really not much more to say about it than quoting it and then waiting for people to object. Which most people do, because approaching conversations that way seems to defeat the whole purpose of conversation for most people (convincing other people they’re wrong). My goal in discussions is instead usually to confirm that I understand what they believe in the first place. (Often, once I achieve that, I become convinced that they’re wrong… but rarely do I feel it useful to tell them so.)
The rest of it is just skill at articulating positions with care and precision, and exerting the effort to do so. A lot of people around here are already very good at that, some of them better than me.
The unaddressed issue is the means of actualizing a specific model (that is, making it the most accurate). After all, if all you manipulate is models, how do you affect your future experiences?
Yes. I’m not sure what to say about that on your account, and that was in fact where I was going to go next.
Actually, more generally, I’m not sure what distinguishes experiences we have from those we don’t have in the first place, on your account, even leaving aside how one can alter future experiences.
After all, we’ve said that models map experiences to anticipated experiences, and that models can be compared based on how reliably they do that, so that suggests that the experiences themselves aren’t properties of the individual models (though they can of course be represented by properties of models). But if they aren’t properties of models, well, what are they? On your account, it seems to follow that experiences don’t exist at all, and there simply is no distinction between experiences we have and those we don’t have.
I assume you reject that conclusion, but I’m not sure how. On a naive realist’s view, rejecting this is easy: reality constrains experiences, and if I want to affect future experiences I affect reality. Accurate models are useful for affecting future experiences in specific intentional ways, but not necessary for affecting reality more generally… indeed, systems incapable of constructing models at all are still capable of affecting reality. (For example, a supernova can destroy a planet.)
(On a multiverse realist’s view, this is significantly more complicated, but it seems to ultimately boil down to something similar, where reality constrains experiences and if I want to affect the measure of future experiences, I affect reality.)
Another unaddressed issue derives from your wording: “how do you affect your future experiences?” I may well ask whether there’s anything else I might prefer to affect other than my future experiences (for example, the contents of models, or the future experiences of other agents). But I suspect that’s roughly the same problem for an instrumentalist as it is for a realist… that is, the arguments for and against solipsism, hedonism, etc. are roughly the same, just couched in slightly different forms.
But if they aren’t properties of models, well, what are they? On your account, it seems to follow that experiences don’t exist at all, and there simply is no distinction between experiences we have and those we don’t have.
Somewhere way upstream I said that I postulate experiences (I called them inputs), so they “exist” in this sense. We certainly don’t experience “everything”, so that’s how you tell “between experiences we have and those we don’t have”. I did not postulate, however, that they have an invisible source called reality, pitfalls of assuming which we just discussed. Having written this, I suspect that this is an uncharitable interpretation of your point, i.e. that you mean something else and I’m failing to Millerize it.
So “existence” properly refers to a property of subsets of models (e.g., “my keyboard exists” asserts that the models in M1 contain K), as discussed earlier, and “existence” also properly refers to a property of inputs (e.g., “my experience of my keyboard sitting on my desk exists” and “my experience of my keyboard dancing the Macarena doesn’t exist” are both coherent, if perhaps puzzling, things to say), as discussed here. Yes?
Which is not necessarily to say that “existence” refers to the same property of subsets of models and of inputs. It might, it might not, we haven’t yet encountered grounds to say one way or the other. Yes?
OK. So far, so good.
And, responding to your comment about solipsism elsewhere just to keep the discussion in one place:
Well, to a solipsist, hers is the only mind that exists; to an instrumentalist, as we have agreed, the term “exist” does not have a useful meaning beyond measurability.
Well, I agree that when a realist solipsist says “Mine is the only mind that exists” they are using “exists” in a way that is meaningless to an instrumentalist.
That said, I don’t see what stops an instrumentalist solipsist from saying “Mine is the only mind that exists” while using “exists” in the ways that instrumentalists understand that term to have meaning.
That said, I still don’t quite understand how “exists” applies to minds on your account. You said here that “mind is also a model”, which I understand to mean that minds exist as subsets of models, just like keyboards do.
But you also agreed that a model is a “mental construct”… which I understand to refer to a construct created/maintained by a mind.
The only way I can reconcile these two statements is to conclude either that some minds exist outside of a model (and therefore have a kind of “existence” that is potentially distinct from the existence of models and of inputs, which might be distinct from one another) or that some models aren’t mental constructs.
My reasoning here is similar to how if you said “Red boxes are contained by blue boxes” and “Blue boxes are contained by red boxes” I would conclude that at least one of those statements had an implicit “some but not all” clause prepended to it… I don’t see how “For all X, X is contained by a Y” and “For all Y, Y is contained by an X” can both be true.
Does that make sense? If so, can you clarify which is the case? If not, can you say more about why not?
I don’t see how “For all X, X is contained by a Y” and “For all Y, Y is contained by an X” can both be true [implicitly assuming that X is not the same as Y, I am guessing].
And what do you mean here by “true”, in an instrumental sense? Do you mean the mathematical truth (i.e. a well-formed finite string, given some set of rules), or the measurable truth (i.e. a model giving accurate predictions)? If it’s the latter, how would you test for it?
Beats me.
Just to be clear, are you suggesting that on your account I have no grounds for treating “All red boxes are contained by blue boxes AND all blue boxes are contained by red boxes” differently from “All red boxes are contained by blue boxes AND some blue boxes are contained by red boxes” in the way I discussed?
If you are suggesting that, then I don’t quite know how to proceed. Suggestions welcomed.
If you are not suggesting that, then perhaps it would help to clarify what grounds I have for treating those statements differently, which might more generally clarify how to address logical contradiction in an instrumentalist framework.
Actually, thinking about this a little bit more, a “simpler” question might be whether it’s meaningful on this account to talk about minds existing. I think the answer is again that it isn’t, as I said about experiences above… models are aspects of a mind, and existence is an aspect of a subset of a model; to ask whether a mind exists is a category error.
If that’s the case, the question arises of whether (and how, if so) we can distinguish among logically possible minds, other than by reference to our own.
So perhaps I was too facile when I said above that the arguments for and against solipsism are the same for a realist and an instrumentalist. A realist rejects or embraces solipsism based on their position on the existence and moral value of other minds, but an instrumentalist (I think?) rejects a priori the claim that other minds can meaningfully be said to exist or not exist, so presumably can’t base anything on such (non)existence.
So I’m not sure what an instrumentalist’s argument rejecting solipsism looks like.
models are aspects of a mind, and existence is an aspect of a subset of a model; to ask whether a mind exists is a category error
Sort of, yes. Except mind is also a model.
So I’m not sure what an instrumentalist’s argument rejecting solipsism looks like.
Well, to a solipsist, hers is the only mind that exists; to an instrumentalist, as we have agreed, the term “exist” does not have a useful meaning beyond measurability. For example, the near-solipsist idea of a Boltzmann brain is not an issue for an instrumentalist, since it changes nothing in their ontology. Same deal with dreams, hallucinations and simulation.
In addition, I would really like to address the fact that current models can be used to predict future inputs in areas that are thus far completely unobserved. IIRC, this is how positrons were discovered, for example. If all we have are disconnected inputs, how do we explain the fact that even those inputs which we haven’t even thought of observing thus far still correlate with our models? We would expect to see this if both sets of inputs were contingent upon some shared node higher up in the Bayesian network, but we wouldn’t expect to see this (except by chance, which is infinitesimally low) if the inputs were mutually independent.
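A toy simulation may make the shared-node point concrete (this is my own illustration; the distributions and variable names are invented, not anything from the thread): two input streams that descend from a common hidden cause come out correlated, while streams with no shared parent do not.

```python
# Minimal sketch (not from the original thread): two observed "inputs" that
# share a hidden common cause are correlated; two generated independently are not.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Shared node higher up in the network: a hidden cause C.
C = rng.normal(size=n)
input_a = C + rng.normal(scale=0.5, size=n)   # both inputs depend on C
input_b = C + rng.normal(scale=0.5, size=n)

# Mutually independent inputs, no shared parent.
input_c = rng.normal(size=n)
input_d = rng.normal(size=n)

print(np.corrcoef(input_a, input_b)[0, 1])  # ~0.8: correlated via the common cause
print(np.corrcoef(input_c, input_d)[0, 1])  # ~0.0: no shared node, no correlation
```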
FWIW, my understanding of shminux’s account does not assert that “all we have are disconnected inputs,” as inputs might well be connected.
That said, it doesn’t seem to have anything to say about how inputs can be connected, or indeed about how inputs arise at all, or about what they are inputs into. I’m still trying to wrap my brain around that part.
ETA: oops. I see shminux already replied to this. But my reply is subtly different, so I choose to leave it up.
I don’t see how someone could admit that their inputs are connected in the sense of being caused by a common source that orders them, without implicitly admitting to a real external world.
But I acknowledge that saying inputs are connected in the sense that they reliably recur in particular patterns, and saying that inputs are connected in the sense of being caused by a common source that orders them, are two distinct claims, and one might accept that the former is true (based on observation) without necessarily accepting that the latter is true.
I don’t have a clear sense of what such a one might then say about how inputs come to reliably recur in particular patterns in the first place, but often when I lack a clear sense of how X might come to be in the absence of Y, it’s useful to ask “How, then, does X come to be?” rather than to insist that Y must be present.
One can of course only say that inputs have occurred in patterns up till now. Realists can explain why they would continue to do so on the basis of the Common Source meta-model; anti-realists cannot.
At the risk of repeating myself: I agree that I don’t currently understand how an instrumentalist could conceivably explain how inputs come to reliably recur in particular patterns. You seem content to conclude thereby that they cannot explain such a thing, which may be true. I am not sufficiently confident in the significance of my lack of understanding to conclude that just yet.
This seems to me to be the question of origin “where do the inputs come from?” in yet another disguise. The meta-model is that it is possible to make accurate models, without specifying the general mechanism (e.g. “external reality”) responsible for it. I think this is close to subjective Bayesianism, though I’m not 100% sure.
The meta-model is that it is possible to make accurate models, without specifying the general mechanism (e.g. “external reality”) responsible for it.
I think it’s possible to do so without specifying the mechanism, but that’s not the same thing as saying that no mechanism at all exists. If you are saying that, then you need to explain why all these inputs are correlated with each other, and why our models can (on occasion) correctly predict inputs that have not been observed yet.
Let me set up an analogy. Let’s say you acquire a magically impenetrable box. The box has 10 lights on it, and a big dial-type switch with 10 positions. When you set the switch to position 1, the first light turns on, and the rest of them turn off. When you set it to position 2, the second light turns on, and the rest turn off. When you set it to position 3, the third light turns on, and the rest turn off. These are the only settings you’ve tried so far.
Does it make sense to ask the question, “what will happen when I set the switch to positions 4..10”? If so, can you make a reasonably confident prediction as to what will happen? What would your prediction be?
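For what it’s worth, the confident prediction the analogy is fishing for can be made explicit with a toy Bayesian comparison; the two hypotheses and the 50/50 prior below are assumptions of mine, not part of the analogy.

```python
# Hedged sketch of the dial-and-lights intuition: compare a "light k follows
# switch position k" hypothesis against "each position lights a random lamp",
# given the three observations described above. The priors here are arbitrary.
prior = {"H_match": 0.5, "H_random": 0.5}

# Likelihood of the data (positions 1-3 each lit the matching light).
likelihood = {"H_match": 1.0, "H_random": (1 / 10) ** 3}

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

# Predictive probability that setting the dial to position 4 lights lamp 4.
p_lamp4 = posterior["H_match"] * 1.0 + posterior["H_random"] * (1 / 10)
print(posterior)   # H_match ~ 0.999
print(p_lamp4)     # ~ 0.999
```

Under these assumptions, three matching observations are enough to make “position 4 lights lamp 4” a near-certain prediction, even though position 4 has never been tried.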
The meta-model is that it is possible to make accurate models, without specifying the general mechanism (e.g. “external reality”) responsible for it.
In the sense that it is always possible to leave something just unexplained. But the posit of an external reality of some sort is not explanatorily idle, and not, therefore, ruled out by Occam’s razor. The posit of an external reality of some sort (it doesn’t need to be specific) explains, at the meta-level, the process of model-formulation, prediction, accuracy, etc.
Which is, in fact, the number of posits shminux advocates making, is it not? Adapt your models to be more accurate, sure, but don’t expect that to mean anything more than the model working.
Except I think he’s claimed to value things like “the most accurate model not containing slaves” (say) which implies there’s something special about the correct model beyond mere accuracy.
I suppose they are positing inputs, but they’re arguably not positing models as such—merely using them. Or at any rate, that’s how I’d ironman their position.
If I understand both your and shminux’s comments, this might express the same thing in different terms:
We have experiences (“inputs”.)
We wish to optimize these inputs according to whatever goal structure we have.
In order to do this, we need to construct models to predict how our actions affect future inputs, based on patterns in how inputs have behaved in the past.
Some of these models are more accurate than others. We might call accurate models “real”.
However, the term “real” holds no special ontological value, and they might later prove inaccurate or be replaced by better models.
Thus, we have a perfectly functioning agent with no conception (or need for) a territory—there is only the map and the inputs. Technically, you could say the inputs are the territory, but the metaphor isn’t very useful for such an agent.
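A deliberately crude sketch of such an agent (my own construction, with illustrative names; nothing here is anything shminux has endorsed): it keeps several models, scores each by how well it has predicted past inputs, and calls whichever one is currently winning “real”, without attaching any further ontological weight to that label.

```python
# Sketch of a map-and-inputs-only agent. All names and scoring rules here are
# assumptions for illustration, not a description of anyone's actual position.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Model:
    name: str
    predict: Callable[[str], str]   # maps an action to a predicted input
    score: int = 0                  # how often this model's predictions matched

@dataclass
class Agent:
    models: List[Model]
    history: list = field(default_factory=list)

    def update(self, action: str, observed_input: str) -> None:
        """Credit every model whose prediction matched the observed input."""
        for m in self.models:
            if m.predict(action) == observed_input:
                m.score += 1
        self.history.append((action, observed_input))

    def real_world(self) -> Model:
        """'The real world' is just the most accurate model so far."""
        return max(self.models, key=lambda m: m.score)

# Toy models of the keyboard example from earlier in the thread.
m1 = Model("keyboard exists", lambda a: "text appears" if a == "press key" else "nothing")
m2 = Model("no keyboard", lambda a: "nothing")

agent = Agent([m1, m2])
agent.update("press key", "text appears")   # this input matches m1's prediction only
print(agent.real_world().name)              # -> "keyboard exists"
```

Nothing in the sketch needs a territory; “real” is just the bookkeeping label for the best predictor so far.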
Huh, looks like we are, while not in agreement, at least speaking the same language. Not sure how Dave managed to accomplish this particular near-magical feat.
As before, I mostly attribute it to the usefulness of trying to understand what other people are saying.
I find it’s much more difficult to express my own positions in ways that are easily understood, though. It’s harder to figure out what is salient and where the vastest inferential gulfs are.
You might find it correspondingly useful to try and articulate the realist position as though you were trying to explain it to a fellow instrumentalist who had no experience with realists.
You might find it correspondingly useful to try and articulate the realist position as though you were trying to explain it to a fellow instrumentalist who had no experience with realists.
I actually tried this a few times, even started a post draft titled “explain realism to a baby AI”. In fact, I keep fighting my own realist intuition every time I don the instrumentalist hat. But maybe I am not doing it well enough.
Ah. Yeah, if your intuitions are realist, I expect it suffers from the same problem as expressing my own positions. It may be a useful exercise in making your realist intuitions explicit, though.
Maybe we should organize a discussion where everyone has to take positions other than their own? If this really helps clarity (and I think it does) it could end up producing insights much more difficult (if not actually impossible) to reach with normal discussion.
(Plus it would be good practice at the Ideological Turing Test, generalized empathy skills, avoiding the antipattern of demonizing the other side, and avoiding steelmanning arguments into forms that don’t threaten your own arguments (since they would be threatening the other side’s arguments, as it were.))
Maybe we should organize a discussion where everyone has to take positions other than their own?
It seems to me to be one of the basic exercises in rationality, also known as “Devil’s advocate”. However, Eliezer dislikes it for some reason, probably because he thinks that it’s too easy to do poorly and then dismiss with a metaphorical self-congratulatory pat on one’s own back. Not sure how much of this is taught or practiced at CFAR camps.
Yup. In my experience, though, Devil’s Advocates are usually pitted against people genuinely arguing their cause, not other devil’s advocates.
However, Eliezer dislikes it for some reason, probably because he thinks that it’s too easy to do poorly and then dismiss with a metaphorical self-congratulatory pat on one’s own back.
Yeah, I remember being surprised by that reading the Sequences. He seemed to be describing acting as your own devil’s advocate, though, IIRC.
Well, if any nonrealists want to argue the realist position in response to my articulation of the instrumentalist position, they are certainly welcome to do so, and I can try to continue defending it… though I’m not sure how good a job of it I’ll do.
I was actually thinking of random topics, perhaps ones that are better understood by LW regulars, at least at first. Still …
if any nonrealists want to argue the realist position in response to my articulation of the instrumentalist position, they are certainly welcome to do so
Wait, there are nonrealists other than shminux here?
I think I got a cumulative total of some 100 downvotes on this thread, so somehow I don’t believe that a top-level post would be welcome. However, if TheOtherDave were to write one as a description of an interesting ontology he does not subscribe to, this would probably go over much better. I doubt he would be interested, though.
As it happens, I agree with your position. I was actually thinking of making a post that pinpoints all the important comments here without taking a position, while asking the discussion to continue there. However, making an argumentative post is also possible, although I might not be willing to expend the effort.
Cool. If you are motivated at some point to articulate an anti-realist account of how non-accidental correlations between inputs come to arise (in whatever format you see fit), I’d appreciate that.
As I understand it, the word “how” is used to demand a model for an event. Since I already have models for the correlations of my inputs, I don’t feel the need for further explanation. More concretely, should you ask “How does closing your eyes lead to a blackout of your vision?” I would answer “After I close my eyes, my eyelids block all of the light from getting into my eye.”, and I consider this answer satisfying. Just because I don’t believe in an ontologically fundamental reality doesn’t mean I don’t believe in eyes and eyelids and light.
In M1, vision depends on light, which is blocked by eyelids. Therefore in M1, we predict that closing my eyes leads to a blackout of vision. In M2, vision depends on something else, which is not blocked by eyelids. Therefore in M2, we predict that closing my eyes does not lead to a blackout of vision.
At some later time, an event occurs in M1: specifically, I close my eyelids. At the same time, I have a blackout of vision. This increases my confidence in the predictive power of M1.
So far, so good.
At the same time, an identical event-pair occurs in M2: I close my eyes and my vision blacks out. This decreases my confidence in the predictive power of M2.
If I’ve understood you correctly, both the realist and the instrumentalist account of all of the above is “there are two models, M1 and M2, the same events occur in both, and as a consequence of those events we decide M1 is more accurate than M2.”
The realist account goes on to say “the reason the same events occur in both models is because they are both fed by the same set of externally realized events, which exist outside of either model.” The instrumentalist account, IIUC, says “the reason the same events occur in both models is not worth discussing; they just do.”
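The confidence shift described above is ordinary Bayesian bookkeeping on either account; here is a minimal numerical gloss, where the prior and the likelihoods are placeholder values of mine rather than anything fixed by the thread.

```python
# Confidence in M1 ("vision needs light, eyelids block it") vs. M2 after
# observing that closing my eyes blacks out my vision. Numbers are illustrative.
prior = {"M1": 0.5, "M2": 0.5}
likelihood = {"M1": 0.95, "M2": 0.05}   # P(blackout on eye-close | model)

evidence = sum(prior[m] * likelihood[m] for m in prior)
posterior = {m: prior[m] * likelihood[m] / evidence for m in prior}
print(posterior)   # {'M1': 0.95, 'M2': 0.05} -- confidence moves toward M1
```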
That’s still possible, for convenience purposes, even if shminux is unwilling to describe their beliefs—your beliefs, apparently, I think a lot of people will have some questions to ask you now—in a top-level post.
Ooh, excellent point. I’d do it myself, but unfortunately my reason for suggesting it is that I want to understand your position better—my puny argument would be torn to shreds, I have too many holes in my understanding :(
The actual world is also a possible world. Non-actual possible worlds are only accessible as models. Realists believe they can bring the actual world into line with desired models to some extent.
And isn’t the “real world” just the most accurate model?
Not for realists.
Properly modeling your actions lets you affect the preferred “world” model’s accuracy, and such. The remaining issue is whether the definition of “good” or “preferred” depends on realist vs instrumentalist outlook, and I don’t see how. Maybe you can clarify.
For realists, wireheading isn’t a good aim. For anti-realists, it is the only aim.
For realists, wireheading isn’t a good aim. For anti-realists, it is the only aim.
Realism doesn’t preclude ethical frameworks that endorse wireheading.
I’m less clear about the second part, though.
Rejecting (sufficiently well implemented) wireheading requires valuing things other than one’s own experience. I’m not yet clear on how one goes about valuing things other than one’s own experience in an instrumentalist framework, but then again I’m not sure I could explain to someone who didn’t already understand it how I go about valuing things other than my own experience in a realist framework, either.
but then again I’m not sure I could explain to someone who didn’t already understand it how I go about valuing things other than my own experience in a realist framework, either.
Realism doesn’t preclude ethical frameworks that endorse wireheading
No, but they are a minority interest.
I’m not yet clear on how one goes about valuing things other than one’s own experience in an instrumentalist framework, but then again I’m not sure I could explain to someone who didn’t already understand it how I go about valuing things other than my own experience in a realist framework, either.
If someone accepts that reality exists, you have a head start. Why do anti-realists care about accurate prediction? They don’t think predictive models represent an external reality, and they don’t think accurate models can be used as a basis to change anything external. Either prediction is an end in itself, or it’s for improving inputs.
they don’t think accurate models can be used as a basis to change anything external. Either prediction is an end in itself, or it’s for improving inputs.
My understanding of shminux’s position is that accurate models can be used, somehow, to improve inputs.
I don’t yet understand how that is even in principle possible on his model, though I hope to improve my understanding.
Your last statement shows that you have much to learn from TheOtherDave about the principle of charity. Specifically, don’t assume the other person is stupider than you are without a valid reason. So, if you come up with a trivial objection to their point, consider that they might have come across it before and addressed it in some way. They might still be wrong, but likely not in the obvious ways.
Sorry, just realized I skipped over the first part of your comment.
It happens, but this should not be the initial assumption.
Doesn’t that depend on the prior? I think most holders of certain religious or political beliefs, for instance, do so for trivially wrong reasons*. Perhaps you mean it should not be the default assumption here?
If I answer ‘yes’ to this, then I am confusing the map with the territory, surely? Yes, there may very well be a possible world that’s a perfect match for a given model, but how would I tell it apart from all the near-misses?
The “real world” is a good deal more accurate than the most accurate model of it that we have.
Well, I’ll give it another go, despite someone diligently downvoting all my related comments.
It’s not me, FWIW; I find the discussion interesting.
That said, I’m not sure what methodology you use to determine which actions to take, given your statement that the “real world” is just the most accurate model. If all you cared about was the accuracy of your model, would it not be easier to avoid taking any physical actions, and simply change your model on the fly as it suits you? This way, you could always make your model fit what you observe. Yes, you’d be grossly overfitting the data, but is that even a problem?
I didn’t say it’s all I care about. Given a choice of several models and an ability to make one of them more accurate than the rest, I would likely exercise this choice, depending on my preferences, the effort required and the odds of success, just like your garden variety realist would. As Eliezer used to emphasize, “it all adds up to normality”.
I am guessing that you, TimS and nyan_sandwich all seem to think that my version of instrumentalism is incompatible with having preferences over possible worlds. I have trouble understanding where this twist is coming from.
It’s not that I think that your version of instrumentalism is incompatible with preferences, it’s more like I’m not sure I understand what the word “preferences” even means in your context. You say “possible worlds”, but, as far as I can tell, you mean something like, “possible models that predict future inputs”.
Firstly, I’m not even sure how you account for our actions affecting these inputs, especially given that you do not believe that various sets of inputs are connected to each other in any way; and without actions, preferences are not terribly relevant. Secondly, you said that a “preference” for you means something like, “a desire to make one model more accurate than the rest”, but would it not be easier to simply instantiate a model that fits the inputs? Such a model would be 100% accurate, wouldn’t it?
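The “just instantiate a model that fits the inputs” worry can be made concrete with a toy example of my own (not something stated in the thread): a model that merely memorizes past inputs is 100% accurate in hindsight, yet has nothing to say about inputs it hasn’t seen, which is what the accuracy criterion is supposed to buy.

```python
# Two "models" of the same past inputs: one memorizes, one posits structure.
past = [(1, 2), (2, 4), (3, 6)]              # (observation, next input) pairs

memorize = dict(past)                        # "overfit" model: memorize the past
rule = lambda x: 2 * x                       # structured model: next input doubles the observation

print(all(memorize[x] == y for x, y in past))    # True: perfect fit in hindsight
print(memorize.get(4), rule(4))                  # None vs 8 on an unseen observation
```

This is presumably why accuracy has to be judged on future inputs rather than on a retrofit.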
Your having a preference for worlds without, e.g., slavery can’t possibly translate into something like “I want to change the world external to me so that it no longer contains slaves”. I have trouble understanding what it would translate to. You could adopt models where things you don’t like don’t exist, but they wouldn’t be accurate.
Your having a preference for worlds without, e.g., slavery can’t possibly translate into something like “I want to change the world external to me so that it no longer contains slaves”.
No, but it translates to its equivalent:
I prefer models which describe a society without slavery to be accurate (i.e. confirmed in a later testing).
I prefer models which describe a society without slavery to be accurate (i.e. confirmed in a later testing).
So you’re saying you have a preference over the map, as opposed to the territory (your experiences, in this case)?
That sounds subject to some standard pitfalls, offhand, where you try to fool yourself into choosing the “no-slaves” map instead of trying to optimize, well, reality, such as the slaves—perhaps with an experience machine, through simple self-deception, or maybe some sort of exploit involving Occam’s Razor.
I agree that self-deception is a “real” possibility. Then again, it is also a possibility for a realist. Or a dualist. In fact, confusing map and territory is one of the most common pitfalls, as you well know. Would it be more likely for an instrumentalist to become instrumenta-lost? I don’t see why it would be the case. For example, from my point of view, you arbitrarily chose a comforting Christian map (is it an inverse of “some sort of exploit involving Occam’s Razor”?) instead of a cold hard uncaring one, even though you seem to be preferring realism over instrumentalism.
Ah, no, sorry, I meant that those options would satisfy your stated preferences, not that they were pitfalls on the road to it. I’m suggesting that since you don’t want to fall into those pitfalls, those aren’t actually your preferences, whether because you’ve made a mistake or I have (please tell me if I have.)
I propose a WW2 mechanical aiming computer as an example of a model. It is built from gears that can be easily and conveniently manufactured, and there’s very little doubt that the universe does not use anything even remotely similar to produce the movement of the projectile through the air, even if we assume that such a question is meaningful.
A case can be made that physics is not that much different from a WW2 aiming computer (built out of the mathematics that is available and can be conveniently used). And with regards to MWI, a case can be made that it is similar to removing the only ratchet in the mechanical computer and proclaiming the rest of the gears the reality, because somehow “from the inside” it would allegedly still feel the same, even though the mechanical computer, without this ratchet, doesn’t even work any more for predicting anything.
Of course, it is not clear how close physics is to a mechanical aiming computer in terms of how the internals can correspond to the real world.
So, what are those possible worlds but models? And isn’t the “real world” just the most accurate model? Properly modeling your actions lets you affect the preferred “world” model’s accuracy, and such. The remaining issue is whether the definition of “good” or “preferred” depends on realist vs instrumentalist outlook, and I don’t see how. Maybe you can clarify.
Interesting. So we prefer that some models or others be accurate, and take actions that we expect to make that happen, in our current bag of models.
Ok I think I get it. I was confused about what the referent of your preferences would be if you did not have your models referring to something. I see that you have made the accuracy of various models the referent of preferences. This seems reasonable enough.
I can see now that I’m confused about this stuff a bit more than I thought I was. Will have to think about it a bit more.
It works fine—as long as you only care about optimizing inputs, in which case I invite you to go play in the holodeck while the rest of us optimize the real world.
If you can’t find a holodeck, I sure hope you don’t accidentally sacrifice your life to save somebody or further some noble cause. After all, you won’t be there to experience the resulting inputs, so what’s the point?
It’s not a utility function over inputs, it’s over the accuracy of models.
If I were a shminux-style rationalist, I would not choose to go to the holodeck because that does not actually make my current preferred models of the world more accurate. It makes the situation worse, actually, because in the me-in-holodeck model, I get misled and can’t affect the stuff outside the holodeck.
Just because someone frames things differently doesn’t mean they have to make the obvious mistakes and start killing babies.
For example, I could do what you just did to “maximize expected utility over possible worlds” by choosing to modify my brain to have erroneously high expected utility. It’s maximized now right? See the problem with this argument?
It all adds up to normality, which probably means we are confused and there is an even simpler underlying model of the situation.
It’s not a utility function over inputs, it’s over the accuracy of models.
Affecting the accuracy of a specified model—a term defined as “how well it predicts future inputs”—is a subset of optimizing future inputs.
If I were a shminux-style rationalist, I would not choose to go to the holodeck because that does not actually make my current preferred models of the world more accurate. It makes the situation worse, actually, because in the me-in-holodeck model, I get misled and can’t affect the stuff outside the holodeck.
You’re still thinking like a realist. A holodeck doesn’t prevent you from observing the real world—there is no “real world”. It prevents you testing how well certain models predict experiences when you take the action “leave the holodeck”, unless of course you leave the holodeck—it’s an opportunity cost and nothing more, and a minor one at that, since information holds only instrumental value.
Just because someone frames things differently doesn’t mean they have to make the obvious mistakes and start killing babies.
Pardon?
For example, I could do what you just did to “maximize expected utility over possible worlds” by choosing to modify my brain to have erroneously high expected utility. It’s maximized now right? See the problem with this argument?
Except that I (think that I) get my utility over the world, not over my experiences. Same reason I don’t win the lottery with quantum suicide.
It all adds up to normality
You know, not every belief adds up to normality—just the true ones. Imagine someone arguing you had misinterpreted happiness-maximization because “it all adds up to normality”.
This question is equally meaningful in both cases, and equally answerable. And the answer happens to be the same, too.
Your argument reminds me of “Obviously morality comes from God, if you don’t believe in God, what’s to stop you from killing people if you can get away with it?” It is probably an uncharitable reading of it, though.
The “What I care about” thingie is currently one of those inputs. Like, what compels me to reply to your comment? It can partly be explained by the existing models in psychology, sociology and other natural sciences, and in part is still a mystery. Some day it will hopefully be able to analyze and simulate mind and brain better, and explain how this desire arises, and why one shminux decides to reply to and not ignore your comment. Maybe I feel good when smart people publicly agree with me. Maybe I’m satisfying some other preference I’m not aware of.
It’s not an argument; it’s an honest question. I’m sympathetic to instrumentalism, I just want to know how you frame the whole preferences issue, because I can’t figure out how to do it. It probably is like the God is Morality thing, but I can’t just accidentally find my way out of such a pickle without some help.
I frame it as “here’s all these possible worlds, some being better than others, and only one being ‘real’, and then here’s this evidence I see, which discriminates which possible worlds are probable, and here’s the things I can do that that further affect which is the real world, and I want to steer towards the good ones.” As you know, this makes a lot of assumptions and is based pretty directly on the fact that that’s how human imagination works.
If there is a better way to do it, which you seem to think that there is, I’m interested. I don’t understand your answer above, either.
Well, I’ll give it another go, despite someone diligently downvoting all my related comments.
Same here, with a marginally different dictionary. Although you are getting close to a point I’ve been waiting for people to bring up for some time now.
So, what are those possible worlds but models? And isn’t the “real world” just the most accurate model? Properly modeling your actions lets you affect the preferred “world” model’s accuracy, and such. The remaining issue is whether the definition of “good” or “preferred” depends on realist vs instrumentalist outlook, and I don’t see how. Maybe you can clarify.
Hrm.
First, let me apologize pre-emptively if I’m retreading old ground, I haven’t carefully read this whole discussion. Feel free to tell me to go reread the damned thread if I’m doing so. That said… my understanding of your account of existence is something like the following:
A model is a mental construct used (among other things) to map experiences to anticipated experiences. It may do other things along the way, such as represent propositions as beliefs, but it needn’t. Similarly, a model may include various hypothesized entities that represent certain consistent patterns of experience, such as this keyboard I’m typing on, my experiences of which consistently correlate with my experiences of text appearing on my monitor, responses to my text later appearing on my monitor, etc.
On your account, all it means to say “my keyboard exists” is that my experience consistently demonstrates patterns of that sort, and consequently I’m confident of the relevant predictions made by the set of models (M1) that have in the past predicted patterns of that sort, not-so-confident of relevant predictions made by the set of models (M2) that predict contradictory patterns, etc. etc. etc.
We can also say that M1 all share a common property K that allows such predictions. In common language, we are accustomed to referring to K as an “object” which “exists” (specifically, we refer to K as “my keyboard”) which is as good a way of talking as any though sloppy in the way of all natural language.
We can consequently say that M1 all agree on the existence of K, though of course that may well elide over many important differences in the ways that various models in M1 instantiate K.
We can also say that M1 models are more “accurate” than M2 models with respect to those patterns of experience that led us to talk about K in the first place. That is, M1 models predict relevant experience more reliably/precisely/whatever.
And in this way we can gradually converge on a single model (MR1), which includes various objects, and which is more accurate than all the other models we’re aware of. We can call MR1 “the real world,” by which we mean the most accurate model.
Of course, this doesn’t preclude uncovering a new model MR2 tomorrow which is even more accurate, at which point we would call MR2 “the real world”. And MR2 might represent K in a completely different way, such that the real world would now, while still containing the existence of my keyboard, contain it in a completely different way. For example, MR1 might represent K as a collection of atoms, and MR2 might represent K as a set of parameters in a configuration space, and when I transition from MR1 to MR2 the real world goes from my keyboard being a collection of atoms to my keyboard being a set of parameters in a configuration space.
Similarly, it doesn’t preclude our experiences starting to systematically change such that the predictions made by MR1 are no longer reliable, in which case MR stops being the most accurate model, and some other model (MR3) is the most accurate model, at which point we would call MR3 “the real world”. For example, MR3 might not contain K at all, and I would suddenly “realize” that there never was a keyboard.
All of which is fine, but the difficulty arises when after identifying MR1 as the real world we make the error of reifying MRn, projecting its patterns onto some kind of presumed “reality” R to which we attribute a kind of pseudo-existence independent of all models. Then we misinterpret the accuracy of a model as referring, not to how well it predicts future experience, but to how well it corresponds to R.
Of course, none of this precludes being mistaken about the real world… that is, I might think that MR1 is the real world, when in fact I just haven’t fully evaluated the predictive value of the various models I’m aware of, and if I were to perform such an evaluation I’d realize that no, actually, MR4 is the real world. And, knowing this, I might have various degrees of confidence in various models, which I can describe as “possible worlds.”
And I might have preferences as to which of those worlds is real. For example, MP1 and MP2 might both be possible worlds, and I am happier in MP1 than MP2, so I prefer MP1 be the real world. Similarly, I might prefer MP1 to MP2 for various other reasons other than happiness.
Which, again, is fine, but again we can make the reification error by assigning to R various attributes which correspond, not only to the real world (that is, the most accurate model), but to the various possible worlds MRx..y. But this isn’t a novel error, it’s just the extension of the original error of reification of the real world onto possible worlds.
That said, talking about it gets extra-confusing now, because there’s now several different mistaken ideas about reality floating around… the original “naive realist” mistake of positing R that corresponds to MR, the “multiverse” mistake of positing R that corresponds to MRx..y, etc. When I say to a naive realist that treating R as something that exists outside of a model is just an error, for example, the naive realist might misunderstand me as trying to say something about the multiverse and the relationships between things that “exist in the world” (outside of a model) and “exist in possible worlds” (outside of a model), which in fact has nothing at all to do with my point, which is that the whole idea of existence outside of a model is confused in the first place.
Have I understood your position?
As was the case once or twice before, you have explained what I meant better than I did in my earlier posts. Maybe you should teach your steelmanning skills, or make a post out of it.
The reification error you describe is indeed one of the fallacies a realist is prone to. Pretty benign initially, it eventually grows cancerously into the multitude of MRs whose accuracy is undefined, either by definition (QM interpretations) or through untestable ontologies, like “everything imaginable exists”. This promoting any M->R or a certain set {MP}->R seems forever meaningful if you fall for it once.
The unaddressed issue is the means of actualizing a specific model (that is, making it the most accurate). After all, if all you manipulate is models, how do you affect your future experiences?
I’ve thought about this, but on consideration the only part of it I understand explicitly enough to “teach” is Miller’s Law (the first one), and there’s really not much more to say about it than quoting it and then waiting for people to object. Which most people do, because approaching conversations that way seems to defeat the whole purpose of conversation for most people (convincing other people they’re wrong). My goal in discussions is instead usually to confirm that I understand what they believe in the first place. (Often, once I achieve that, I become convinced that they’re wrong… but rarely do I feel it useful to tell them so.)
The rest of it is just skill at articulating positions with care and precision, and exerting the effort to do so. A lot of people around here are already very good at that, some of them better than me.
Yes. I’m not sure what to say about that on your account, and that was in fact where I was going to go next.
Actually, more generally, I’m not sure what distinguishes experiences we have from those we don’t have in the first place, on your account, even leaving aside how one can alter future experiences.
After all, we’ve said that models map experiences to anticipated experiences, and that models can be compared based on how reliably they do that, so that suggests that the experiences themselves aren’t properties of the individual models (though they can of course be represented by properties of models). But if they aren’t properties of models, well, what are they? On your account, it seems to follow that experiences don’t exist at all, and there simply is no distinction between experiences we have and those we don’t have.
I assume you reject that conclusion, but I’m not sure how. On a naive realist’s view, rejecting this is easy: reality constrains experiences, and if I want to affect future experiences I affect reality. Accurate models are useful for affecting future experiences in specific intentional ways, but not necessary for affecting reality more generally… indeed, systems incapable of constructing models at all are still capable of affecting reality. (For example, a supernova can destroy a planet.)
(On a multiverse realist’s view, this is significantly more complicated, but it seems to ultimately boil down to something similar, where reality constrains experiences and if I want to affect the measure of future experiences, I affect reality.)
Another unaddressed issue derives from your wording: “how do you affect your future experiences?” I may well ask whether there’s anything else I might prefer to affect other than my future experiences (for example, the contents of models, or the future experiences of other agents). But I suspect that’s roughly the same problem for an instrumentalist as it is for a realist… that is, the arguments for and against solipsism, hedonism, etc. are roughly the same, just couched in slightly different forms.
Somewhere way upstream I said that I postulate experiences (I called them inputs), so they “exist” in this sense. We certainly don’t experience “everything”, so that’s how you tell “between experiences we have and those we don’t have”. I did not postulate, however, that they have an invisible source called reality, pitfalls of assuming which we just discussed. Having written this, I suspect that this is an uncharitable interpretation of your point, i.e. that you mean something else and I’m failing to Millerize it.
OK.
So “existence” properly refers to a property of subsets of models (e.g., “my keyboard exists” asserts that M1 contain K), as discussed earlier, and “existence” also properly refer to a property of inputs (e.g., “my experience of my keyboard sitting on my desk exists” and “my experience of my keyboard dancing the Macarena doesn’t exist” are both coherent, if perhaps puzzling, things to say), as discussed here.
Yes?
Which is not necessarily to say that “existence” refers to the same property of subsets of models and of inputs. It might, it might not, we haven’t yet encountered grounds to say one way or the other.
Yes?
OK. So far, so good.
And, responding to your comment about solipsism elsewhere just to keep the discussion in one place:
Well, I agree that when a realist solipsist says “Mine is the only mind that exists” they are using “exists” in a way that is meaningless to an instrumentalist.
That said, I don’t see what stops an instrumentalist solipsist from saying “Mine is the only mind that exists” while using “exists” in the ways that instrumentalists understand that term to have meaning.
That said, I still don’t quite understand how “exists” applies to minds on your account. You said here that “mind is also a model”, which I understand to mean that minds exist as subsets of models, just like keyboards do.
But you also agreed that a model is a “mental construct”… which I understand to refer to a construct created/maintained by a mind.
The only way I can reconcile these two statements is to conclude either that some minds exist outside of a model (and therefore have a kind of “existence” that is potentially distinct from the existence of models and of inputs, which might be distinct from one another) or that some models aren’t mental constructs.
My reasoning here is similar to how if you said “Red boxes are contained by blue boxes” and “Blue boxes are contained by red boxes” I would conclude that at least one of those statements had an implicit “some but not all” clause prepended to it… I don’t see how “For all X, X is contained by a Y” and “For all Y, Y is contained by an X” can both be true.
Does that make sense?
If so, can you clarify which is the case?
If not, can you say more about why not?
And what do you mean here by “true”, in an instrumental sense? Do you mean the mathematical truth (i.e. a well-formed finite string, given some set of rules), or the measurable truth (i.e. a model giving accurate predictions)? If it’s the latter, how would you test for it?
Beats me.
Just to be clear, are you suggesting that on your account I have no grounds for treating “All red boxes are contained by blue boxes AND all blue boxes are contained by red boxes” differently from “All red boxes are contained by blue boxes AND some blue boxes are contained by red boxes” in the way I discussed?
If you are suggesting that, then I don’t quite know how to proceed. Suggestions welcomed.
If you are not suggesting that, then perhaps it would help to clarify what grounds I have for treating those statements differently, which might more generally clarify how to address logical contradiction in an instrumentalist framework.
Actually, thinking about this a little bit more, a “simpler” question might be whether it’s meaningful on this account to talk about minds existing. I think the answer is again that it isn’t, as I said about experiences above… models are aspects of a mind, and existence is an aspect of a subset of a model; to ask whether a mind exists is a category error.
If that’s the case, the question arises of whether (and how, if so) we can distinguish among logically possible minds, other than by reference to our own.
So perhaps I was too facile when I said above that the arguments for and against solipsism are the same for a realist and an instrumentalist. A realist rejects or embraces solipsism based on their position on the existence and moral value of other minds, but an instrumentalist (I think?) rejects a priori the claim that other minds can meaningfully be said to exist or not exist, so presumably can’t base anything on such (non)existence.
So I’m not sure what an instrumentalist’s argument rejecting solipsism looks like.
Sort of, yes. Except mind is also a model.
Well, to a solipsist hers is the only mind that exists; to an instrumentalist, as we have agreed, the term “exists” does not have a useful meaning beyond measurability. For example, the near-solipsist idea of a Boltzmann brain is not an issue for an instrumentalist, since it changes nothing in their ontology. Same deal with dreams, hallucinations and simulations.
In addition, I would really like to address the fact that current models can be used to predict future inputs in areas that are thus far completely unobserved; IIRC, this is how positrons were discovered, for example. If all we have are disconnected inputs, how do we explain the fact that even inputs we haven’t yet thought of observing still correlate with our models? We would expect to see this if both sets of inputs were contingent on some shared node higher up in the Bayesian network, but we wouldn’t expect to see it (except by chance, which is infinitesimally unlikely) if the inputs were mutually independent.
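(A toy illustration of the common-cause point, with made-up numbers and hypothetical variable names, not anyone’s actual model: two observation streams that share a latent parent node come out correlated, while two streams with no shared parent do not.)

```python
import random

random.seed(0)

def sample_common_cause(n):
    """Two observation streams that both depend on a shared latent variable."""
    xs, ys = [], []
    for _ in range(n):
        latent = random.gauss(0, 1)              # hypothetical shared parent node
        xs.append(latent + random.gauss(0, 0.3)) # each stream = parent + its own noise
        ys.append(latent + random.gauss(0, 0.3))
    return xs, ys

def sample_independent(n):
    """Two observation streams with no shared parent."""
    return ([random.gauss(0, 1) for _ in range(n)],
            [random.gauss(0, 1) for _ in range(n)])

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

print(correlation(*sample_common_cause(10_000)))  # ~0.9: strongly correlated
print(correlation(*sample_independent(10_000)))   # ~0: no systematic relation
```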
FWIW, my understanding of shminux’s account does not assert that “all we have are disconnected inputs,” as inputs might well be connected.
That said, it doesn’t seem to have anything to say about how inputs can be connected, or indeed about how inputs arise at all, or about what they are inputs into. I’m still trying to wrap my brain around that part.
ETA: oops. I see shminux already replied to this. But my reply is subtly different, so I choose to leave it up.
I don’t see how someone could admit that their inputs are connected in the sense of being caused by a common source that orders them without implicitly admitting to a real external world.
Nor do I.
But I acknowledge that saying inputs are connected in the sense that they reliably recur in particular patterns, and saying that inputs are connected in the sense of being caused by a common source that orders them, are two distinct claims, and one might accept that the former is true (based on observation) without necessarily accepting that the latter is true.
I don’t have a clear sense of what such a one might then say about how inputs come to reliably recur in particular patterns in the first place, but often when I lack a clear sense of how X might come to be in the absence of Y, it’s useful to ask “How, then, does X come to be?” rather than to insist that Y must be present.
One can of course only say that inputs have occurred in patterns up till now. Realists can explain why they would continue to do so on the basis of the Common Source meta-model; anti-realists cannot.
At the risk of repeating myself: I agree that I don’t currently understand how an instrumentalist could conceivably explain how inputs come to reliably recur in particular patterns. You seem content to conclude thereby that they cannot explain such a thing, which may be true. I am not sufficiently confident in the significance of my lack of understanding to conclude that just yet.
I.e., realism explains how you can predict at all.
This seems to me to be the question of origin “where do the inputs come from?” in yet another disguise. The meta-model is that it is possible to make accurate models, without specifying the general mechanism (e.g. “external reality”) responsible for it. I think this is close to subjective Bayesianism, though I’m not 100% sure.
I think it’s possible to do so without specifying the mechanism, but that’s not the same thing as saying that no mechanism at all exists. If you are saying that, then you need to explain why all these inputs are correlated with each other, and why our models can (on occasion) correctly predict inputs that have not been observed yet.
Let me set up an analogy. Let’s say you acquire a magically impenetrable box. The box has 10 lights on it, and a big dial-type switch with 10 positions. When you set the switch to position 1, the first light turns on, and the rest of them turn off. When you set it to position 2, the second light turns on, and the rest turn off. When you set it to position 3, the third light turns on, and the rest turn off. These are the only settings you’ve tried so far.
Does it make sense to ask the question, “what will happen when I set the switch to positions 4 through 10”? If so, can you make a reasonably confident prediction as to what will happen? What would your prediction be?
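(One hedged way to formalize that prediction, assuming just two toy hypotheses and arbitrary priors; all numbers here are invented for illustration.)

```python
# Prior: give the "switch position k lights lamp k" hypothesis a modest prior,
# and the "each setting lights a uniformly random lamp" hypothesis the rest.
p_pattern = 0.1
p_random = 0.9

# Likelihood of the three observations (positions 1-3 each lit the matching lamp):
like_pattern = 1.0            # the pattern hypothesis predicts this with certainty
like_random = (1 / 10) ** 3   # the random hypothesis assigns 1/10 per observation

# Bayes' rule
posterior_pattern = (p_pattern * like_pattern) / (
    p_pattern * like_pattern + p_random * like_random)

print(f"P(pattern | data) = {posterior_pattern:.3f}")   # ~0.991

# Prediction for position 4: mix the two hypotheses' predictions by posterior weight.
p_lamp4 = posterior_pattern * 1.0 + (1 - posterior_pattern) * 0.1
print(f"P(lamp 4 lights at position 4) = {p_lamp4:.3f}")  # ~0.992
```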
In the sense that it is always possible to leave something just unexplained. But the posit of an external reality of some sort is not explanatorily idle, and not, therefore, ruled out by Occam’s razor. The posit of an external reality of some sort (it doesn’t need to be specific) explains, at the meta-level, the process of model-formulation, prediction, accuracy, etc.
Fixed that for you.
I suppose shminux would claim that, explanatory or not, it complicates the model and thus makes it more costly, computationally speaking.
But that’s a terrible argument. If you can’t justify a posit by the explanatory work it does, then the optimum number of posits to make is zero.
Which is, in fact, the number of posits shminux advocates making, is it not? Adapt your models to be more accurate, sure, but don’t expect that to mean anything more than the model working.
Except I think he’s claimed to value things like “the most accurate model not containing slaves” (say), which implies there’s something special about the correct model beyond mere accuracy.
Shminux seems to be positing inputs and models at the least.
I think you quoted the wrong thing there, BTW.
I suppose they are positing inputs, but they’re arguably not positing models as such—merely using them. Or at any rate, that’s how I’d ironman their position.
And inverted stupidity is..?
If I understand both your and shminux’s comments, this might express the same thing in different terms:
We have experiences (“inputs”.)
We wish to optimize these inputs according to whatever goal structure.
In order to do this, we need to construct models to predict how our actions affect future inputs, based on patterns in how inputs have behaved in the past.
Some of these models are more accurate than others. We might call accurate models “real”.
However, the term “real” holds no special ontological value, and they might later prove inaccurate or be replaced by better models.
Thus, we have a perfectly functioning agent with no conception of (or need for) a territory—there is only the map and the inputs. Technically, you could say the inputs are the territory, but the metaphor isn’t very useful for such an agent. (A rough sketch of such an agent follows.)
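(This is a minimal sketch only, assuming “accuracy” just means low prediction error on past inputs; every name and number is hypothetical. Note that nothing in it refers to a territory: only candidate models, inputs, and a preference over inputs.)

```python
from typing import Callable, Dict, List

Action = str
Input = float

class InstrumentalAgent:
    """Toy agent that keeps models of input patterns and acts on the most accurate one."""

    def __init__(self, models: Dict[str, Callable[[Action], Input]],
                 utility: Callable[[Input], float]):
        self.models = models                       # candidate models (hypothetical)
        self.utility = utility                     # preference over inputs
        self.errors = {name: 0.0 for name in models}

    def best_model(self) -> str:
        # "Most accurate" = smallest accumulated prediction error so far.
        return min(self.errors, key=self.errors.get)

    def choose(self, actions: List[Action]) -> Action:
        model = self.models[self.best_model()]
        # Pick the action whose predicted input the agent prefers most.
        return max(actions, key=lambda a: self.utility(model(a)))

    def observe(self, action: Action, actual: Input) -> None:
        # Update every model's accuracy score against the new input.
        for name, model in self.models.items():
            self.errors[name] += (model(action) - actual) ** 2

# Hypothetical usage: two candidate models of how an action maps to an input.
agent = InstrumentalAgent(
    models={"doubling": lambda a: 2.0 if a == "press" else 0.0,
            "constant": lambda a: 1.0},
    utility=lambda x: x)             # prefer larger inputs
agent.observe("press", 2.0)          # "doubling" predicted better; its error stays lower
print(agent.choose(["press", "wait"]))  # acts on the currently most accurate model
```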
Huh, looks like we are, while not in agreement, at least speaking the same language. Not sure how Dave managed to accomplish this particular near-magical feat.
As before, I mostly attribute it to the usefulness of trying to understand what other people are saying.
I find it’s much more difficult to express my own positions in ways that are easily understood, though. It’s harder to figure out what is salient and where the vastest inferential gulfs are.
You might find it correspondingly useful to try and articulate the realist position as though you were trying to explain it to a fellow instrumentalist who had no experience with realists.
I actually tried this a few times, even started a post draft titled “explain realism to a baby AI”. In fact, I keep fighting my own realist intuition every time I don the instrumentalist hat. But maybe I am not doing it well enough.
Ah. Yeah, if your intuitions are realist, I expect it suffers from the same problem as expressing my own positions. It may be a useful exercise in making your realist intuitions explicit, though.
You are right. I will give it a go. Just because it’s obvious doesn’t mean it should not be explicit.
Maybe we should organize a discussion where everyone has to take positions other than their own? If this really helps clarity (and I think it does) it could end up producing insights much more difficult (if not actually impossible) to reach with normal discussion.
(Plus it would be good practice at the Ideological Turing Test, generalized empathy skills, avoiding the antipattern of demonizing the other side, and avoiding steelmanning arguments into forms that don’t threaten your own arguments (since they would be threatening the other side’s arguments, as it were).)
It seems to me to be one of the basic exercises in rationality, also known as “Devil’s advocate”. However, Eliezer dislikes it for some reason, probably because he thinks that it’s too easy to do poorly and then dismiss with a metaphorical self-congratulatory pat on one’s own back. Not sure how much of this is taught or practiced at CFAR camps.
Yup. In my experience, though, Devil’s Advocates are usually pitted against people genuinely arguing their cause, not other devil’s advocates.
Yeah, I remember being surprised by that when reading the Sequences. He seemed to be describing acting as your own devil’s advocate, though, IIRC.
Well, if any nonrealists want to argue the realist position in response to my articulation of the instrumentalist position, they are certainly welcome to do so, and I can try to continue defending it… though I’m not sure how good a job of it I’ll do.
I was actually thinking of random topics, perhaps ones that are better understood by LW regulars, at least at first. Still …
Wait, there are nonrealists other than shminux here?
Beats me.
Actually, that’s just the model I was already using. I noticed it was shorter than Dave’s, so I figured it might be useful.
I suggest we move the discussion to a top-level discussion thread. The comment tree here is huge and hard to navigate.
If shminux could write an actual post on his beliefs, that might help a great deal, actually.
I think I got a cumulative total of some 100 downvotes on this thread, so somehow I don’t believe that a top-level post would be welcome. However, if TheOtherDave were to write one as a description of an interesting ontology he does not subscribe to, this would probably go over much better. I doubt he would be interested, though.
As it happens, I agree with your position. I was actually thinking of making a post that points to all the important comments here without taking a position, while asking the discussion to continue there. However, making an argumentative post is also possible, although I might not be willing to expend the effort.
Cool.
If you are motivated at some point to articulate an anti-realist account of how non-accidental correlations between inputs come to arise (in whatever format you see fit), I’d appreciate that.
As I understand it, the word “how” is used to demand a model for an event. Since I already have models for the correlations of my inputs, I don’t feel the need for further explanation. More concretely, should you ask “How does closing your eyes lead to a blackout of your vision?”, I would answer “After I close my eyes, my eyelids block all of the light from getting into my eyes”, and I consider this answer satisfying. Just because I don’t believe in an ontologically fundamental reality doesn’t mean I don’t believe in eyes and eyelids and light.
OK. So, say I have two models, M1 and M2.
In M1, vision depends on light, which is blocked by eyelids. Therefore in M1, we predict that closing my eyes leads to a blackout of vision. In M2, vision depends on something else, which is not blocked by eyelids. Therefore in M2, we predict that closing my eyes does not lead to a blackout of vision.
At some later time, an event occurs in M1: specifically, I close my eyelids. At the same time, I have a blackout of vision. This increases my confidence in the predictive power of M1.
So far, so good.
At the same time, an identical event-pair occurs in M2: I close my eyes and my vision blacks out. This decreases my confidence in the predictive power of M2.
If I’ve understood you correctly, both the realist and the instrumentalist account of all of the above is “there are two models, M1 and M2, the same events occur in both, and as a consequence of those events we decide M1 is more accurate than M2.”
The realist account goes on to say “the reason the same events occur in both models is because they are both fed by the same set of externally realized events, which exist outside of either model.” The instrumentalist account, IIUC, says “the reason the same events occur in both models is not worth discussing; they just do.”
Is that right?
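(For concreteness, a minimal sketch of the update both accounts seem to agree on, assuming equal priors and invented likelihoods.)

```python
# Two models of what happens when the eyelids close, starting from equal priors.
p_m1, p_m2 = 0.5, 0.5

# Hypothetical likelihoods of the observed event-pair (eyes closed, vision blacked out):
# M1 (vision needs light, eyelids block it) predicts the blackout strongly;
# M2 (vision depends on something eyelids don't block) predicts it weakly.
like_m1 = 0.99
like_m2 = 0.05

evidence = p_m1 * like_m1 + p_m2 * like_m2
p_m1, p_m2 = p_m1 * like_m1 / evidence, p_m2 * like_m2 / evidence

print(f"P(M1 | blackout) = {p_m1:.2f}")   # ~0.95: confidence in M1 rises
print(f"P(M2 | blackout) = {p_m2:.2f}")   # ~0.05: confidence in M2 falls
```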
That’s still possible, for convenience purposes, even if shminux is unwilling to describe their beliefs—your beliefs, apparently, I think a lot of people will have some questions to ask you now—in a top-level post.
Ooh, excellent point. I’d do it myself, but unfortunately my reason for suggesting it is that I want to understand your position better—my puny argument would be torn to shreds, I have too many holes in my understanding :(
The actual world is also a possible world. Non-actual possible worlds are only accessible as models. Realists believe they can bring the actual world into line with desired models to some extent.
Not for realists.
For a realist, wireheading isn’t a good aim. For anti-realists, it is the only aim.
Realism doesn’t preclude ethical frameworks that endorse wireheading.
I’m less clear about the second part, though.
Rejecting (sufficiently well implemented) wireheading requires valuing things other than one’s own experience. I’m not yet clear on how one goes about valuing things other than one’s own experience in an instrumentalist framework, but then again I’m not sure I could explain to someone who didn’t already understand it how I go about valuing things other than my own experience in a realist framework, either.
See The Domain of Your Utility Function.
No, but they are a minority interest.
If someone accepts that reality exists, you have a head start. Why do anti-realists care about accurate prediction? They don’t think predictive models represent an external reality, and they don’t think accurate models can be used as a basis to change anything external. Either prediction is an end in itself, or it’s for improving inputs.
My understanding of shminux’s position is that accurate models can be used, somehow, to improve inputs.
I don’t yet understand how that is even in principle possible on his model, though I hope to improve my understanding.
Your last statement shows that you have much to learn from TheOtherDave about the principle of charity. Specifically, don’t assume the other person is stupider than you are without a valid reason. So, if you come up with a trivial objection to their point, consider that they might have come across it before and addressed it in some way. They might still be wrong, but likely not in the obvious ways.
So where did you address it?
The trouble, of course, is that sometimes people really are wrong in “obvious” ways. Probably not high-status LWers, I guess.
It happens, but this should not be the initial assumption. And I’m not sure who you mean by “high-status LWers”.
Sorry, just realized I skipped over the first part of your comment.
Doesn’t that depend on the prior? I think most holders of certain religious or political beliefs, for instance, do so for trivially wrong reasons*. Perhaps you mean it should not be the default assumption here?
*Most conspiracy theories, for example.
I was referring to you. PrawnOfFate should not have expected you to make such a mistake, given the evidence.
If I answer ‘yes’ to this, then I am confusing the map with the territory, surely? Yes, there may very well be a possible world that’s a perfect match for a given model, but how would I tell it apart from all the near-misses?
The “real world” is a good deal more accurate than the most accurate model we have of it.
It’s not me, FWIW; I find the discussion interesting.
That said, I’m not sure what methodology you use to determine which actions to take, given your statement that the “real world” is just the most accurate model. If all you cared about was the accuracy of your model, would it not be easier to avoid taking any physical actions, and simply change your model on the fly as it suits you? This way, you could always make your model fit what you observe. Yes, you’d be grossly overfitting the data, but is that even a problem?
I didn’t say it’s all I care about. Given a choice of several models and an ability to make one of them more accurate than the rest, I would likely exercise this choice, depending on my preferences, the effort required and the odds of success, just like your garden variety realist would. As Eliezer used to emphasize, “it all adds up to normality”.
Would you do so if picking another model required less effort? I’m not sure how you can justify doing that.
I am guessing that you, TimS and nyan_sandwich all seem to think that my version of instrumentalism is incompatible with having preferences over possible worlds. I have trouble understanding where this twist is coming from.
It’s not that I think that your version of instrumentalism is incompatible with preferences, it’s more like I’m not sure I understand what the word “preferences” even means in your context. You say “possible worlds”, but, as far as I can tell, you mean something like, “possible models that predict future inputs”.
Firstly, I’m not even sure how you account for our actions affecting these inputs, especially given that you do not believe that various sets of inputs are connected to each other in any way; and without actions, preferences are not terribly relevant. Secondly, you said that a “preference” for you means something like, “a desire to make one model more accurate than the rest”, but would it not be easier to simply instantiate a model that fits the inputs? Such a model would be 100% accurate, wouldn’t it?
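(To illustrate the overfitting worry with a toy example, using an invented sequence and rule: a “model” that merely memorizes past inputs is perfectly accurate on everything already seen, yet has nothing to say about the next input, which is the kind of accuracy that matters for prediction.)

```python
past_inputs = [2, 4, 6, 8, 10]   # hypothetical sequence of observed inputs

# "Model" that just memorizes the data: perfectly accurate on everything seen so far.
lookup_table = {i: x for i, x in enumerate(past_inputs)}

# Model that compresses the pattern into a rule, and so can be extended.
def rule(i):
    return 2 * (i + 1)

# Both match the past perfectly...
assert all(lookup_table[i] == rule(i) == x for i, x in enumerate(past_inputs))

# ...but only the rule yields a prediction for the next, as-yet-unobserved input.
print(rule(5))              # 12
print(lookup_table.get(5))  # None: the memorized "model" is silent here
```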
Your having a preference for worlds without, e.g., slavery can’t possibly translate into something like “I want to change the world external to me so that it no longer contains slaves”. I have trouble understanding what it would translate to. You could adopt models where things you don’t like don’t exist, but they wouldn’t be accurate.
No, but it translates to its equivalent:
And how do you arrange that?
So you’re saying you have a preference over the map, as opposed to the territory (your experiences, in this case).
That sounds subject to some standard pitfalls, offhand, where you try to fool yourself into choosing the “no-slaves” map instead of trying to optimize, well, reality, such as the slaves—perhaps with an experience machine, through simple self-deception, or maybe some sort of exploit involving Occam’s Razor.
I agree that self-deception is a “real” possibility. Then again, it is also a possibility for a realist. Or a dualist. In fact, confusing map and territory is one of the most common pitfalls, as you well know. Would it be more likely for an instrumentalist to become instrumenta-lost? I don’t see why it would be the case. For example, from my point of view, you arbitrarily chose a comforting Christian map (is it an inverse of “some sort of exploit involving Occam’s Razor”?) instead of a cold hard uncaring one, even though you seem to be preferring realism over instrumentalism.
Ah, no, sorry, I meant that those options would satisfy your stated preferences, not that they were pitfalls on the road to it. I’m suggesting that since you don’t want to fall into those pitfalls, those aren’t actually your preferences, whether because you’ve made a mistake or I have (please tell me if I have.)
I propose a WW2 mechanical aiming computer as an example of a model. Built from the gears that can be easily and conveniently manufactured, there is very little doubt that the universe does not use anything even remotely similar to produce the movement of a projectile through the air, even if we assume that such a question is meaningful.
A case can be made that physics is not that much different from the WW2 aiming computer (built out of the mathematics that is available and can be conveniently used). And with regard to MWI, a case can be made that it is similar to removing the only ratchet in the mechanical computer and proclaiming the rest of the gears to be the reality, because somehow “from the inside” it would allegedly still feel the same, even though the mechanical computer, without this ratchet, no longer works for predicting anything.
Of course, it is not clear how close physics is to a mechanical aiming computer in terms of how the internals can correspond to the real world.
Interesting. So we prefer that some models or others be accurate, and take actions that we expect to make that happen, in our current bag of models.
Ok I think I get it. I was confused about what the referent of your preferences would be if you did not have your models referring to something. I see that you have made the accuracy of various models the referent of preferences. This seems reasonable enough.
I can see now that I’m confused about this stuff a bit more than I thought I was. Will have to think about it a bit more.
I like how you put it into some fancy language, and now it sounds almost profound.
It is entirely possible that I’m talking out of my ass here, and you will find a killer argument against this approach.
Likewise the converse. I reckon both will get killed by a proper approach.
It works fine—as long as you only care about optimizing inputs, in which case I invite you to go play in the holodeck while the rest of us optimize the real world.
If you can’t find a holodeck, I sure hope you don’t accidentally sacrifice your life to save somebody or further some noble cause. After all, you won’t be there to experience the resulting inputs, so what’s the point?
You are arguing with a strawman.
It’s not a utility function over inputs, it’s over the accuracy of models.
If I were a shminux-style rationalist, I would not choose to go to the holodeck because that does not actually make my current preferred models of the world more accurate. It makes the situation worse, actually, because in the me-in-holodeck model, I get misled and can’t affect the stuff outside the holodeck.
Just because someone frames things differently doesn’t mean they have to make the obvious mistakes and start killing babies.
For example, I could do what you just did to “maximize expected utility over possible worlds” by choosing to modify my brain to have erroneously high expected utility. It’s maximized now right? See the problem with this argument?
It all adds up to normality, which probably means we are confused and there is an even simpler underlying model of the situation.
You know, I’m actually not.
Affecting the accuracy of a specified model—a term defined as “how well it predicts future inputs”—is a subset of optimizing future inputs.
You’re still thinking like a realist. A holodeck doesn’t prevent you from observing the real world—there is no “real world”. It prevents you testing how well certain models predict experiences when you take the action “leave the holodeck”, unless of course you leave the holodeck—it’s an opportunity cost and nothing more, and a minor one at that, since information holds only instrumental value.
Pardon?
Except that I (think that I) get my utility over the world, not over my experiences. Same reason I don’t win the lottery with quantum suicide.
You know, not every belief adds up to normality—just the true ones. Imagine someone arguing you had misinterpreted happiness-maximization because “it all adds up to normality”.