You’d have to ask Eliezer, but as far as I can tell the philosophical difference between his view (realism) and mine (anti-realism/instrumentalism) is that he elevates the concept of territory into an unquestionable belief, and to me it is one of many sometimes useful models. My approach is “there is an observation that it is sometimes possible to make predictions about future observations that are not completely inaccurate”, without postulating an external, largely immutable source for those observations, called “reality” or “territory”. I am quite sure that this is not the view Eliezer would endorse. Sure, the initial impetus for the idea of an external reality is to explain the predictability of certain observations, but then it takes on a life of its own and becomes a privileged concept in the epistemology of realism.
Can you explain your idea more? If the concept of “reality” or “territory” is just one of many sometimes useful models, what are some other useful models?
If you are asking for a model that is a replacement for the idea of the territory, that is not what I meant. This would be like asking “if you don’t believe in God, what do you replace God with?” But maybe you mean something else.
In the comment I replied to, you wrote:
> You’d have to ask Eliezer, but as far as I can tell the philosophical difference between his view (realism) and mine (anti-realism/instrumentalism) is that he elevates the concept of territory into an unquestionable belief, and to me it is one of many sometimes useful models.
AFAICT, the “it” in the last clause here has to be referring to “the concept of territory”, so I’m asking: if the concept of territory is one of many sometimes useful models, what are some other useful models? I don’t see how else to interpret this sentence that would make sense, so if that’s not what you meant, can you explain what you actually meant?
Still not sure what you are asking. There are plenty of sometimes useful models, which work well within their domain of validity. “Humans sometimes behave as Bayesian reasoners” is one of those. Well, that one has a very limited domain of validity, but it is still non-empty. All of physics is filled with sometimes useful models; in fact, as far as I can tell, there is nothing else but models. But that’s a view few people here are willing to entertain.
Thanks, I think I understand your position better now. Would you say that even the concept of “future observations” is just a model, because for all you know maybe all that exists is just you with your current set of memories and observations? If so I’m curious what your views on values and decision making are. If you’re agnostic about the existence of everything except your current memories and observations and models, what things do you assign value to, and how do you figure out what actions are better than other actions?
> for all you know maybe all that exists is just you with your current set of memories and observations?
That solipsistic model is not very useful, is it? Doesn’t offer any useful predictions, so why entertain it?
> If so I’m curious what your views on values and decision making are.
I posted about my views on decision making about a year ago. That model seems quite useful to me, as it avoids the pitfalls of logical counterfactuals vs environmental counterfactuals, and a bunch of otherwise confusing dilemmas.
> If you’re agnostic about the existence of everything except your current memories and observations and models
I didn’t say I made an exception at all. I just don’t like using terms like “exist”, “real” and “true”; they can be quite misleading. If anything, I would suggest people try to taboo them and see what happens to the statements they make.
> how do you figure out what actions are better than other actions?
Like most people, I have an illusion of making decisions. That is the implication of the current best physics models. My linked post above explains how to compare possible worlds, which is the closest one can get to “making decisions” without implying magical free will separate from physical processes.
If what you are really asking is “how do you reconcile Model A, which you use in situation 1, with Model B, which you use in situation 2?”, then my reply is that every model has its own domain of validity, and when stretched beyond it, it breaks. There is nothing unusual about that: in physics, quantum mechanics and general relativity are very useful yet mutually incompatible models. You can probably name a few like that in your own area of expertise.
In your decision making post, you wrote:
> A model compatible with the known laws of physics is that what we think of as modeling, predicting and making choices is actually learning which one of the possible worlds we live in.
(I would state this somewhat differently, but let’s go with it for now for the sake of argument.)
Do you consider “which one of the possible worlds we live in” to be synonymous with “reality” or “territory”? If so, would you agree that this model is useful anytime we make decisions (i.e., there’s not really an alternative model that we can use to serve the same purpose)? If so, it seems like the concept of territory isn’t just a “sometimes useful” model but at least one of the most useful models we have, and in fact pretty much indispensable? How does this differ in practice from what Eliezer thinks? I think you were complaining that Eliezer asks whether wavefunctions are real, but couldn’t you ask a similar question, namely, does the possible world that you live in contain wavefunctions?
> Do you consider “which one of the possible worlds we live in” to be synonymous with “reality” or “territory”?
I consider the map/territory model to be useful in this case, yes. I don’t promote the idea of the territory into anything other than a useful model in this case.
> If so, would you agree that this model is useful anytime we make decisions
I wouldn’t make a sweeping statement like that, no. But it is definitely useful to consider the person making decisions as a part of the physical world, without the magical free will that the usual decision theories assume while paying lip service to the idea of reality.
> If so, it seems like the concept of territory isn’t just a “sometimes useful” model but at least one of the most useful models we have, and in fact pretty much indispensable? How does this differ in practice from what Eliezer thinks?
I don’t know what he thinks exactly, but my impression is what I described above: talking about the territory while still treating the intentional stance as anything more than an occasionally useful approximation. That “occasionally” part does not include decision theories.
> I think you were complaining that Eliezer asks whether wavefunctions are real, but couldn’t you ask a similar question, namely, does the possible world that you live in contain wavefunctions?
I don’t recall complaining about it, but wavefunctions are a mathematical abstraction, obviously. There is not a lot of use in asking whether they are really real or only seem real and whatnot. As for the “does the possible world that you live in contain wavefunctions?” question, my answer is that at the level of coarseness that corresponds to observing someone’s actions, “wavefunction” is not a useful abstraction, just as quarks are not a useful abstraction when talking about, as in Eliezer’s example, a Boeing 747. The only residue of quantum mechanics that I expect to find useful in the macroscopic world of agents is the inherent unpredictability and randomness at the level of ion channels opening and closing, which, when combined, result in the appearance of conscious decisions.
Not sure if this makes sense, but thank you very much for being patient and engaging in this discussion, and not just shrugging it off.
> That model seems quite useful to me, as it avoids the pitfalls of logical counterfactuals vs environmental counterfactuals, and a bunch of otherwise confusing dilemmas.
You are “solving” the problem by dogmatically siding with clockwork determinism against free will. That isn’t a real solution, because someone else could be just as dogmatic in the other direction, and it is also inconsistent with your anti-realism.
Just a general comment on your style: I have stopped replying to you because you tend to talk at me, telling me what’s right and what’s wrong, as if you have a monopoly on truth. This may well not be your intention, but that’s how your comments come across to me. Just thought I’d let you know. Of course, for all I know, others perceive my comments the same way, and that’s why they don’t reply to me.
Saying that some things are right and others wrong is pretty standard round here. I don’t think I’m breaking any rules. And I don’t think you avoid making plonking statements yourself.