It’s a well-known tragedy that (unless humanity gains a perspective on reality far surpassing my wildest expectations) there are arbitrarily many nontrivially distinct theories consistent with any finite set of observations.
The practical consequence of this (a small leap, but valid) is that we can remove any idea you have and make exactly the same predictions about sensory experiences by reformulating our model. Yes, any idea. Models are not even slightly unique- the idea of anything “really existing” is “unnecessary”, but literally every belief is “unnecessary”. I’d expect some beliefs would, for the practical purposes of present-day Earth human brains, be impossible to replace, but I digress.
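To make that leap concrete, here’s a toy numerical sketch (my own made-up example with arbitrary numbers): fit a finite set of observations exactly, then add any multiple of a polynomial that vanishes at every observed point. Every member of the resulting family agrees with every observation while disagreeing arbitrarily everywhere else.

```python
# Toy sketch (hypothetical numbers): arbitrarily many distinct models,
# all agreeing exactly with the same finite set of observations.
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])    # where we observed
ys = np.array([0.3, -1.2, 0.8, 2.0, -0.5])  # what we observed (made-up values)

base = np.poly1d(np.polyfit(xs, ys, deg=len(xs) - 1))  # fits the data exactly
bump = np.poly1d(np.poly(xs))                          # zero at every observed point

for k in range(5):                        # k indexes a family of distinct "theories"
    theory = base + k * bump
    assert np.allclose(theory(xs), ys)    # every theory matches every observation
    print(k, theory(2.5))                 # yet they disagree in between the data
```

Swap in whatever model class you like; the point is only that agreement on finitely many observations never pins down a unique model.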
(Joke: what’s the first step of more accurately predicting your experiences? Simplifying your experiences! Ahaha!)
You cannot “know” anything, because you’re experiencing exactly what you could be experiencing if you were wrong. You can’t “know” that you’re either wrong or right, or neither; you can’t “know” that you can’t “know” anything; etc. etc. etc.
There are infinitely many different ontologies which support every single piece of information you have ever experienced or ever will experience.
In fact, no experience indicates anything- we can build a theory of everything which explains any experience but undermines any inference made using it, and we can do this in one-to-one correspondence with theories that support that inference.
In fact, there’s no way to draw the inference that you’re experiencing anything. We can build infinitely many models (or, given the limits on how much matter you can store in a Hubble volume, an arbitrarily large but finite number of models) in which the whole concept of “experience” is explained away as delusion...
And so on!
The main point of making beliefs pay rent is having a more computationally efficient model- doing things more effectively. Is your reformulation more effective than the naïve model? No.
Your model, and this whole line of thought, is not paying rent.
>We can build infinitely many models (or, given the limits on how much matter you can store in a Hubble volume, an arbitrarily large but finite number of models) in which the whole concept of “experience” is explained away as delusion
This is false. I actually have no idea what it would mean for an experience to be a delusion—I don’t think that’s even a meaningful statement.
I’m comfortable with the Cartesian argument that allows me to know that I am experiencing things.
>Your model, and this whole line of thought, is not paying rent.
On the contrary, it’s the naive realist model that doesn’t pay rent by not making any predictions at all different from my simpler model.
I don’t really care if one includes realist claims in their model. It’s basically inert. It just makes the model more complicated for no gain.
>This is false. I actually have no idea what it would mean for an experience to be a delusion—I don’t think that’s even a meaningful statement.
>I’m comfortable with the Cartesian argument that allows me to know that I am experiencing things.
Everything you’re thinking is compatible with a situation in which you’re actually in a simulation hosted in some entirely alien reality (2 + 2 = 3, experience is meaningless, causes come after their effects, (True ^ True) = False, etc.), which is being manipulated in extremely contrived ways that produce your exact current thought processes.
There are exhaustingly many different riffs on this idea- maybe you’re in an asylum and all of your thinking, including “I actually have no idea what it would mean for an experience to be a delusion”, is due to some major mental disorder. Oh, how obvious- my idea of experience was a crazy delusion all along. I can’t believe I said that it was my daughter’s arm. “I think therefore I am”? Absurd!
If you have an argument against this problem, I am especially interested in hearing it- it seems like the fact that you can’t distinguish between this situation and reality (and you can’t know whether this situation is impossible as a result, etc.) is part of the construction of the scenario. You’d need to show that the whole idea that “We can construct situations in which you’re having exactly the same thoughts as you are right now, but with some arbitrary change (which you don’t even need to believe is theoretically possible or coherent) in the background” is invalid.
Do I think this is a practical concern? Of course not. The Cartesian argument isn’t sufficient to convince me, though- I’m just assuming that I really exist and things are broadly as they seem. I don’t think it’s that plausible to expect that I would be able to derive these assumptions without using them- there is no epistemological rock bottom.
>On the contrary, it’s the naive realist model that doesn’t pay rent by not making any predictions at all different from my simpler model.
Your model is (I allege) not actually simpler. It just seems simpler because you “removed something” from it. A mind could be much “simpler” than ours, but also less useful- and usefulness is the actual point of having a simpler model. The “simplest” model which accurately predicts everything we see is going to be a fundamental physical theory, but making accurate predictions about complicated macroscopic behavior entirely from first principles is not tractable with eight billion human brains’ worth of hardware.
The real question of importance is, does operating on a framework which takes specific regular notice of the idea that naïve realism is technically a floating belief increase your productivity in the real world? I can’t see why that would be the case- it requires occasionally spending my scarce brainpower on reformatting my basic experience of the world in more complicated terms; I have to think about whether or not I should argue with someone whenever they bring up the idea of naïve realism; etc. You claim adopting the “simpler” model doesn’t change your predictions, so I don’t see what justifies these costs. Are there some major hidden costs of naïve realism that I’m not aware of? Am I actually wasting more unconscious brainpower working with the idea of “reality” and things “really existing”?
If I have to choose between two models which make the exact same predictions (i.e. my current model and your model), I’m going to choose the model which is better at achieving my goals. In practice, this is the more computationally efficient model, which (I allege) is my current model.
>Everything you’re thinking is compatible with a situation in which you’re actually in a simulation hosted in some entirely alien reality (2 + 2 = 3, experience is meaningless, causes come after their effects, (True ^ True) = False, etc.), which is being manipulated in extremely contrived ways that produce your exact current thought processes.
I disagree, and see no reason to agree. You have not fully specified this situation, and have offered no argument for why this situation is coherent. Given that this is obviously self-contradictory (at least the part about logic), why should I accept it?
>If you have an argument against this problem, I am especially interested in hearing it
The problem is that you’re assuming that verificationism is false in arguing against it, which is impermissible. E.g. “maybe you’re in an asylum” assumes that it’s possible for an asylum to “exist” and for someone to be in it, both of which are meaningless under my worldview.
Same for any other way to cash out “it’s all a delusion”—you need to stipulate unverifiable entities in order to even define delusion.
Now, this is distinct from the question of whether I should have 100% credence in claims such as 2+2=4 or “I am currently having an experience”. I can have uncertainty as to such claims without allowing for them to be meaningfully false. I’m not 100% certain that verificationism is valid.
>It seems like the fact that you can’t distinguish between this situation and reality
What do you mean by “reality”? You keep using words that are meaningless under my worldview without bothering to define them.
>The real question of importance is, does operating on a framework which takes specific regular notice of the idea that naïve realism is technically a floating belief increase your productivity in the real world?
This isn’t relevant to the truth of verificationism, though. My argument against realism is that it’s not even coherent. If it makes your model prettier, go ahead and use it. You’ll just run into trouble if you try doing e.g. quantum physics and insist on realism—you’ll do things like assert there must be loopholes in Bell’s theorem, and search for them and never find them.
>E.g. “maybe you’re in an asylum” assumes that it’s possible for an asylum to “exist” and for someone to be in it, both of which are meaningless under my worldview.
>What do you mean by “reality”? You keep using words that are meaningless under my worldview without bothering to define them.
You’re building a feature into your model which doesn’t change what it predicts but makes it less computationally efficient.
The fact you’re saying “both of which are meaningless under my worldview” is damning evidence that your model (or at least your current implementation of your model) sucks, because that message transmits useful information to someone using my model but apparently has no meaning in your model. Ipso facto, my model is better. There’s no coherent excuse for this.
>This isn’t relevant to the truth of verificationism, though. My argument against realism is that it’s not even coherent. If it makes your model prettier, go ahead and use it.
What does it mean for your model to be “true”? There are infinitely many unique models which will predict all evidence you will ever receive- I established this earlier and you never responded.
It’s not about making my model “prettier”- my model is literally better at bringing about the outcomes that I want. This is the correct dimension on which to evaluate your model.
>You’ll just run into trouble if you try doing e.g. quantum physics and insist on realism—you’ll do things like assert there must be loopholes in Bell’s theorem, and search for them and never find them.
My preferred interpretation of quantum physics (many worlds) was formulated before Bell’s theorem, and it turns out that Bell’s theorem is actually strong evidence in favor of many worlds. Bell’s theorem does not “disprove realism”; it just rules out local hidden-variable theories. My interpretation already predicted that.
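For concreteness, here’s a quick numerical check of that last point (a toy sketch; the measurement settings are just the standard textbook CHSH angles, nothing specific to this thread). Any local hidden-variable theory must satisfy the CHSH bound |S| ≤ 2, while the quantum singlet-state correlation E(a, b) = -cos(a - b) reaches 2√2:

```python
# Toy check (standard textbook angles): local hidden variables obey |S| <= 2;
# the quantum singlet state reaches 2*sqrt(2).
import math

def E(a, b):
    """Singlet-state correlation for spin measurements along angles a and b."""
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2              # Alice's two measurement settings
b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two measurement settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S), "vs classical bound", 2.0)  # ~2.828 > 2: no local hidden-variable fit
```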
I suspect this isn’t going anywhere, so I’m abdicating.
>The fact you’re saying “both of which are meaningless under my worldview” is damning evidence that your model (or at least your current implementation of your model) sucks, because that message transmits useful information to someone using my model but apparently has no meaning in your model.
I don’t think that message conveys useful information in the context of this argument, to anyone. I can model regular delusions just fine—what I can’t model is a delusion that gives one an appearance of having experiences while no experiences were in fact had. Saying “delusion” doesn’t clear up what you mean.
Saying “(True ^ True) = False” also doesn’t convey information. I don’t know what is meant by a world in which that holds, and I don’t think you know either. Being able to say the words doesn’t make it coherent.
You went to some severe edge cases here—not just simulation, but simulation that also somehow affects logical truths or creates a false appearance of experience. Those don’t seem like powers even an omnipotent being would possess, so I’m skeptical that those are meaningful, even if I was wrong about verificationism in general.
For more ordinary delusions or simulations, I can interpret that language in terms of expected experiences.
>What does it mean for your model to be “true”?
Nothing, and this is precisely my point. Verificationism is a criterion of meaning, not part of my model. The meaning of “verificationism is true” is just that all statements that verificationism says are incoherent are in fact incoherent.
>There are infinitely many unique models which will predict all evidence you will ever receive- I established this earlier and you never responded.
I didn’t respond because I agree. All models are wrong, some models are useful. Use Solomonoff induction to weight the various models when predicting the future, without asserting that any of those models is “reality”. Solomonoff induction doesn’t even have a way to mark a model as “real”; that’s just completely out of scope.
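Here’s a minimal toy sketch of the kind of weighting I mean (a severe simplification: real Solomonoff induction weights all programs by 2^-length and is uncomputable; the handful of hand-written predictors and their complexity scores below are made up purely for illustration):

```python
# A crude stand-in for Solomonoff-style weighting (hypothetical toy models and
# complexity scores). Models only ever earn prediction weight; none is marked "real".
from typing import Callable, List, Tuple

Model = Tuple[str, int, Callable[[List[int]], int]]  # (name, complexity, next-bit predictor)

models: List[Model] = [
    ("always-0",    1, lambda h: 0),
    ("always-1",    1, lambda h: 1),
    ("repeat-last", 2, lambda h: h[-1] if h else 0),
    ("alternate",   3, lambda h: 1 - h[-1] if h else 0),
]

def predict(history: List[int]) -> float:
    """Weight each model by 2**(-complexity), drop models that mispredicted any
    prefix of the history, and return the weighted probability that the next bit is 1."""
    weights, votes = [], []
    for _, complexity, f in models:
        if all(f(history[:i]) == history[i] for i in range(len(history))):
            weights.append(2.0 ** -complexity)
            votes.append(f(history))
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, votes)) / total if total else 0.5

print(predict([0, 1, 0, 1]))  # only "alternate" survives; it predicts 0, so this prints 0.0
```

The point is structural: candidate models only ever accumulate prediction weight, and nothing in the procedure ever marks one of them as “real”.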
I’m not particularly convinced by your claim that one should believe in untrue or incoherent things if it helps them be more productive, and I’m not interested in debating that. If you have a counter-argument to anything I’ve said, or a reason to think ontological statements are coherent, I’m interested in that. But a mere assertion that talking about these incoherent things boosts productivity isn’t interesting to me now.
You can’t define things as (un)necessary without knowing what you value or what goal you are trying to achieve. Assuming that only prediction is valuable is pretty question begging.
This is only true for trivial values, e.g. “I terminally value having this specific world model”.
For most utility schemes (including, critically, those of humans), the vast majority of the purpose of models and beliefs is instrumental: making better predictions, using less computing power, and so on.
In fact, humans who do not recognize this fact and stick to beliefs or models because they like them are profoundly irrational. If the sky is blue, I wish to believe the sky is blue, and so on. So, assuming that only prediction is valuable is not question begging- I suspect you already agreed with this and just didn’t realize it.
In the sense that beliefs (and the models they’re part of) are instrumental goals, any specific belief is “unnecessary”. Note the quotation marks around “unnecessary” in this comment and the comment you’re replying to. By “unnecessary” I mean that the choice of which beliefs and which model to use depends on which is more instrumentally valuable- in practice, a complex tradeoff between predictive accuracy and computational demands.
It’s also true for “I terminally value understanding the world, whatever the correct model is”.
I said e.g., not i.e., and “I terminally value understanding the world, whatever the correct model is” is also a case of trivial values.
First, a disclaimer: it’s unclear how well the idea of terminal/instrumental values maps onto human values. Humans seem pretty prone to value drift- whenever we decide we like some idea and implement it, we’re not exactly “discovering” some new strategy and then instrumentally implementing it; we’re incorporating the new strategy directly into our value network. It’s possible (or even probable) that our instrumental values “sneak in” to our value network and are basically terminal values with (usually) lower weights.
Now, what would we expect to see if “Understanding the world, whatever the correct model is” were a broadly shared terminal value in humans, in the same way as the other prime suspects for terminal values (survival instinct, caring for friends and family, etc.)? I would expect that:
- It’s exhibited in the vast majority of humans, with some medium correlation between intelligence and the level to which this value is exhibited. (Strongly exhibiting this value tends to cause greater effectiveness, i.e. intelligence, but most people already strongly exhibit this value.)
- Companies have jumped on this opportunity like a pack of wolves and designed thousands of cheap wooden signs with phrases like “Family, love, ‘Understanding the world, whatever the correct model is’”.
- Movements which oppose this value are somewhat fringe and widely condemned.
- Most people who espouse this value are not exactly sure where it’s from, in the same way they’re not exactly sure where their survival instinct or their love for their family came from.
But what do we see in the real world?
- Exhibiting this value is highly correlated with intelligence. Almost everyone lightly exhibits this value, because its practical applications are pretty obvious (pretending your mate isn’t cheating on you is just plainly a stupid strategy), but it’s only strongly and knowingly exhibited among really smart people interested in improving their instrumental capabilities.
- Movements which oppose this value are common.
- Most people who espouse this value got it from an intellectual tradition, some wise counseling, etc.
I never claimed it was a broadly shared terminal value.
My argument is that you can’t make a one-size-fits-all recommendation of realism or anti-realism, because individual values vary.
Refer to my disclaimer for the validity of the idea of humans having terminal values. In the context of human values, I think of “terminal values” as the ones directly formed by evolution and hardwired into our brains, and thus broadly shared. The apparent exceptions are rarish and highly associated with childhood neglect and brain damage.
“Broadly shared” is not a significant additional constraint on what I mean by “terminal value”, it’s a passing acknowledgement of the rare counterexamples.
If that’s your argument then we somewhat agree. I’m saying that the model you should use is the model that most efficiently pursues your goals, and (in response to your comment) that utility schemes which terminally value having specific models (and thus whose goals are most efficiently pursued through using said arbitrary terminally valued model and not a more computationally efficient model) are not evidently present among humans in great enough supply for us to expect that that caveat applies to anyone who will read any of these comments.
Real-world examples of people who appear at first glance to value having specific models (e.g. religious people) are pretty sketchy- if this is to be believed, you can change someone’s terminal values with the argumentative equivalent of a single rusty musket ball and a rubber band. That defies the sort of behavior we’d want to see from whatever we’re defining as a “terminal value”, keeping in mind the inconsistencies between the way human value systems are structured and the way the value systems of hypothetical artificial intelligences are structured.
The argumentative strategy required to convince someone to ignore instrumentally unimportant details about the truth of reality looks more like “have a normal conversation with them” than “display a series of colorful flashes as a precursor to the biological equivalent of arbitrary code execution”, or otherwise psychologically breaking them badly enough to get them to do basically anything- which is what would be required to seriously damage what I’m talking about when I say “terminal values” in the context of humans.
>Refer to my disclaimer for the validity of the idea of humans having terminal values. In the context of human values, I think of “terminal values” as the ones directly formed by evolution and hardwired into our brains, and thus broadly shared. The apparent exceptions are rarish and highly associated with childhood neglect and brain damage.
The existence of places like LessWrong, philosophy departments, etc., indicates that people do have some sort of goal to understand things in general, aside from any nitpicking about what is a true terminal value.
>If that’s your argument then we somewhat agree. I’m saying that the model you should use is the model that most efficiently pursues your goals,
Well, if my goal is the truth, I am going to want the model that corresponds the best, not the model that predicts most efficiently.
>and (in response to your comment) that utility schemes which terminally value having specific models
I’ve already stated that I am not talking about confirming specific models.
>The existence of places like LessWrong, philosophy departments, etc., indicates that people do have some sort of goal to understand things in general, aside from any nitpicking about what is a true terminal value.
I agree- lots of people (including me, of course) are learning because they want to- not as part of some instrumental plan to achieve their other goals. I think this is significant evidence that we do terminally value learning. However, the way that I personally have the most fun learning is not the way that is best for cultivating a perfect understanding of reality (nor developing the model which is most instrumentally efficient, for that matter). This indicates that I don’t necessarily want to learn so that I can have the mental model that most accurately describes reality- I have fun learning for complicated reasons which I don’t expect align with any short guiding principle.
Also, at least for now, basically all of the expected value I get from learning comes from my expectation of being able to leverage that knowledge. I have a lot more fun learning about e.g. history than about the things I actually spend my time on, but historical knowledge isn’t nearly as useful, so I’m not spending my time on it.
In retrospect, I should’ve said something more along the lines of “We value understanding in and of itself, but (at least for me, and at least for now) most of the value in our understanding is from its practical role in the advancement of our other goals.”
>I’ve already stated that I am not talking about confirming specific models.
There’s been a mix-up here- my meaning for “specific” also includes “whichever model corresponds to reality the best”.
Looks like an issue of utility vs truth to me. Time to get deontological :) (joke)