As always, it simply depends on your utility function. If you consider avoiding short-term emotional pain as an end in itself, it would of course be in your best interest to engage in various self-deceptive strategies etc.
The users on Less Wrong may well be drastically less likely to have that sort of utility function than the general population, but that doesn’t detract from the obvious fact that a utility function can include an ultimate aversion to short-term emotional pain, and there are an awful lot of people like that.
So can people stand what’s true because they’re already enduring it? Wait, already enduring what? For somebody like the one I described above (one whose goal set contains an ultimate aversion to short-term emotional pain), the emotional pain itself is something to endure.
In other words, avoiding thinking about fact A doesn’t let you escape enduring A itself (A will be present whether or not you think about it), but there is something that not thinking about it does accomplish: it lets you avoid enduring the emotional pain, which may well be extremely important to your utility function.
Tetronian said that the Litany basically says it’s silly to refuse to update your map because you’re afraid of what you may find, for what’s in the territory is already there whether or not you know it. Sure, but for some people the emotions themselves are part of the territory. It’s not that they’re afraid to update their map; it’s that they’re afraid to change one section of the territory (their belief structure) because it may make another section undesirable (their emotional landscape or whatever).
The map/territory distinction has proven useful in a lot of ways, but in this conversation it can only distract, for it has a utility function built right into its core: the same one the Litany presupposes. It breaks down when it encounters a utility function that values what’s in one’s head not only as an indicator of what’s outside, but also simply for its own sake.
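A minimal toy sketch of the kind of utility function being described, in Python; the function and the weights below are purely illustrative assumptions, not anyone’s actual values. It just shows that once the terminal penalty on emotional pain is large enough, declining to look at an unpleasant fact comes out ahead by the agent’s own lights.

# Toy sketch: made-up weights, purely illustrative.
def utility(accurate_beliefs, emotional_pain, pain_weight):
    # Utility = value placed on accurate beliefs minus a terminal penalty on emotional pain.
    accuracy_value = 1.0 if accurate_beliefs else 0.0
    return accuracy_value - pain_weight * emotional_pain

# Looking at unpleasant fact A yields accurate beliefs but hurts; avoiding it does neither.
look = dict(accurate_beliefs=True, emotional_pain=1.0)
avoid = dict(accurate_beliefs=False, emotional_pain=0.0)

for pain_weight in (0.2, 5.0):  # mostly-truth-valuing agent vs. strongly pain-averse agent
    choice = "look" if utility(**look, pain_weight=pain_weight) > utility(**avoid, pain_weight=pain_weight) else "avoid"
    print(f"pain_weight={pain_weight}: prefers to {choice}")
# pain_weight=0.2: prefers to look; pain_weight=5.0: prefers to avoid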
Please don’t use “utility function” in this context. What you believe you want is different from what you actually want, or what you should want, or what you would like if it happened, or what you should want to happen irrespective of your own experience (and none of these is a utility function in the technical sense), so conflating all these senses into a single rhetorical pseudomath buzzword is bad mental hygiene.
I’m completely baffled by your reply. I have no idea what the “technical sense” of the term “utility function” is, but I thought I was using it the normal, LW way: to refer to an agent’s terminal values.
What term should I use instead? I was under the impression that “utility function” was pretty safe, but apparently it carries some pretty heavy baggage. I’ll gladly switch to using whatever term would prevent this sort of reply in the future. Just let me know.
Or perhaps I simply repeated “utility function” way too many times in that response? I probably should have switched it up a lot more and alternated it with “terminal values”, “goal set”, etc. Using it like 6 times in such a short comment may have been careless and brought it undue attention and scrutiny.
Or… is there something you disagree with in my assessment? I understand that it’s controversial to state that people even have coherent utility functions, or even have terminal values, or whatever, so perhaps my comment takes something for granted that shouldn’t be?
Two more things:
Can you explain how exactly I conflated all those senses into that single word? I thought I used the term to refer to the same exact thing over and over, and I haven’t heard anything to convince me otherwise.
And what exactly does it mean for it to be a “rhetorical pseudomath buzzword”? That sounds like an eloquent attack, but I honestly can’t interpret it at any higher level of detail than that you’re simply reacting to my usage in a disapproving way.
Anyway, do you disagree that somebody could, from one moment to the next, have a terminal value (or whatever) for avoiding emotional pain at all costs? Or is that wrong or incoherent? Or what?
Your usage was fine. Some people will try to go all ‘deep’ on you and challenge even the use of the term “terminal values” because “humans aren’t that simple etc”. But that is their baggage, not yours, and can be safely ignored.
I probably blatantly reveal my ignorance by asking this, but do only agents who know what they want have a utility-function? An AGI undergoing recursive self-improvement can’t possibly know exactly what it is going to “want” later on (some (sub)goals may turn out to be impossible, while world states previously believed to be impossible might turn out to be possible), yet what it will want is implied by its given utility-function and the “nature of reality” (environmental circumstances).
You believe that what you want is actually different from what you want. You appear to know that what you believe you want is different from what you actually want. Proof by contradiction that what you believe you want is what you actually want?
Your utility-function seems to assign high utility to world states where it is optimized according to new information. In other words, you believe that your utility-function should be undergoing recursive self-improvement.
I think Nesov’s saying that you have a utility function, but you don’t explicitly know it to the degree that you can make statements about its content. Or at least, it would be more accurate to use the best colloquial term, and leave the term of art “utility function” to its technical meaning.
Also, your penultimate paragraph sounds confused, while the paragraph it’s responding to is confusing but coherent. Nesov’s explicitly listing a variety of related but different categories that “utility function” gets misinterpreted into. He doesn’t claim to believe that what he wants is different from what he wants.
Nope—in theory, all agents have a utility-function—though it might not necessarily be the neatest way of expressing what they value.
Well put.
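For reference, the “in theory” claim above is usually backed by the von Neumann–Morgenstern representation theorem: an agent whose preferences over lotteries satisfy the completeness, transitivity, continuity, and independence axioms can be represented by some utility function U, unique up to positive affine transformation, with

\[
A \succeq B \iff \mathbb{E}[U(A)] \ge \mathbb{E}[U(B)].
\]

Preferences that violate those axioms (as human preferences often do) need not admit any such U, which is one reason the term gets contested here.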
Still leaves the question: Change the Litany (if so, how)? Or just don’t use it in this particular context?
I suppose I should probably reveal a little bit of the context: There will be a Litany following a spoken presentation of Beyond the Reach of God. That Litany can either be the Litany of Gendlin, or the Litany of Tarski with a phrasing similar to:
If the world will be destroyed during my lifetime,
I desire to believe that the world will be destroyed during my lifetime.
If the world will not be destroyed during my lifetime,
I desire to not believe that the world will be destroyed during my lifetime.
Let me not become attached to beliefs I may not want.
(Litany of Tarski’s already getting used in multiple other places during the night, so there’s no advantage to using it purely for the sake of using it. I believe there is [slight] advantage to using Gendlin at least once to create a sense of completeness.)
I think the point of such litanies is to help restructure the listener’s emotional attachments in a more productive and safe-feeling way. You are exhorting them to adopt an instrumental meta-preference for truth-conducive object-preferences, using heroic virtue as the emotional cover for the desired modification of meta-preferences.
In this light, the litany exists to be deployed precisely when it is a false statement about the actual psychological state of a person (because they may in fact be attached to their beliefs), but in saying it you hope that it becomes a more accurate description of that person. It implicitly relies on a “fake it till you make it” strategy, which is probably a useful personal-growth and coping strategy in many real human situations, but is certainly not universally useful given the plausibility of various pathological circumstances.
A useful thread for the general issue of “self soothing” might be I’m Scared.
The litany is probably best understood as something to use in cases where the person saying it believes that (1) it is kind of psychologically false just now (because someone hearing it really would feel bad if their nose was rubbed in a particular truth), but where (2) truth-seeking “meta-preference modification” is feasible and would be helpful at that time. The saying of it in particular circumstances could thus be construed as a particular claim that the circumstances merit this approach.
Perhaps it might be helpful to adjust the wording for precisely such circumstances? Perhaps change it to focus on the first person (I or we, as the case may be), the future, and an internal locus of control, and add a few hooks for later cognitive behavioral therapy exercises, plus a non-judgmental but negative framing of the alternative. Maybe something like:
Let me not become attached to beliefs that are not true.
What is true is already so, whether or not I acknowledge it.
And because it’s true, it is what is there to be interacted with.
If I’m flinching, then I am already influenced by fearful suspicions.
But what is true is probably better than the worst I can imagine.
I should be able to face what is true, for I am already enduring it.
Relaxed, active, and thoughtful attention is usually helpful.
Let me not multiply my woes through poverty of knowledge.

This may not be the literal Litany of Gendlin, but it retains some of the words, the cadences, and most of the basic message, minus the explicit typical mind fallacy of the original :-P
For the purposes of the event I’m planning, I went with something close to the original Litany, but did switch to first person.
What is true is already so.
Not owning up to it only makes it worse.
Not being open about it doesn’t make it go away.
And because it’s true, it is what is there to be interacted with.
Anything untrue isn’t there to be lived.
I can face what’s true,
for I am already enduring it.
Yes, this assessment is spot on. I’ll take a day or so to mull it over before deciding how to incorporate it. But I like your example.
Absolutely excellent assessment. Thank you.
Your objection to the Litany of Gendlin applies equally to the Litany of Tarski. Both tell you to desire the truth above your attachment to beliefs.
I don’t feel that Tarski says anything untrue the way that Gendlin does. It doesn’t say that believing in the unfair world won’t hurt, or that you’re already enduring the knowledge. It just says that, all things considered, it is more important to believe the truth than to cling to the comforting falsehood. Which I fully endorse.
JenniferRM answered much better than I could have.