if a person wanted to satisfy another person’s preferences (or to go against them), then it would be very important.
Practically speaking, I don’t think it ((ETA for clarity) doing the thing we are talking about, knowing others’ preferences) is important to achieve the sort of goals humans generally want to achieve. If I’m trading you ice cream for flour, what we really need to nail down is that the ice cream has been kept in the freezer and not left out in the sun, that the flour is from wheat and isn’t dirt or cocaine, that it isn’t soaked in water, etc. Then, we can negotiate a trade without knowing each other’s preferences.
In contrast, if we only know each other’s preferences, we won’t get very far. I will use the word “rectangle” (which in my language would refer to what you call “ice cream”) and offer you melted ice cream, etc.
There are a lot of facts more important than understanding the other’s opinion.
Are you saying that...you think the statement you made is itself some fact about the world?
Not logically so—there are possible minds whose only desire is to know the other person’s opinions. I meant it as an assertion of what’s generally true in human interactions. Knowing the other person’s preferences is far less often necessary than knowing other facts, and it’s never sufficient in any realistic human scenario I can think of. So as I intended it, “less important” applies in a stronger sense than “I disapprove”: compared to the other type of knowledge, knowledge of the other’s preferences is less often necessary and less often sufficient.
Practically speaking, I don’t think it is important to achieve the sort of goals humans generally want to achieve.
Okay. You are telling me something about your preferences then.
If I’m trading you ice cream for flour, what we really need to nail down...
And why is that? Why are those facts more important than, say, that the ice cream is bubblegum-flavored or blue-colored or sweetened with aspartame or made from coconut milk? Knowing the temperature of the ice cream or the composition of the flour is important only in the sense that there can be human preferences in this direction.
Then, we can negotiate a trade without knowing each other’s preferences.
Your example is not about people negotiating without knowing each other’s preferences. Your example is about people negotiating with a few assumptions about the other person’s preferences. Here is an example of people negotiating without knowing the other person’s preferences:
Person A: Would you like some flour?
Person B: No. Would you like ice cream?
Person A: No. I have some fruit fly eggs here...
Person B: Not interested. Would you like a computer?
Person A: Why, yes. What do you have here? Never mind—I won’t buy anything over ten years old.
In contrast, if we only know each other’s preferences, we won’t get very far.
True. If we only know the other person’s preferences but not any relevant facts for achieving them, we cannot expect a mutually satisfying interaction. However, if we know the relevant facts for achieving various preferences, but not which of those preferences the other person has, the same is true.
there are possible minds whose only desire is to know the other person’s opinions.
True, but not what I’m discussing. I am discussing how to satisfy both people’s preferences in an interaction between two people.
I meant it as an assertion of what’s generally true in human interactions.
Since you state this is not a logical assertion but generally true, I assume you mean to say that it is true in the world we live in but would not have to be true in all possible worlds. However, what I am saying is that this statement does not have a truth value in any logically possible world, since it does not specify the preference the importance relates to. Using the word ‘important’ in this way is like leaving the ‘if’ condition of an ‘if’-‘then’ statement unspecified while still asserting the conditional as a whole. The ‘then’ condition has a truth value by itself, but the ‘if’-‘then’ statement can only be evaluated if both conditions can be evaluated.
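To put the same point as a toy sketch in code (the function, the lookup table, and the example facts are all made up purely for illustration, not anything from our exchange): ‘important’ behaves like a relation that needs the goal supplied before it can evaluate to true or false.

```python
# Toy sketch: "important" treated as a two-place relation.
# Until the goal is supplied, there is nothing to evaluate.

def is_important(fact, goal, usefulness):
    """Return True if `fact` matters for achieving `goal`.

    `usefulness` is a made-up lookup table mapping (fact, goal)
    pairs to how much the fact helps with that goal.
    """
    return usefulness.get((fact, goal), 0) > 0

usefulness = {
    ("the ice cream was kept frozen", "enjoy eating the ice cream"): 1,
    ("the ice cream is blue-colored", "enjoy eating the ice cream"): 0,
}

# With the goal specified, the claim has a truth value:
print(is_important("the ice cream was kept frozen",
                   "enjoy eating the ice cream", usefulness))  # True

# Without a goal there is nothing to evaluate -- the analogue of the
# bare sentence "this fact is important":
# is_important("the ice cream was kept frozen")  # TypeError: missing arguments
```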
So as I intended it, “less important” applies in a stronger sense than “I disapprove”: compared to the other type of knowledge, knowledge of the other’s preferences is less often necessary and less often sufficient.
And I disagree that it can. Less important to achieve what objective? The only way a statement of importance has meaning is to relate it to the goal it is meant to achieve. That goal is a preference.
You have been trying to argue that facts are important but that knowing another person’s preferences is not very important. But important for what purpose? One possibility is that you mean that knowing other facts is more important for the goal of achieving that person’s preferences than knowing that person’s preferences. Another is that you mean that knowing facts is more important for achieving your preferences than knowing what the other person’s preferences are (since you state you don’t consider goals humans generally want to achieve as important, it seems reasonable to assume this is also a possibility). In order to say whether your statement is true, I need to know the specific preferences involved. As you have stated it here, it has no truth value.
My position is that knowing a person’s preferences and the facts about how to achieve those preferences are both necessary, but by themselves insufficient, to achieve those preferences. I do not know which I find more tragic, the person who knows the goal but not the path to get there, or the person who knows perfectly all the paths, but not which one to take.
Practically speaking, I don’t think it is important to achieve the sort of goals humans generally want to achieve.
You are telling me something about your preferences then.
Should be read as “Practically speaking, I don’t think it (doing the thing we are talking about, knowing others’ preferences) is important to achieve the sort of goals humans generally want to achieve.”
English permitted me to drop that clause and end up with the same wording as a phrase that conveys the exact opposite of my point. Sorry. I can imagine your confusion at reading it that way and then seeing me follow it with an example that illustrates the opposite of how you read it.
But no, I am not saying anything about my preferences; I am describing a relationship between what people want and the world. The relationship is that, in general, knowing about preferences doesn’t help people achieve their goals, but knowing about states of the world does.
Knowing the temperature of the ice cream or the composition of the flour is important only in the sense that there can be human preferences in this direction.
But I don’t need to know them if you do and we share knowledge about states of the world.
Your example is about people negotiating with a few assumptions about the other person’s preferences.
A very, very hazy idea of others’ preferences is sufficient, so improved knowledge beyond that isn’t too useful. Alternatively, with no idea of them, we can still trade by saying what we want and giving a preference ranking rather than trying to guess what the other wants.
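Here is a toy sketch of the sort of trade I have in mind (the items and the swap rule are invented purely for illustration): each side just declares a ranking, and the trade goes through whenever each prefers what the other holds, with no guessing about preferences beforehand.

```python
# Toy sketch: two traders declare preference rankings over items
# (most preferred first) instead of guessing what the other wants.
# A swap happens only if each side prefers the item the other holds.

def mutually_beneficial_swap(ranking_a, holds_a, ranking_b, holds_b):
    """Return True if A and B each rank the other's item above their own."""
    a_prefers_swap = ranking_a.index(holds_b) < ranking_a.index(holds_a)
    b_prefers_swap = ranking_b.index(holds_a) < ranking_b.index(holds_b)
    return a_prefers_swap and b_prefers_swap

ranking_a = ["flour", "ice cream", "fruit fly eggs"]  # A declares: flour first
ranking_b = ["ice cream", "fruit fly eggs", "flour"]  # B declares: ice cream first

print(mutually_beneficial_swap(ranking_a, "ice cream", ranking_b, "flour"))  # True: swap
print(mutually_beneficial_swap(ranking_a, "flour", ranking_b, "ice cream"))  # False: no swap
```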
Since you state (“There are a lot of facts more important than understanding the other’s opinion”) is not a logical assertion but generally true, I assume you mean to say that it is true in the world we live in but would not have to be true in all possible worlds.
I did not mean that it is always true in this universe but not in other universes. Instead, I meant that it is almost always true in this universe. If you are in a situation in this world, such as a financial one or one in which you disagree over a joint action to take, it will almost always be better to get a unit of relevant information about consequences of actions than a unit of relevant information about the other person’s preferences, particularly if you can communicate half-decently or better. Also, random genies or whatever, with random amounts of information about each other and the world, will each usually be better able to achieve their goals by knowing more about the world.
This depends heavily on an intuitive comparison of what “random relevant” information of a certain quantity looks like. That might not be intelligible; more likely, a formal treatment of “relevant” would clash with intuition too much to settle this decisively as true or false, but it wouldn’t fail to have a truth value.
I do not know which I find more tragic, the person who knows the goal but not the path to get there, or the person who knows perfectly all the paths, but not which one to take.
We’re discussing the goals of other people. Each type might be equally tragic, but if you had the opportunity to give a random actual person (or random hypothetical being) more knowledge about their goal or knowledge about the world, pick the world and it’s not a close decision!
My view on this discussion is that I have been saying “pick the world” in such a case, and not only do I not know which you would say to pick, but you are saying “pick the world” isn’t truth-apt (when it fulfills my desires to fulfill others’ desires, and those desires are best fulfilled by their getting the one type of knowledge and not the other, and that second “best” is according to their desires).
Practically speaking, I don’t think it is important to achieve the sort of goals humans generally want to achieve.
Should be read as “Practically speaking, I don’t think it (doing the thing we are talking about, knowing others’ preferences) is important to achieve the sort of goals humans generally want to achieve.”
Upvoted for clarifying this point. This changes my interpretation of this sentence considerably, so perhaps I can now address your intended meaning. This statement does have a truth value (which I believe to be false). I disagree that knowing another human’s preferences is not important to achieving most of their goals (i.e., their preferences). Since you make a weaker statement below (that they only need to vaguely know the other’s preferences), I assume you intend this statement to mean something more along the lines of needing very little preference information to achieve preferences, rather than needing no preference information to achieve preferences (and it is probably not common for humans to have zero initial information about all relevant preferences anyway).
Knowing the temperature of the ice cream or the composition of the flour is important only in the sense that there can be human preferences in this direction.
But I don’t need to know them if you do and we share knowledge about states of the world.
I disagree. If I want to buy something from you, I benefit from knowing the minimum amount of money you will sell it for. This is a preference that applies specifically to you. Indeed, other people may require more or less money than you would. It is, therefore, optimal for me to know specifically where the lower end of your preference range is. Knowing other facts about the world, such as what money looks like or how to use it, would not, by themselves, resolve this situation. Likewise, if you wish to sell me something, you must know how much money I am willing to pay for it. You must also know whether I am willing to pay for it at all.
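A toy sketch of this point (the numbers and the function are made up for illustration): whether any deal exists at all, and in what price range, is fixed by the two reservation values, which are facts about our preferences rather than facts about money in general.

```python
# Toy sketch: a deal exists only if the buyer's maximum willingness
# to pay meets or exceeds the seller's minimum acceptable price --
# two facts about preferences, not about what money looks like.

def possible_deal(buyer_max, seller_min):
    """Return the range of mutually acceptable prices, or None if there is no deal."""
    if buyer_max < seller_min:
        return None
    return (seller_min, buyer_max)

print(possible_deal(buyer_max=10, seller_min=7))  # (7, 10): any price in between works
print(possible_deal(buyer_max=5, seller_min=7))   # None: no mutually acceptable price
```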
A very, very hazy idea of others’ preferences is sufficient, so improved knowledge beyond that isn’t too useful. Alternatively, with no idea of them, we can still trade by saying what we want and giving a preference ranking rather than trying to guess what the other wants.
If I were trading with someone, I might not be inclined to believe that they would always tell me the minimum they are willing to accept for something. Nor would I typically divulge such information about myself to them. Sure, you can trade by just asking someone what they want, but if they say they want your item for free, that’s not going to help if you want them to pay.
Since you state (“There are a lot of facts more important than understanding the other’s opinion”) is not a logical assertion but generally true, I assume you mean to say that it is true in the world we live in but would not have to be true in all possible worlds.
I did not mean that it is always true in this universe but not in other universes. Instead, I meant that it is almost always true in this universe. If you are in a situation in this world, such as a financial one or one in which you disagree over a joint action to take, it will almost always be better to get a unit of relevant information about consequences of actions than a unit of relevant information about the other person’s preferences, particularly if you can communicate half-decently or better.
By the lack of truth value, I meant that it was not clarified what preference the word important referred to. If the preference referred to is explained, then the expanded sentence has a truth value. Perhaps this is like the other sentence, and you meant it to refer to satisfying the preferences of others. Also, the consequences of actions can only be assigned a value if the preferences are known. No preferences = No consequences.
This depends heavily on an intuitive comparison of what “random relevant” information of a certain quantity looks like. That might not be intelligible; more likely, a formal treatment of “relevant” would clash with intuition too much to settle this decisively as true or false, but it wouldn’t fail to have a truth value.
Yes, these statements lead me to believe that you were stating something similar to your original sentence, and meant something like “There are a lot of facts more important for satisfying the preferences of the other person than understanding the other person’s opinion”. This seems incorrect to me. Also, I believe that you will find that all pieces of relevant information relate to one or more of the preferences involved. Relating to a preference is not exclusive: the same pieces of information can also relate to facts external to the person. Consider your example of the unfortunate cheese-loving person who believes the moon is made of cheese. This belief gives them both a false picture of the world and a false picture of their own cheese-related preferences. A belief that Saturn was made of salami would give them a false picture of the world, but not of those same cheese-related preferences.
I do not know which I find more tragic, the person who knows the goal but not the path to get there, or the person who knows perfectly all the paths, but not which one to take.
We’re discussing the goals of other people. Each type might be equally tragic, but if you had the opportunity to give a random actual person (or random hypothetical being) more knowledge about their goal or knowledge about the world, pick the world and it’s not a close decision!
My view on this discussion is that I have been saying “pick the world”...
It sounds like there is some misunderstanding of what I mean. Let me try to restate my position in a completely different way.
Preferences are, of course, facts. They could even be thought of as facts about the world, in the sense that they refer to a part of the world (i.e., a person). This is true in the same way that the color orange is a fact about the world, assuming that you clarify that it refers to the color of, say, a carrot, and not the color of everything in the world. If you remove the carrot, you remove its orange-ness with it. If you remove the person, you remove their preference with them. Similarly, if you remove the preference involved, then you remove its importance with it. The importance is a property of the preference, just as the preference is a property of the person. This was why I was saying that the statement of importance (referring to a preference) had no truth value—because the preference it was important to was not stated. As such, I read it as ‘There are a lot of facts more important for x than understanding the other person’s opinion’. Since x was unknown to me, the statement could not be evaluated to true or false any more than saying ‘x is orange’ could. The revision I posted above (based on your earlier revision of your other sentence) can be evaluated as true or false.
My position is that one should know the preferences involved with great precision if one wishes to maximally satisfy those preferences, since this eliminates time spent establishing irrelevant facts (of which there are infinitely many). Furthermore, one needs to know about the people involved, since the preferences are a property of the people. Therefore, many of the facts about the preferences will also be facts about people. There may, in any given case, be more numerous facts about the world that are relevant to these preferences than facts about the person. Nevertheless, one unit of information about the person which relates to the preferences to be satisfied can easily eliminate over a million items of irrelevant information from the search space of information to be dealt with.
Here is an example:
Two programmers have a disagreement about whether they should try to program a more intelligent AI. The first programmer writes a twenty-page email to the second programmer to assure them that the more intelligent AI will not be a threat to human civilization. This person employs all the facts at their disposal to explain this, and their argument is airtight. The second programmer responds that they never thought that the improved program would be a threat to civilization—just that hiring the extra programmers required to improve it would cost too much money.
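To make the earlier search-space point concrete, here is a toy sketch (the item counts and categories are invented for illustration, echoing the buyer in the dialogue above who won’t buy anything over ten years old): one declared preference sets aside nearly all the candidate items before any further facts about them need to be established.

```python
# Toy sketch: one declared preference prunes the space of facts worth
# establishing. The buyer in the earlier dialogue wants a computer and
# won't buy anything over ten years old; knowing that, almost all of
# the candidate items can be set aside before investigating them.

candidate_items = [
    {"id": i, "kind": "computer" if i % 50 == 0 else "other", "age_years": i % 20}
    for i in range(100_000)
]

def matches_stated_preference(item):
    return item["kind"] == "computer" and item["age_years"] <= 10

relevant = [item for item in candidate_items if matches_stated_preference(item)]
print(len(candidate_items), "items to investigate with no preference information")
print(len(relevant), "items left once one preference is known")
```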
The less you understand a person, the less you can satisfy their preferences. Whether that decreased satisfaction is good enough for you depends on a number of factors, including the magnitude of the decrease (which may or may not vary widely for a given unit of preference information, depending on what it is), how much time you are willing to waste with irrelevant information, and your threshold for ‘good enough’.