Practically speaking, I don’t think it is important to achieve the sort of goals humans generally want to achieve.
Should be read as “Practically speaking, I don’t think it (doing the thing we are talking about, knowing others’ preferences) is important to achieve the sort of goals humans generally want to achieve.”
Upvoted for clarifying this point. This changes my interpretation of the sentence considerably, so perhaps I can now address your intended meaning. This statement does have a truth value (and I believe it to be false). I disagree that knowing another human’s preferences is unimportant to achieving most of their goals (i.e. their preferences). Since you make a weaker statement below (that one only needs a vague sense of the other’s preferences), I assume you intend this statement to mean that very little preference information is needed to satisfy preferences, rather than that no preference information is needed at all (and it is probably not common for humans to start with zero information about all relevant preferences anyway).
Knowing the temperature of the ice cream or the composition of the flour is important only in the sense that there can be human preferences in this direction.
But I don’t need to know them if you do and we share knowledge about states of the world.
I disagree. If I want to buy something from you, I benefit from knowing the minimum amount of money you will sell it for. This is a preference that applies specifically to you. Indeed, other people may require more or less money than you would. It is, therefore, optimal for me to know specifically where the lower end of your preference range is. Knowing other facts about the world, such as what money looks like or how to use it, would not, by themselves, resolve this situation. Likewise, if you wish to sell me something, you must know how much money I am willing to pay for it. You must also know whether I am willing to pay for it at all.
A very, very hazy idea of others’ preferences is sufficient, so improved knowledge beyond that isn’t too useful. Alternatively, with no idea of them, we can still trade by saying what we want and giving a preference ranking rather than trying to guess what the other wants.
If I were trading with someone, I might not be inclined to believe that they would always tell me the minimum they are willing to accept for something. Nor would I typically divulge such information about myself to them. Sure, you can trade by just asking someone what they want, but if they say they want your item for free, that’s not going to help if you want them to pay.
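To make the stakes concrete, here is a minimal sketch with purely hypothetical numbers: a buyer makes a single take-it-or-leave-it offer, and whether any surplus is captured at all depends on how well the buyer has estimated the seller’s private reservation price.

```python
# Minimal sketch with hypothetical numbers: a single take-it-or-leave-it offer.
# The seller's reservation price is private, so the buyer's outcome depends on
# how accurately that preference has been estimated.

def buyer_surplus(offer: float, seller_reservation: float, buyer_value: float) -> float:
    """Surplus the buyer captures from one offer; zero if the seller refuses."""
    if offer >= seller_reservation:
        return buyer_value - offer  # trade happens; buyer keeps the difference
    return 0.0                      # offer below the seller's minimum; no trade

buyer_value = 100.0        # assumed: what the item is worth to the buyer
seller_reservation = 60.0  # assumed: seller's true minimum, unknown to the buyer

print(buyer_surplus(60.0, seller_reservation, buyer_value))  # knows the minimum: 40.0
print(buyer_surplus(50.0, seller_reservation, buyer_value))  # guesses too low: 0.0
print(buyer_surplus(90.0, seller_reservation, buyer_value))  # guesses too high: 10.0
```

In this toy setup, knowing general facts about money does nothing to move the buyer off the 0.0 or 10.0 outcomes; only information about the seller’s preference does.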
Since you state that “There are a lot of facts more important than understanding the other’s opinion” is not a logical assertion but is generally true, I assume you mean that it is true in the world we live in but would not have to be true in all possible worlds.
I did not mean that it is always true in this universe but not in other universes; I meant that it is almost always true in this universe. If you are in a situation in this world, such as a financial one or one in which you disagree over a joint action to take, it will almost always be better to get a unit of relevant information about the consequences of actions than a unit of relevant information about the other person’s preferences, particularly if you can communicate half-decently or better.
By the lack of truth value, I meant that it was not clarified which preference the word “important” referred to. If the preference referred to is specified, then the expanded sentence has a truth value. Perhaps this is like the other sentence, and you meant it to refer to satisfying the preferences of others. Also, the consequences of actions can only be assigned a value if the preferences are known. No preferences = no consequences.
This depends heavily on an intuitive comparison of what “random relevant” information of a certain quantity looks like. That might not be intelligible; more likely, a formal treatment of “relevant” would clash with intuition when used to settle this decisively as true or false, but it wouldn’t fail to have a truth value.
Yes, these statements lead me to believe that you were stating something similar to your original sentence, and meant something like “There are a lot of facts more important for satisfying the preferences of the other person than understanding the other person’s opinion”. This seems incorrect to me. Also, I believe that you will find that all pieces of relevant information relate to one or more of the preferences involved. This relation is not mutually exclusive, since these pieces of relevant information could also relate to facts external to the person. Consider your example of the unfortunate cheese-loving person who believes the moon is made of cheese. This belief gives them both a false picture of the world and a false picture of their own cheese-related preferences. A belief that Saturn was made of salami would give them a false picture of the world, but not of those same cheese-related preferences.
I do not know which I find more tragic, the person who knows the goal but not the path to get there, or the person who knows perfectly all the paths, but not which one to take.
We’re discussing the goals of other people. Each type might be equally tragic, but if you had the opportunity to give a random actual person (or random hypothetical being) more knowledge about their goal or more knowledge about the world, pick the world; it’s not a close decision!
My view on this discussion is that I have been saying “pick the world”...
It sounds like there is some misunderstanding of what I mean. Let me try to restate my position in a completely different way.
Preferences are, of course, facts. They could even be thought of as facts about the world, in the sense that they refer to a part of the world (i.e. a person). This is true in the same way that the color orange is a fact about the world, assuming that you clarify that it refers to the color of, say, a carrot, and not the color of everything in the world. If you remove the carrot, you remove its orange-ness with it. If you remove the person, you remove their preference with them. Similarly, if you remove the preference involved, then you remove its importance with it. The importance is a property of the preference, just as the preference is a property of the person. This was why I was saying that the statement of importance (referring to a preference) had no truth value: the preference it was important to was not stated. As such, I read it as ‘There are a lot of facts more important for x than understanding the other person’s opinion’. Since x was unknown to me, the statement could not be evaluated as true or false any more than saying ‘x is orange’ could. The revision I posted above (based on your earlier revision of your other sentence) can be evaluated as true or false.
My position is that one should know the preferences involved with great precision if one wishes to maximally satisfy those preferences, since this eliminates time spent establishing irrelevant facts (of which there is an infinite number). Furthermore, one needs to know about the people involved, since the preferences are a property of the people. Therefore, many of the facts about the preferences will also be facts about people. In any given case, there may be more relevant facts about the world than about the person. Nevertheless, one unit of information about the person which relates to the preferences to be satisfied can easily eliminate over a million items of irrelevant information from the search space of information to be dealt with.
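To put rough (assumed) numbers on that last claim: if a “unit” of information is read as one bit, and each bit of preference-relevant information rules out half of the remaining candidate facts as irrelevant, the arithmetic works out as follows.

```python
# Back-of-the-envelope arithmetic for the "over a million items" claim, under
# the purely illustrative assumption that one bit of preference-relevant
# information rules out half of the remaining candidate facts as irrelevant.

search_space = 2 ** 21   # assumed number of candidate facts under consideration
bits_learned = 1         # one "unit" of information about the person, read as one bit

remaining = search_space // (2 ** bits_learned)
eliminated = search_space - remaining

print(search_space)  # 2097152 candidate facts before learning anything
print(eliminated)    # 1048576 facts ruled out by a single bit of preference information
```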
Here is an example:
Two programmers have a disagreement about whether they should try to program a more intelligent AI. The first programmer writes a twenty-page email to the second programmer to assure them that the more intelligent AI will not be a threat to human civilization. This person employs all the facts at their disposal to explain this, and their argument is airtight. The second programmer responds that they never thought that the improved program would be a threat to civilization, just that hiring the extra programmers required to improve it would cost too much money.
The less you understand a person, the less you can satisfy their preferences. Whether that decreased satisfaction is good enough for you depends on a number of factors, including the magnitude of the decrease (which may or may not vary widely for a given unit of preference information, depending on what it is), how much time you are willing to waste with irrelevant information, and your threshold for ‘good enough’.