So you’re saying that I am only allowed to use “should” to mean “WillSawin_should”. I can’t use it to mean “wedrifid_should”.
No, that is another rather bizarre thing which I definitely did not say. Perhaps it will be best for me to just leave it with my initial affirmation of Luke’s post:
We must give up one, and I say give up yours.
I would much prefer to keep Luke’s.
In my observation Luke’s system for reducing moral claims provides more potential for enabling effective communication between agents and a more comprehensive way to form a useful epistemic model of such conversations.
So suppose I say:
“I wedrifid_should do X” and then don’t do X. Clearly, I am not being inconsistent.
but if I say:
“I should do X” and then don’t do X then I am being inconsistent.
Something must therefore prevent me from using “should” to mean “wedrifid_should”.
I’d agree that you can (and probably do) use plain old “should” to mean multiple things. The trouble is that this isn’t very useful for communication. So when communicating, we humans use heuristics to figure out which “should” is meant.
In the example of the conversation, if I say “you should X” and you say “I agree,” then I generally use a shortcut and take you to mean Will-should. The obvious reason for this is that if you meant Manfred-should, you would just be repeating my own statement back to me, which would communicate nothing, and it’s a decent shortcut to assume that when people say something they want to communicate. The only other obvious “should” in the conversation is Will-should, so it’s a good guess that you meant Will-should.
“I agree” generally means the same thing as repeating someone’s statement back at them. We can expand:
“You wedrifid_should do X”
“I agree, I Will_should do X”
I seem to be making an error of interpretation here if I say things the way you normally say them! Why, in this instance, is it considered normal and acceptable to interpret professed agreement as expressing a different belief from the one being agreed to?
It all seems fishy to me.
Huh, yeah, that is weird. But on thinking about it, I can only think of two situations in which I’ve heard or used “I agree.” One is when there’s a problem with an uncertain solution, where it means “My solution-finding algorithm also returned that”; the other is when someone offers a suggestion about what should be done, where I seem to be claiming it usually means “My should-finding algorithm also returned that.”
In the first case, would you say that the Manfred_solution is something or other? That you and I mean something different by “solution”?
Of course not.
So why would you do something different for “should”?
Because there’s no objective standard against which “should-finding algorithms” can be tested, the way there is for “solution-finding algorithms.” If there were no objective standard for solutions, I would absolutely stop talking about “the solution” and start talking about the Manfred_solution.
Didn’t you say in the other thread that we can disagree about the proper state of the world?
When we do that, what thing are we disagreeing about? It’s certainly not a standard, but how can it be subjective?
That’s the objective thing I am talking about.
Hm. I agree that you can disagree about some world-state that you’d like, but I don’t understand how we could move that from “we disagree” to “there is one specific world-state that is the standard.” So I stand by “no objective standard” for now.
I assume you are talking about proper or desirable world-states rather than actual ones.
I didn’t say it was the standard.
The idea is this.
If we disagree about what world state is best, there has to be some kind of statement I believe and you don’t, right? Otherwise, we wouldn’t disagree. Some kind of statement like “This world state is best.”
But the difference isn’t about some measurable property of the world, but about internal algorithms for deciding what to do.
Sure, to the extent that humans are irrational and can pit one desire against another, arguing about how to determine “best” is not a total waste of time, but I don’t think that has much bearing on subjectivity.
I’m losing the thread of the conversation at this point.
I have no solution to that problem.
Perhaps the meaning of the paragraph you quote wasn’t clear—I was trying hard to be polite rather than frank. You seem to be attacking a straw man using rhetorical questions so trivial that I would consider them disingenuous prior to adjusting for things like the illusion of transparency. Your conversation with lukeprog seems like one with more potential for useful communication. He cares about the subject far more than I do.