One of my frequent criticisms of LessWrong denizens is that they are very quick to say “This is too confused” when they should be saying “I don’t understand and don’t care to take the time to try to understand”.
And if you can’t imagine a concrete interpretation of “don’t over-optimize”, then you have obviously done no simulation work whatsoever. One of the most common problems (if not the most common) is to watch a simulation chugging along with all the parameters within normal ranges, only to have everything suddenly rabbit-hole to extreme and unlikely values because some minor detail (one that rules out a virtually impossible edge case) is missing from the simulation.
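(To make the simulation point concrete, here is a minimal toy sketch. Everything in it — the factory model, the numbers, the crude hill-climbing search — is my own hypothetical illustration, not anything from the commenter’s actual simulations: a naive search over one parameter runs off to an absurd extreme because the model never rules out the “obviously impossible” case of consuming material that isn’t there.)

```python
# Hypothetical sketch of the failure mode described above: an optimizer drives a
# simulation parameter to an extreme value because one "can't happen" edge case
# was never encoded in the model.

def run_factory(run_rate: float, hours: int = 100) -> float:
    """Toy production simulation. Raw-material inventory is drawn down each hour.

    The modeller 'knows' you can't consume material you don't have, but that
    constraint was never written down, so inventory silently goes negative.
    """
    inventory = 50.0
    produced = 0.0
    for _ in range(hours):
        used = run_rate              # missing: used = min(run_rate, inventory)
        inventory -= used            # inventory drifts far below zero...
        produced += used             # ...while "production" keeps climbing
    return produced

def hill_climb(objective, x: float, step: float = 1.0, iters: int = 10_000) -> float:
    """Naive stand-in for whatever search the surrounding simulation performs."""
    for _ in range(iters):
        if objective(x + step) > objective(x):
            x += step
        else:
            break
    return x

if __name__ == "__main__":
    best_rate = hill_climb(run_factory, x=1.0)
    # Because the edge case is missing, the objective increases without bound and
    # the search reports an extreme, physically meaningless run rate.
    print(best_rate, run_factory(best_rate))
```

With the missing inventory check, “production” grows without bound, so the search happily reports an absurd run rate; add the check back and the optimum becomes finite and ordinary, which is exactly the “minor detail missing from the simulation” pattern.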
Or, can you really not see someone over-optimizing their search for money at the expense of their happiness or the rest of their life?
Of course, this comment will rapidly be karma’d into oblivion and the echo chamber will continue.
The burden of clarity falls on the writer. Not all confusion is the writer’s fault, but confused writing is a very major problem in philosophy. In fact, I would say it’s more of a problem than falsehood is. There’s no shame in being confused—almost everyone is, especially around complex topics like morality. But you can’t expect to make novel contributions that are any good until you’ve untangled the usual confusions and understood the progress that’s previously been made.
If someone sacrifices happiness to seek money, the problem is not that they’re doing too good a job of earning money; it’s that they’re optimizing the wrong thing entirely. An AI wouldn’t see your advice against over-optimizing and put more resources into finding happiness for people; instead, it would waste some of its money to make sure it didn’t have too much.
A good point and well written. My counter-point is that numerous other people have not had problems with my logic; have not needed special definitions of terms that were pretty clearly standard English; have not insisted on throwing up strawmen; etc.
Your assumption is that I haven’t untangled the usual confusions and that I haven’t read the literature. It’s an argument from authority, but I can’t help pointing out that I was a Philosophy major 30 years ago and have been reading and learning constantly since then. Further, the outside view is that it is LessWrong that is confused and intolerant of outside views.
Your second argument is a classic case of a stupid super-intelligent AI.
Then apparently Less Wrong readers are more stupid or more ignorant than your previous audience. In which case I am afraid you will have to dumb down your writing so that it is comprehensible and useful to your current target audience.
This is the type of strawman that frustrates me. I said nothing of the sort.
An equally valid interpretation (and my belief) is that LessWrong readers are much less tolerant of common English phrases and much more prone to inventing strawmen, to the point of making communication at any decent speed nearly impossible. I’m starting to really get the lesson that LessWrong is conservative to an extreme (this is not a criticism at all).
Your point about altering my writing for the current target audience is dead on the money. In general, your post was as adversarial as my writing is interpreted as being. There’s a definite double standard here (but since I’m here as a guest, I shouldn’t complain).
LW readers are, perhaps, more cautious than average about “accepting common English phrases” because a major topic in rationality is precisely the fact that such common phrases often conceal fatal vagueness. Whether or not I agree with you that you’ve been using certain words and phrases to mean exactly what an ordinary English speaker would understand them to mean, this kind of caution surrounding ordinary language is generally considered to be a feature, not a bug, of discourse around here.
As far as the double standard thing, it seems like the one hypothesis you can’t bring yourself to entertain is that nobody can figure out what you’re talking about, despite some fairly sympathetic attempts to do so. After a few times around, everyone will have lost patience with you, yes. But that’s not a double standard. (I say this as emphatically an outsider: I don’t comment here much and no one at LW knows me from Adam.)
(Sorry in advance that I won’t be able to reply to any comments for at least 24 hours, since I’m traveling—musicology conference this week!)
That’s over-optimising a single aspect, resulting in overall under-optimisation.
It’s not over-optimising overall.
True. And I did not say over-optimising overall. Humans are very prone to over-optimization (e.g., money at the expense of happiness and/or a life). How would you have phrased that?
Humans usually phrase it as “You should keep your priorities straight”.
Thank you but I don’t feel that that clearly expresses my point.
In your example, money versus happiness is a choice between alternatives. Whatever goal you are trying to optimize towards should provide the guidance in making the choices between alternatives.
Language about “over-optimizing” one alternative at the expense of another distracts from identifying your real goals and how you make the tradeoffs to achieve them.
“Do not over-optimise one aspect at the expense of overall utility”
Good phrasing.
But which is better?
You should not over-optimize or be short-sighted at the expense of overall utility.
OR
You should not be short-sighted or over-optimize at the expense of overall utility.
“At the expense of overall utility” applies to both halves of the statement.
Is it still just as bad? Or was the initial comment a bit hasty and unwarranted in that respect?
“At the expense of overall utility” is unnecessary for the “short-sighted” bit: that is implied by the phrase. Short-sightedness is a well-known character flaw.
And your version is still bad. “Over-optimising at the expense of overall utility” is hard to parse. You’re missing “one aspect”. You shouldn’t over-optimise one aspect at the expense of overall utility.