A reasonable challenge—or, rather, half of one; after all, what CFAR “is trying to accomplish” is of no consequence. What they have accomplished, of course, is of great consequence.
That’s fair. I include the “trying” part because it is some evidence about the value of activities that, to outsiders, don’t obviously directly cause the desired outcome.
(If someone says their goal is to cause X, and in fact they do cause X, but along the way they do some seemingly unrelated activity Y, that is some evidence that Y is necessary or useful for X; more evidence, that is, than if they had done Y and happened to cause X without having X as a primary goal.
In other words, independently of how much someone is actually accomplishing X, the more they are trying to cause X, the more one should expect them to be attempting to filter their activities for accomplishing X. And the more they are actually accomplishing X, the more one should update on the filter being accurate.)
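The parenthetical above can be made concrete with a toy model. Everything below is an illustrative assumption of mine, not anything claimed in the thread: an agent either filters its activities for X-usefulness or picks them at the base rate, "trying" agents are more likely to filter, and accomplishing X is evidence the filter is accurate.

```python
# Toy model of the evidence-filter argument. All numbers are illustrative
# assumptions, not anything claimed in the thread.

BASE_RATE_USEFUL = 0.2   # chance a randomly chosen activity Y helps cause X

def p_y_useful(p_filter: float, filter_accuracy: float) -> float:
    """P(activity Y is useful for X).

    The agent filters its activities for X-usefulness with probability
    p_filter; a filtered choice is useful with probability filter_accuracy,
    and everything else falls back to the base rate.
    """
    effective_filter = p_filter * filter_accuracy
    return effective_filter + (1 - effective_filter) * BASE_RATE_USEFUL

# "Trying" agents are assumed much more likely to filter their activities:
P_FILTER_IF_TRYING = 0.8
P_FILTER_IF_INCIDENTAL = 0.1

# Observing that the agent actually accomplished X is evidence the filter
# is accurate, so compare a lower accuracy against a higher one:
for accuracy in (0.5, 0.9):
    trying = p_y_useful(P_FILTER_IF_TRYING, accuracy)
    incidental = p_y_useful(P_FILTER_IF_INCIDENTAL, accuracy)
    print(f"accuracy={accuracy}: trying={trying:.2f}, incidental={incidental:.2f}")
```

Under these made-up numbers, both "trying" and accomplishment raise P(Y is useful for X), which is all the parenthetical claims.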
I don’t think that I agree with this framing. (Consider the following to be a sort of thinking-out-loud.)
Suppose that activity Y is, to an outsider (e.g., me), neutral in value—neither beneficial nor harmful. You come to me and say: “I have done thing X, which you take to be beneficial; as you have observed, I have also been engaging in activity Y. I claim that Y is necessary for the accomplishment of X. Will you now update your evaluation of Y, and judge it no longer as neutral, but in fact as positive (on account of the fact—which you may take my word, and my accomplishment of X, as evidence—that Y is necessary for X)?”
My answer can only be “No”. No, because whatever may or may not be necessary for you to accomplish outcome X, nonetheless it is only X which is valuable to me. How you bring X about is your business. It is an implementation detail; I am not interested in implementation details, when it comes to evaluating your output (i.e., the sum total of the consequences of all your actions).[1]
Now suppose that Y is not neutral in my eyes, but rather, of negative value. I tally up your output, and note: you have caused X—this is to your credit! But, at the same time, you have done Y—this I write down in red ink. And again you come to me and say: “I see you take X to be positive, but Y to be negative; but consider that Y is necessary for X [which, we once again assume, I may have good reason to trust is the case]! Will you now move Y over to the other side of the ledger, seeing as how Y is a sine qua non of X?”
And once again my answer is “No”. Whatever contribution Y has made to the accomplishment of X, I have already counted it—it is included in the value I place on X! To credit you again for doing Y would be double-counting.[2] But the direct negative value of Y to me—that part has not already been included in my evaluation of X; so indeed I am correct in debiting Y’s value from your account.
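For what it’s worth, the ledger argument above is just arithmetic, and can be written out as such. The figures here are invented purely for illustration:

```python
# Toy ledger for the double-counting argument; all values are invented.

value_of_x = 10        # my valuation of outcome X, which already prices in
                       # whatever instrumental contribution Y made to X
direct_cost_of_y = -3  # Y's direct negative value to me

# Correct tally: Y enters only through its direct cost, because its
# instrumental contribution is already inside value_of_x.
correct_total = value_of_x + direct_cost_of_y

# Mistaken tally: crediting Y a second time for being "necessary for X".
instrumental_credit_for_y = 4  # hypothetical extra credit
double_counted_total = correct_total + instrumental_credit_for_y

print(correct_total, double_counted_total)  # 7 11
```

The gap between the two totals is exactly the double-counted instrumental credit; nothing about whether Y really was necessary for X enters the correct tally.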
And so, in the final analysis, all questions about what you may or may not have been “trying” to do—and any other implementation details, any other facts about how you came by the outcomes of your efforts—simply factor out.
Of course your implementation details may very well be of interest to me when it comes to predicting your future output; but that is a different matter altogether!
Note, by the way, that this formulation entirely removes the need for me to consider the truth of your claim that Y is necessary for X. Once again we see the correctness of ignoring implementation details, and looking only at outcomes.
It seems like there’s a consistent disagreement here about how much implementation details matter.
And I think it’s useful to remember that some things _are_ just implementation details. Sometimes you’re burning coal to produce energy, and if you build your entire thought process around “coal is necessary to produce energy” you might not consider wind or nuclear power.
But realistically I think implementation details do matter, and if the best way to get X is with Y… no, that shouldn’t lead you to think Y is good in-and-of-itself, but it should affect your model of how everything fits together.
Understanding the mechanics of how the world works is how you improve how the world works. If you abstract away all the lower level details you lose the ability to reconfigure them.
I don’t disagree with what you say, but I’m not sure that it’s responsive to my comments. I never said, after all, that implementation details “don’t matter”, in some absolute sense—only (here) that they don’t matter as far as evaluation of outcomes goes! (Did you miss the first footnote of the grandparent comment…?)
Understanding the mechanics of how the world works is how you improve how the world works. If you abstract away all the lower level details you lose the ability to reconfigure them.
Yes, of course. But I am not the one doing any reconfiguring of, say, CFAR, nor am I interested in doing so! It is of course right and proper that CFAR employees (and/or anyone else in a position to, and with a motivation to, improve or modify CFAR’s affairs) understand the implementation details of how CFAR does the things they do. But what is that to me? Of academic or general interest—yes, of course. But for the purpose of evaluation…?
It seemed like it mattered with regard to the original context of this discussion, where the thing I was asking was “what would LW output if it were going well, according to you?” (I realize this question perhaps unfairly implies you cared about my particular frame in which I asked the question)
If LessWrong’s job were to produce energy, and we did it by burning coal, pollution and other downsides might be a cost that we weigh. But if we thought about “how would we tell things had gone well in another 20 years?”, then unless we had a plan for switching the entire plant over to solar panels, we should probably expect roughly similar levels of whatever the costs were (maybe with some reduction based on efficiency), rather than those downsides disappearing into the mists of time.
(I realize this question perhaps unfairly implies you cared about my particular frame in which I asked the question)
Sure, but more importantly, what you asked was this:
In 20 years if everything on LW went exactly the way you think is _ideal_, what are the good things that would have happened along the way, and how would we know that we made the right call?
[emphasis mine]
Producing energy by burning coal is hardly ideal. As you say upthread, it’s well and good to be realistic about what can be accomplished and how it can be accomplished, but we shouldn’t lose track of what our goals (i.e., our ideals) actually are.
I’m not too worried about the conversation continuing in the manner it has, but I’m pretty sure I’ve now covered everything I had to say before actually drilling down into the details.