If the point is “the Cons are not all bad; they are partly good, to the extent that they contribute to (or are perhaps even necessary for?) the Pros”, then—granted.
If the point is “the Cons are not bad at all, and the reasons for considering them to be bad do not exist, because they also contribute to the Pros”, then that is revealed to be manifestly incoherent as soon as it’s made explicit.
If the point is something else entirely, then I reserve judgment until clarification.
If you found out some of those cons (or some close version of them) were necessary in order to achieve those pros, would anything shift for you?
For instance, if you saw people working on/improving/increasing the cons… would you see those people as acting badly/negatively if you knew it was the only realistic way to achieve the pros?
(This is just in the hypothetical world where this is true. I do not know if it is.)
Like, what if we just live in a “tragic world” where you can’t achieve the things on your pros list without… basically feeding people’s desire for community and connection? And what if people’s desire for connection often ends up taking the form of wanting to live/work/interact together? Would anything shift for you?
(If my hypothetical does nothing, then could you come up with a hypothetical that does?)
If you found out some of those cons (or some close version of them) were necessary in order to achieve those pros, would anything shift for you?
This question is incomplete. The corrected version would read:
“If you found out some of those cons (or some close version of them) were necessary in order to achieve some of those pros, would anything shift for you?”
Given this, the answer is “of course, and it would depend on which Cons were necessary in order to achieve which Pros”.
Now, you said that your question is a mere hypothetical, but let’s not obfuscate: clearly, if not you, then at least other folks here think that your hypothetical scenario describes reality. But as Ray commented elsethread, this is hardly the ideal context to hash out the details of this topic. So I won’t. I will, however, ask you this:
Do you think that some of the Cons on my list are necessary in order to achieve some of the Pros? (No need to provide details on which, etc.)
If the point is “the Cons are not all bad; they are partly good, to the extent that they contribute to (or are perhaps even necessary for?) the Pros”, then—granted.
Yes, this is the point. (I wouldn’t personally put it quite that way, since by my own evaluation the things I mentioned—EA, CFAR, rationalist communities—are much better than “not all bad” makes it sound. But yes, it seems like someone who values the things on your pros list should at least think that those things are not all bad.)
then—granted.
For clarity—when you say, “granted”, do you mean, “Yes, I already believed that, and I stand by my pros and cons list, as written.” Or do you mean, “Good point. You’ve given me an update, and I would no longer endorse the statement, ‘Almost everything CFAR has done belongs on the con side of a pros-and-cons list.’”?
If the former (such that you would still endorse the “Almost everything...” statement), I would challenge whether that position is consistent with both 1) highly valuing the things on your pros list, and also 2) having an accurate view of the facts on the ground of what CFAR is trying to accomplish and has actually accomplished.
I could see that position being consistent if you thought CFAR’s other actions were highly negative. But my guess is that you see them as being closer to useless (and widely overvalued), rather than so negative as to make their positive contributions a rounding error.
In any case, I’m happy to table that debate if you’d like, as has been suggested in other comments.
For clarity—when you say, “granted”, do you mean, “Yes, I already believed that, and I stand by my pros and cons list, as written.” Or do you mean, “Good point. You’ve given me an update, and I would no longer endorse the statement, ‘Almost everything CFAR has done belongs on the con side of a pros-and-cons list.’”?
The middle way, viz.:
Good point. You’ve given me an update, and I would still endorse the statement, ‘Almost everything CFAR has done belongs on the con side of a pros-and-cons list.’
… I would challenge whether that position is consistent with both 1) highly valuing the things on your pros list, and also 2) having an accurate view of the facts on the ground of what CFAR is trying to accomplish and has actually accomplished.
A reasonable challenge—or, rather, half of one; after all, what CFAR “is trying to accomplish” is of no consequence. What they have accomplished, of course, is of great consequence. I allow that I may have an inaccurate view of their accomplishments. I would love to see an overview, written by a neutral third party, that summarizes everything that CFAR has ever done.
I could see that position being consistent if you thought CFAR’s other actions were highly negative. But my guess is that you see them as being closer to useless (and widely overvalued), rather than so negative as to make their positive contributions a rounding error.
I’m afraid your guess is mistaken (though I would quibble with the “rounding error” phrasing—that is a stronger claim than any I have made).
A reasonable challenge—or, rather, half of one; after all, what CFAR “is trying to accomplish” is of no consequence. What they have accomplished, of course, is of great consequence.
That’s fair. I include the “trying” part because it is some evidence about the value of activities that, to outsiders, don’t obviously directly cause the desired outcome.
(If someone says their goal is to cause X, and in fact they do actually cause X, but along the way they do some seemingly unrelated activity Y, that is some evidence that Y is necessary or useful for X—relative to a case where they had done Y and also happened to cause X, but didn’t have causing X as a primary goal.
In other words, independently of how much someone is actually accomplishing X, the more they are trying to cause X, the more one should expect them to be attempting to filter their activities for accomplishing X. And the more they are actually accomplishing X, the more one should update on the filter being accurate.)
I don’t think that I agree with this framing. (Consider the following to be a sort of thinking-out-loud.)
Suppose that activity Y is, to an outsider (e.g., me), neutral in value—neither beneficial nor harmful. You come to me and say: “I have done thing X, which you take to be beneficial; as you have observed, I have also been engaging in activity Y. I claim that Y is necessary for the accomplishment of X. Will you now update your evaluation of Y, and judge it no longer as neutral, but in fact as positive (on account of the fact—for which you may take my word, and my accomplishment of X, as evidence—that Y is necessary for X)?”
My answer can only be “No”. No, because whatever may or may not be necessary for you to accomplish outcome X, nonetheless it is only X which is valuable to me. How you bring X about is your business. It is an implementation detail; I am not interested in implementation details, when it comes to evaluating your output (i.e., the sum total of the consequences of all your actions).[1]
Now suppose that Y is not neutral in my eyes, but rather, of negative value. I tally up your output, and note: you have caused X—this is to your credit! But, at the same time, you have done Y—this I write down in red ink. And again you come to me and say: “I see you take X to be positive, but Y to be negative; but consider that Y is necessary for X [which, we once again assume, I may have good reason to trust is the case]! Will you now move Y over to the other side of the ledger, seeing as how Y is a sine qua non of X?”
And once again my answer is “No”. Whatever contribution Y has made to the accomplishment of X, I have already counted it—it is included in the value I place on X! To credit you again for doing Y would be double-counting.[2] But the direct negative value of Y to me—that part has not already been included in my evaluation of X; so indeed I am correct in debiting Y’s value from your account.
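To put toy numbers on the no-double-counting point (the figures here are purely my own illustration, not anything claimed in this thread): suppose I value X at +10, where that +10 already reflects whatever Y contributed to bringing X about, and suppose Y’s direct cost to me is −3. Then the only consistent tally is:

$$
\underbrace{(+10)}_{\substack{\text{value of X, already including} \\ \text{whatever Y contributed to it}}} \;+\; \underbrace{(-3)}_{\text{direct cost of Y}} \;=\; +7
$$

Crediting Y a second time for being necessary to X would count its contribution twice; the only part of Y not already captured in the +10 is its direct cost.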
And so, in the final analysis, all questions about what you may or may not have been “trying” to do—and any other implementation details, any other facts about how you came by the outcomes of your efforts—simply factor out.
Of course your implementation details may very well be of interest to me when it comes to predicting your future output; but that is a different matter altogether!
Note, by the way, that this formulation entirely removes the need for me to consider the truth of your claim that Y is necessary for X. Once again we see the correctness of ignoring implementation details, and looking only at outcomes.
It seems like there’s a consistent disagreement here about how much implementation details matter.
And I think it’s useful to remember that things _are_ just implementation details. Sometimes you’re burning coal to produce energy, and if you wrap up your entire thought process around “coal is necessary to produce energy” you might not consider wind or nuclear power.
But realistically I think implementation details do matter, and if the best way to get X is with Y… no, that shouldn’t lead you to think Y is good in and of itself, but it should affect your model of how everything fits together.
Understanding the mechanics of how the world works is how you improve how the world works. If you abstract away all the lower level details you lose the ability to reconfigure them.
I don’t disagree with what you say, but I’m not sure that it’s responsive to my comments. I never said, after all, that implementation details “don’t matter”, in some absolute sense—only (here) that they don’t matter as far as evaluation of outcomes goes! (Did you miss the first footnote of the grandparent comment…?)
Understanding the mechanics of how the world works is how you improve how the world works. If you abstract away all the lower level details you lose the ability to reconfigure them.
Yes, of course. But I am not the one doing any reconfiguring of, say, CFAR, nor am I interested in doing so! It is of course right and proper that CFAR employees (and/or anyone else in a position to, and with a motivation to, improve or modify CFAR’s affairs) understand the implementation details of how CFAR does the things they do. But what is that to me? Of academic or general interest—yes, of course. But for the purpose of evaluation…?
It seemed like it mattered with regard to the original context of this discussion, where the thing I was asking was “what would LW output if it were going well, according to you?” (I realize this question perhaps unfairly implies you cared about my particular frame in which I asked the question)
If LessWrong’s job were to produce energy, and we did it by burning coal, pollution and other downsides might be a cost that we weigh. But if we thought about “how would we tell things had gone well in another 20 years?”, then unless we had a plan for switching the entire plant over to solar panels, we should probably expect roughly similar levels of whatever the costs were (maybe with some reduction from efficiency), rather than those downsides disappearing into the mists of time.
(I realize this question perhaps unfairly implies you cared about my particular frame in which I asked the question)
Sure, but more importantly, what you asked was this:
In 20 years if everything on LW went exactly the way you think is _ideal_, what are the good things that would have happened along the way, and how would we know that we made the right call?
[emphasis mine]
Producing energy by burning coal is hardly ideal. As you say upthread, it’s well and good to be realistic about what can be accomplished and how it can be accomplished, but we shouldn’t lose track of what our goals (i.e., our ideals) actually are.
I’m not too worried about the conversation continuing in the manner it has, but I’m pretty sure I’ve now covered everything I had to say before actually drilling down into the details.
There may need to be more buckets than “pro” and “con.”
I propose “negative,” “neutral,” “positive,” “instrumental,” and “detrimental.”
Thus you can get things like “negative and yet instrumental” or “positive and yet detrimental,” where the first word is the thing taken reasonably in isolation and judged against a standard of virtue or quality, and the second word is the ramifications of the thing’s existence in the world in a long-term consequentialist sense.
(So returning to my favorite local controversy, punching people is Negative, but it’s possible that punch bug might consequentially be Instrumental for societies filled with good people that are overall on board with nonviolence and personal sovereignty.)
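A minimal sketch of how this two-axis labeling could be represented, if one wanted to make it concrete (the names and the representation are my own invention; the thread proposes only the labels themselves):

```python
from dataclasses import dataclass
from typing import Literal

# First word: the thing taken in isolation, judged against a standard of
# virtue or quality.
Isolated = Literal["negative", "neutral", "positive"]

# Second word: the ramifications of the thing's existence, in a long-term
# consequentialist sense. (The "neutral" option on this axis is my own
# addition; the thread names only "instrumental" and "detrimental".)
LongRun = Literal["instrumental", "neutral", "detrimental"]

@dataclass
class Item:
    name: str
    isolated: Isolated
    long_run: LongRun

# The punch-bug example from the thread: Negative, yet possibly Instrumental.
punch_bug = Item("punching people (punch bug)", "negative", "instrumental")
```

This is only meant to make the two-axis structure explicit; nothing about the representation is load-bearing.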
Other categorizations might do better to clarify cruxes … this was my attempt to create a paradigm that would allow you to zero in on the actual substance of disagreement.
Let’s not reinvent the wheel here. You’re talking about means and ends (which, in a consequentialist framework, are, of course, just “ends” and “other ends”).
(Your example may thus be translated as “punching people is negative ceteris paribus, as it has direct, immediate, negative effects; however, the knock-on effects, etc., may result in consequences which, when all aggregated and integrated over some suitable future period, are net positive”. Of course this gets us into the usual difficulties with aggregation, both intra-personally and interpersonally, but these may probably be safely bracketed… at least, provisionally.)
I’m talking about you and ESRogs zeroing in on where you disagree, because at least one of you is wrong and has a productive opportunity to update. Sorry if the example of punch bug was distracting, but I suspect fairly strongly that it is inappropriate and oversimplified to just have a pros-and-cons list in the case of these large evaluations you’re making—not least because in a black-or-white dichotomy, you lose resolution on the places where your assumptions actually differ.