A criticism I have of your posts is that you seem to view your typical audience member as somebody who stubbornly disagrees with your viewpoint, rather than as an undecided voter. More critically, you seem to view yourself as somebody capable of changing the former’s opinion through (very well-written) restatements of the relevant arguments. But people like me want to know why previous discussions, even discussions between key players, haven’t yet resolved the issue. The issue should be resolvable, and posts like this suggest to me that at least some players can’t even figure out why it isn’t yet.
Ideally, we’d take a Bayesian approach, where we have a certain prior estimate about how cost-effective the organization is, and then update our cost-effectiveness estimate based on additional evidence as it comes in. For reasons I argued earlier, and GiveWell has argued elsewhere, I think our prior estimate should be quite skeptical (i.e. expect cost-effectiveness to be not as good as AMF / much closer to average than naïvely estimated) until proven otherwise.
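For concreteness, here is a minimal sketch of the kind of Bayesian adjustment the quoted passage describes, using a normal prior and normal likelihood over log cost-effectiveness relative to a benchmark such as AMF. The model choice and all of the numbers are my own illustrative assumptions, not GiveWell’s or anyone else’s actual analysis:

```python
# Toy sketch of a skeptical-prior Bayesian adjustment -- illustrative numbers
# only. Work in log10 cost-effectiveness relative to a benchmark (e.g. AMF),
# with a normal prior and a normal likelihood, so the update is conjugate.

import math

def posterior(prior_mean, prior_sd, estimate, estimate_sd):
    """Normal-normal update: returns (posterior_mean, posterior_sd)."""
    prior_prec = 1.0 / prior_sd**2
    est_prec = 1.0 / estimate_sd**2
    post_var = 1.0 / (prior_prec + est_prec)
    post_mean = post_var * (prior_prec * prior_mean + est_prec * estimate)
    return post_mean, math.sqrt(post_var)

# Skeptical prior: charities are about average (log-ratio 0 vs. the benchmark),
# and large deviations are rare.
prior_mean, prior_sd = 0.0, 0.5

# A naive estimate claims the charity is ~100x the benchmark (log10 = 2),
# but the estimate itself is very noisy.
estimate, estimate_sd = 2.0, 2.0

post_mean, post_sd = posterior(prior_mean, prior_sd, estimate, estimate_sd)
print(f"posterior log10 cost-effectiveness ratio: {post_mean:.2f} ± {post_sd:.2f}")
# With these made-up numbers the posterior lands near 0.12: the noisy "100x"
# claim barely moves a skeptical prior.
```

The point of the toy model is only that a very uncertain but very optimistic estimate gets pulled strongly back toward the skeptical prior, which is the sense in which the prior should hold "until proven otherwise."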
The Karnofsky articles have been responded to, with a rather in-depth follow-up discussion, in this post. It matters little to me whether you consider existential risk charities to defeat expected-value criticisms, because Peter Hurford’s head is not where I need this discussion to play out in order to convince me. At first glance, and after continued discussion, the arguments appear to me incredibly complex, and possibly too complex for many to even consider. In such cases, sometimes the correct answer demonstrates that the experts were overcomplicating the issue. In others, the laymen were overtrivializing it.
Those advocating existential risk reduction often argue as if their cause was unjustified exactly until the arguments started making sense. These arguments tend to be extremely high-volume, and offer different conclusions to different audience members with different background assumptions. For those who have ended up advocating X-risk safety, the argument has ceased to be unclear in the epistemological sense, and its philanthropic value is proven.
I’d like to hear more from you, and to hear arguments laid out for your position in a way that allows me to accept them as relevant to the most weighty concerns of your opponents.
On Criticism of Me
A criticism I have of your posts is that you seem to view your typical audience member as somebody who stubbornly disagrees with your viewpoint, rather than as an undecided voter.
I think I am doing persuasive writing (i.e. advocating for my point of view), but I would model myself as talking to an undecided voter, or at least someone open-minded, not someone stubborn. I’m interested in what in my writing is coming across as indicating I expect a stubborn audience.
More critically, you seem to view yourself as somebody capable of changing the former’s opinion through (very well-written) restatements of the relevant arguments.
I think that’s the case, yes. But I’m not sure they’re restatements so much as a synthesis of many arguments that had not previously been all in one place, plus some arguments that had never before been articulated in writing (as is the case in this piece).
But people like me want to know why previous discussions, even discussions between key players, haven’t yet resolved the issue. The issue should be resolvable, and posts like this suggest to me that at least some players can’t even figure out why it isn’t yet.
It’s difficult to offer an answer to that question. I think one problem is that many of these discussions haven’t (at least as far as I know) taken place in writing yet.
I’d like to hear more from you, and to hear arguments laid out for your position in a way that allows me to accept them as relevant to the most weighty concerns of your opponents.
I’m confused. What’s wrong with how they’re currently laid out? Do you think there are certain arguments I’m not engaging with? If so, which ones?
~
On X-Risk Arguments
At first glance, and after continued discussion, the arguments appear to me incredibly complex, and possibly too complex for many to even consider. In such cases, sometimes the correct answer demonstrates that the experts were overcomplicating the issue. In others, the laymen were overtrivializing it.
I don’t understand what you’re saying here. It sounds like you’re advocating for learned helplessness, but I don’t think that’s the case.
Those advocating existential risk reduction often argue as if their cause was unjustified exactly until the arguments started making sense.
What do you mean? Can you give me an example?
For those who have ended up advocating X-risk safety, the argument has ceased to be unclear in the epistemological sense, and its philanthropic value is proven.
I think that’s equivocating two different definitions of “proven”.
I don’t mean to be antagonistic here, and I apologize for my tone. I’d prefer my impressions to be taken as yet-another-data-point rather than a strongly stated opinion on what your writings should be.
I’m interested in what in my writing is coming across as indicating I expect a stubborn audience.
The highest-rated comment on your vegetarianism post and your response demonstrate my general point here. You acknowledge that the points could have been in your main essay, but your responses explain why you don’t find them to be good objections to your framework. My overall suggestion could be summarized as a plea to take two steps back before making a post, and to fill up content not with arguments, but with data about how people think. Summarize background assumptions and trace them to their resultant beliefs about the subject. Link us to existing opinions by people who you might imagine will take issue with your writing. Preempt a comment thread by considering how those existing opinions would conflict with yours, and decide to find that more interesting than the quality of your own argument.
These aren’t requirements for a good post. I’m not saying you don’t do these things to some extent. They are just things which, if given more focus, would make your posts much more useful to this data point (me).
It’s difficult to offer an answer to that question. I think one problem is that many of these discussions haven’t (at least as far as I know) taken place in writing yet.
That seems initially unlikely to me. What do you find particularly novel about your Speculative Causes post that distinguishes it from previous Less Wrong discussions, where this has been the topic du jour and the crux of whether MIRI is useful as a donation target? Do you have a list of posts that are similar, but which fall short in a way your Speculative Causes post makes up for?
I’m confused. What’s wrong with how they’re currently laid out? Do you think there are certain arguments I’m not engaging with? If so, which ones?
Again, this post seems extremely relevant to your Speculative Causes post. This comment and its child are also well written, and link in other valuable sources. Since AI-risk is one of the most-discussed topics here, I would have expected a higher-quality response than calling the AI-safety conclusion common sense.
Those advocating existential risk reduction often argue as if their cause was unjustified exactly until the arguments started making sense.
What do you mean? Can you give me an example?
Certain portions of Luke’s Story are the best example I can come up with after a little bit of searching through posts I’ve read at some point in the past. The way he phrases it is slightly different from how I have, but it suggests the inferential distance for the AI form of X-Risk might be insurmountably high for those who don’t have a similar “aha.” Quoted from the link:
Good’s paragraph ran me over like a train. Not because it was absurd, but because it was clearly true. Intelligence explosion was a direct consequence of things I already believed, I just hadn’t noticed! Humans do not automatically propagate their beliefs, so I hadn’t noticed that my worldview already implied intelligence explosion.
I spent a week looking for counterarguments, to check whether I was missing something, and then accepted intelligence explosion to be likely.
And Luke’s comment (child of So8res’) suggests his response to your post would be along the lines of “lots of good arguments built up over a long period of careful consideration.” Learned helplessness is the opposite of what I’m advocating. When laymen overtrivialize an issue, they fail to see how somebody who has made it a long-term focus could be justified in their theses.
I think that’s equivocating two different definitions of “proven”.
It is indeed. I was initially going to protest that your post conflated “proven in the Bayesian sense” and “proven as a valuable philanthropic cause,” so I was trying to draw attention to that. Those who think that the probability of AI-risk is low might still think that it’s high enough to overshadow nearly all other causes, because the negative impact is so high. AI-risk would be unproven, but its philanthropic value proven to that person.
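To make that expected-value move concrete, here is a toy comparison. Every figure is hypothetical, chosen only to show the structure of the argument, not to reflect anyone’s actual probability estimates:

```python
# Toy expected-value comparison -- all numbers are hypothetical, picked only
# to show how a low-probability, high-stakes intervention can dominate.

# Conventional charity: fairly certain benefit of modest size.
p_success_conventional = 0.9
value_conventional = 1_000          # e.g. lives improved per $1M (made up)

# X-risk intervention: tiny chance of mattering, astronomical stakes.
p_reduces_risk = 1e-7               # a probability the donor considers "low"
value_if_it_matters = 1e12          # e.g. future lives at stake (made up)

ev_conventional = p_success_conventional * value_conventional
ev_xrisk = p_reduces_risk * value_if_it_matters

print(f"EV conventional: {ev_conventional:,.0f}")   # 900
print(f"EV x-risk:       {ev_xrisk:,.0f}")          # 100,000
# On these made-up numbers the x-risk term dominates even though the
# probability is tiny -- the sense in which someone could hold the risk
# "unproven" yet its philanthropic value "proven" to them, and the move
# Karnofsky's expected-value critique pushes back on.
```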
As comments on your posts indicate, MIRI and its supporters are quite convinced.
The highest-rated comment on your vegetarianism post and your response demonstrate my general point here. You acknowledge that the points could have been in your main essay, but your responses explain why you don’t find them to be good objections to your framework.
I think there’s something to be said for not making the essay too long by analyzing absolutely every consideration that could ever be brought up. There are dozens of additional considerations that I could have elaborated on at length in my essay (the utilitarianism of it, other meta-ethics, free range, whether nonhuman animal lives actually aren’t worth living, the logic of the larder, wild animal suffering, etc.), and it would be impossible to cover them all. Therefore, I preferred to let them come up in the comments.
But generally, should I hedge my claims more in light of more possible counterarguments? Yeah, probably.
~
That seems initially unlikely to me. What do you find particularly novel about your Speculative Causes post that distinguishes it from previous Less Wrong discussions, where this has been the topic du jour and the crux of whether MIRI is useful as a donation target?
I did read a large list of essays in this realm prior to writing this essay. A lot of them played on the decision theory angle and the concern with experts, but none mentioned the potential for biases in favor of x-risk or the history of common sense.
~
a higher-quality response than calling the AI-safety conclusion common sense.
To be fair, the essay did include quite a lot more extended argument than just that. I do agree I could have engaged better with other essays on the site, though. I was mostly concerned with issues of length and amount of time spent, but maybe I erred too much on the side of caution.