I don’t mean to be antagonistic here, and I apologize for my tone. I’d prefer my impressions to be taken as yet-another-data-point rather than a strongly stated opinion on what your writings should be.
I’m interested in what in my writing is coming across as indicating I expect a stubborn audience.
The highest rated comment to your vegetarianism post and your response demonstrate my general point here. You acknowledge that the points could have been in your main essay, but your responses explain why you don’t find them to be good objections to your framework. My overall suggestion could be summarized as a plea to take two steps back before making a post, and to fill it not with arguments, but with data about how people think. Summarize background assumptions and trace them to their resultant beliefs about the subject. Link us to existing opinions by people you imagine might take issue with your writing. Preempt a comment thread by considering how those existing opinions would conflict with yours, and decide to find that more interesting than the quality of your own argument.
These aren’t requirements for a good post. I’m not saying you don’t do these things to some extent. They are just things which, if given more emphasis, would make your posts much more useful to this data point (me).
It’s difficult to offer an answer to that question. I think one problem is many of these discussions haven’t (at least as far as I know) taken place in writing yet.
That seems initially unlikely to me. What do you find particularly novel about your Speculative Cause post that distinguishes it from previous Less Wrong discussions, where this has been the topic du jour and the crux of whether MIRI is useful as a donation target? Do you have a list of posts that are similar, but which fall short in a way your Speculative Cause post makes up for?
I’m confused. What’s wrong with how they’re currently laid out? Do you think there are certain arguments I’m not engaging with? If so, which ones?
Again, this post seems extremely relevant to your Speculative Causes post. This comment and its child are also well written, and link to other valuable sources. Since AI-risk is one of the most-discussed topics here, I would have expected a higher quality response than calling the AI-safety conclusion commonsense.
Those advocating existential risk reduction often argue as if their cause were unjustified exactly until the arguments started making sense.
What do you mean? Can you give me an example?
Certain portions of Luke’s Story are the best example I can come up with after a little bit of searching through posts I’ve read at some point in the past. The way he phrases it is slightly different from how I have, but it suggests inferential distance for the AI form of X-Risk might be insurmountably high for those who don’t have a similar “aha.” Quoted from link:
Good’s paragraph ran me over like a train. Not because it was absurd, but because it was clearly true. Intelligence explosion was a direct consequence of things I already believed, I just hadn’t noticed! Humans do not automatically propagate their beliefs, so I hadn’t noticed that my worldview already implied intelligence explosion.
I spent a week looking for counterarguments, to check whether I was missing something, and then accepted intelligence explosion to be likely.
And Luke’s comment (child of So8res’) suggests his response to your post would be along the lines of “lots of good arguments built up over a long period of careful consideration.” Learned helplessness is the opposite of what I’m advocating. When laymen overtrivialize an issue, they fail to see how somebody who has made it a long-term focus could be justified in their theses.
I think that’s equivocating two different definitions of “proven”.
It is indeed. I was initially going to protest that your post conflated “proven in the Bayesian sense” and “proven as a valuable philanthropic cause,” so I was trying to draw attention to that. Those who think that the probability of AI-risk is low might still think it’s high enough to overshadow nearly all other causes, because the negative impact would be so large. AI-risk would be unproven, but its philanthropic value proven to that person.
As comments on your posts indicate, MIRI and its supporters are quite convinced.
The highest rated comment to your vegetarianism post and your response demonstrate my general point here. You acknowledge that the points could have been in your main essay, but your responses explain why you don’t find them to be good objections to your framework.
I think there’s a danger of making the essay too long by analyzing absolutely every consideration that could ever be brought up. There are dozens of additional considerations I could have elaborated on at length in my essay (the utilitarianism of it, other meta-ethics, free range, whether nonhuman animal lives actually aren’t worth living, the logic of the larder, wild animal suffering, etc.), so it would be impossible to cover them all. Therefore, I preferred to let them come up in the comments.
But generally, should I hedge my claims more in light of more possible counterarguments? Yeah, probably.
~
That seems initially unlikely to me. What do you find particularly novel about your Speculative Cause post that distinguishes it from previous Less Wrong discussions, where this has been the topic du jour and the crux of whether MIRI is useful as a donation target?
I did read a large list of essays in this realm prior to writing mine. A lot of them played on the decision theory angle and the concern with experts, but none mentioned the potential for biases in favor of x-risk or the history of common sense.
~
a higher quality response than calling the AI-safety conclusion commonsense.
To be fair, the essay did include quite a lot more extended argument than just that. I do agree I could have engaged better with other essays on the site, though. I was mostly concerned with issues of length and amount of time spent, but maybe I erred too much on the side of caution.