I imagine (edit: wrongly) it was less “choosing” and more “he encountered the podcast first because it has a vastly larger audience, and had thoughts about it.”
I also doubt “just engage with X” was an available action. The podcast transcript doesn’t mention List of Lethalities, LessWrong, or the Sequences, so how is a listener supposed to find it?
I also hate it when people don’t engage with the strongest form of my work, and I wouldn’t consider myself obligated to respond if they engaged with a weaker form (or, barring additional obligations, even if they engaged with the strongest one). But I think this is just what happens when someone goes on a podcast aimed at audiences that don’t already know them.
I agree with this heuristic in general, but will observe that Quintin’s first post here was over two years ago and that he commented on A List of Lethalities; I do think it’d be fair for him to respond with “what do you think this post was?”
Vaniver is right. Note that I did specifically describe myself as an “alignment insider” at the start of this post. I’ve read A List of Lethalities and lots of other writing by Yudkowsky. Though the post I’d cite in response to the “you’re not engaging with the strongest forms of my argument” claim would be the one where I pretty much did what Yudkowsky suggests:
To grapple with the intellectual content of my ideas, consider picking one item from “A List of Lethalities” and engaging with that.

My post Evolution is a bad analogy for AGI: inner alignment specifically addresses List of Lethalities point 16:

16. Even if you train really hard on an exact loss function, that doesn’t thereby create an explicit internal representation of the loss function inside an AI that then continues to pursue that exact loss function in distribution-shifted environments. Humans don’t explicitly pursue inclusive genetic fitness; outer optimization even on a very exact, very simple loss function doesn’t produce inner optimization in that direction. This happens in practice in real life, it is what happened in the only case we know about…

and then argues that we shouldn’t use evolution as our central example of an “outer optimization criteria versus inner formed values” outcome.
You can also see my comment here for some of what led me to write about the podcast specifically.
Oh yeah, in that case both the complaint and the grumpiness seem much more reasonable.