I think part of my negative reaction to the inferential leap here is the lack of imagination I feel like it exhibits. It feels roughly the same as if I’d heard someone on the street say ‘Scott Sumner wrote a blog post that’s more than three pages long?! That’s crazy. The only possible reason someone could ever want to write something that long is if they have some political agenda they want to push, and they think they can trick people into agreeing by exhausting them with excess verbiage.’
It’s not that people never have political agendas, or never inflate their text in order to exhaust the reader; it’s that your cheater-detectors are obviously way too trigger-happy if your brain immediately confidently updates to ‘this is the big overriding reason’, based on evidence as weak as ‘this blog post is at least four pages long’.
If someone then responds “Sounds plausible, yeah”, then I update from ‘This one person has bad epistemics’ to ‘I have been teleported to Crazytown’.
I do feel like you are speaking too confidently about the author’s epistemic state. I do think the opening sentence, “I have a less charitable description of the links”, weakens the statement here a good amount, and moves it for me more into the “this is one hypothesis that I have” territory, instead of the “I am confidently declaring this to be obviously true” territory.
Hmmm. I’d agree if it said “a less charitable hypothesis about the links” rather than “a less charitable description of the links”. Calling it a “description” makes it sound even more confident/authoritative/objective.
To be clear, I think a comment like this would have been great:
I clicked on your first three references, and in all three cases the details made me a lot more skeptical that this is a plausible way the future could go. Briefly, reference 1 faces problem W; reference 2 faces problems X and Y; and reference 3 faces problem Z. Based on this spot check, I expect the rest of the scenario will similarly fall apart when I pick at the details.
The whole story feels like a Gish gallop to me. Presumably at least part of your goal in telling this story was to make AI doom seem more realistic, but if a lot of the sense of realism rests on you dumping in a river of details that don’t hold up to scrutiny, then the sense of realism is a dishonest magic trick. Better to have just filled your story with technobabble, so we don’t mistake the word salad for a coherent gearsy world-model.
If I were the King Of Karma, I might set the net karma for a comment like that to somewhere between +8 and +60, depending on the quality of the arguments.
I think this would have been OK too:
I clicked on your first three references, and I didn’t understand them. As a skeptic of AI risk, this makes me feel like you’re Eulering me, trying to talk me into worrying about AI risk via unnecessarily jargony and complex arguments. Is there a simpler version of this scenario you could point me to? Or, failing that, could you link me to some resources for learning about this stuff, so I at least know there’s some pathway by which I could evaluate your argument if I ever wanted to sink in the time?
I’d give that something like +6 karma. I wouldn’t want LWers to make a habit of constantly accusing Alignment Forum people of ‘Eulering’ bystanders just because they’re drawing on technical concepts; but it’s at least an honest- and transparent-sounding statement of someone’s perspective, and it gives clear conditions that would let the person update somewhat away from ‘you’re just Eulering me’.
Jiro’s actual comment, I’d probably give somewhere between −5 and −30 karma if I were king.