We need a handy way of saying “Yes I understand the standard arguments for P but I still think it’s worth your while considering this argument for ¬P rather than just telling me the standard arguments for P.”
Agreed. In my experience this problem of standard-argument-affirming shows up a lot during debates about uFAI risks. If I try to suggest some non-obvious argument against the Eliezerian position then I tend to mostly get re-assertions or re-phrasings of the standard Eliezerian arguments, which is distracting and a tad insulting. It seems some people identify me as a mainstream-view-loving enemy who is trying to unfairly marginalize the Eliezerian position, and thus don’t bother to carefully check if my argument might be reasonable on its own terms.
In the last few months I’ve been averaging something like 5 to 10 karma on my anti-Eliezerian AI risk arguments, and I think that’s because I’ve expressed them more clearly and redundantly. But they’re the same arguments that were getting downvoted to −5 or so a year or two ago, when I wasn’t taking special care not to trigger local immune responses. (Weirdly, even saying that I’d spent a year or so with the Visiting Fellows talking to a lot of SingInst people who didn’t think I was clearly stupid or insane didn’t dissuade people from thinking I was clearly mistaken about basic SingInst arguments. I still don’t really understand that… maybe I was interpreted as making an unjustified claim to authority that shouldn’t be taken as evidence, or something.)
The majority of your comments that I’ve downvoted, I downvoted for improper vocabulary. That is, you repurpose words in unconventional ways, which makes your comments extremely difficult, if not impossible, to translate into something I can understand.
Lately, you seem to have been taking more care to use words with their dictionary definitions.
Part of it may be that people know you and know you’re not an idiot.