The cognitive algorithm of “Assume my current agenda is the most important thing, and then execute whatever political strategies are required to protect its social status, funding, power, un-taintedness, &c.” wouldn’t have led us to noticing the alignment problem, and I would be pretty surprised if it were sufficient to solve it (although that would be very convenient).
This seems to be straw-manning Evan. I’m pretty sure he would explicitly disagree with “Assume my current agenda is the most important thing, and then execute whatever political strategies are required to protect its social status, funding, power, un-taintedness, &c.”, and I don’t see much evidence that he is implicitly doing this either. In my own case, I think taintedness is one consideration that I have to weigh among many, and I don’t see what’s wrong with that. Practically, this means trying to find some way of talking about political issues on LW or in an LW-adjacent space while limiting how much the more political discussions taint the more technical discussions, or taint everyone who is associated with LW in some way.
I’m not sure what your own position is on the “should we discuss object-level politics on LW” question. Are you suggesting that we should just go ahead and do it without giving any weight to taintedness? (You didn’t fully spell out the analogy with the Triskaidekaphobic Calculator, but it seems like you’re saying that worrying about taintedness while trying to solve AI safety is like trying to make a calculator while worrying about triskaidekaphobia, so we shouldn’t do that?)
I’ve been saying that I don’t see how things work out well if x-risk / AI safety people don’t get better at thinking about, talking about, and doing politics. I’m doubtful that I’m understanding you correctly, but if I am, I also don’t see how things work out well if LW starts talking about politics without any regard to taintedness and, as a result, x-risk / AI safety becomes politically radioactive to most people who have to worry about conventional politics.
I’m not sure what your own position is on the “should we discuss object-level politics on LW” question. Are you suggesting that we should just go ahead and do it without giving any weight to taintedness?
No. (Sorry, I guess this isn’t clear at all from the post, which was written hastily and arguably should have just been a comment on the “Against Premature Abstraction” thread; I wanted to make it a top-level post for psychological reasons that I probably shouldn’t elaborate on, because (a) you probably don’t care, and (b) they reflect poorly on me. Feel free to downvote if I made the wrong call.)
You didn’t fully spell out the analogy with the Triskaidekaphobic Calculator, but it seems like you’re saying that worrying about taintedness while trying to solve AI safety is like trying to make a calculator while worrying about triskaidekaphobia, so we shouldn’t do that?
More like—we should at least be aware that worrying about taintedness is making the task harder and that we should be on the lookout for ways to strategize around that (e.g., encouraging the use of a separate forum and pseudonyms for non-mathy topics, having “mutual defense pact” norms where curious thinkers support each other rather than defaulting to “Well, you should’ve known better than to say that” victim-blaming, &c.).