I know discussing politics on LW is discouraged, but is voting in elections a viable method of decreasing existential risk by making it more likely that those who are elected will take more action to decrease it? If so, what parties should be voted for? If this isn’t something that should be discussed on LW, just say so and I can make a reddit post on it.
There are already a few good posts about voting on LW; http://lesswrong.com/lw/fao/voting_is_like_donating_thousands_of_dollars_to/ comes to mind, but there are a variety when you search (a rough sketch of that post’s argument is below).
As far as existential risk goes, however, I don’t know whether we have good information about which mainstream (i.e., electable) candidate would decrease risk.
You could also try http://www.omnilibrium.com/index.php, whose participants come largely from LW.
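To make the linked post’s argument concrete: the expected social value of a vote is roughly the probability of casting the deciding vote times the total difference in outcomes between the candidates. A minimal sketch, with purely illustrative numbers of my own rather than figures from the post:

```python
# Back-of-the-envelope expected value of one altruistic vote.
# All numbers are illustrative assumptions, not figures from the linked post.

p_decisive = 1e-7            # rough chance that one vote decides a close, large election
per_capita_benefit = 100     # assumed $/person difference between the better and worse candidate
population = 300_000_000     # people affected by the outcome

expected_value = p_decisive * per_capita_benefit * population
print(f"Expected social value of one vote: ${expected_value:,.0f}")  # -> $3,000
```

Under these assumptions the expected value lands in the thousands of dollars, which is where the post’s title comes from; the result is very sensitive to how close the election is and how much the candidates actually differ.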
Remember that there may still be some value in voting for candidates who aren’t mainstream.
If your goal is to raise awareness of an issue, in most cases writing a well-argued article is going to do much more than giving your vote to a candidate who isn’t mainstream.
There already are well-argued articles; I’m not sure how useful more would be. Perhaps a more accessible version of Existential Risk as a Global Priority would be useful, though.
No.
Why not? I imagine that different political parties have different views on what the government should do about existential risk, and that voting for the ones potentially more willing to decrease it would be beneficial. Currently, it seems like most parties don’t concern themselves at all with existential risk, but perhaps this will change once strong AI becomes less far off.
Actually, no, I don’t think it is true. I suspect that at the moment the views of all political parties on existential risk are somewhere between “WTF is that?” and “Can I use it to influence my voters?”
That may (or may not) eventually change, but at the moment the answer is a clear “No”.
Some parties may be more likely to accelerate scientific progress than others, and those that do could decrease existential risk by decreasing the time spent in high-risk states, for example, the period when there are dangerous nanotechnological weapons but other astronomical objects have not yet been colonized. This probably is not enough to justify voting, but I thought I would just let you know.
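One way to make the “time spent in high-risk states” point concrete (my framing, not the commenter’s): if each year of the risky transition period carries some independent chance of catastrophe, cumulative risk grows with the length of that period, so shortening it helps. A toy calculation with assumed numbers:

```python
# Toy model: cumulative existential risk over a risky transition period.
# The hazard rate and durations below are assumed, illustrative numbers only.

hazard_per_year = 0.001  # assumed 0.1% chance of catastrophe in any given year of the period

def cumulative_risk(years: int) -> float:
    """Probability of at least one catastrophe over the whole period."""
    return 1 - (1 - hazard_per_year) ** years

print(f"100-year risky period: {cumulative_risk(100):.1%}")  # ~9.5%
print(f" 50-year risky period: {cumulative_risk(50):.1%}")   # ~4.9%
```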
Noted. I’ll invest my efforts on x-risk reduction into something other than voting.
Do you? I think most politicians would ask “What do you mean by ‘existential risk’?” if you asked them about it.
Yeah, I suppose you’re right. Still, once something that could pose a large existential risk comes into existence or looks like it will soon come into existence, wouldn’t politicians then consider existential risk reduction? For example, once a group is on the verge of developing AGI, wouldn’t the government think about what to do about it? Or would they still ignore it? Would the responses of different parties vary?
You could definitely be correct, though; I’m not knowledgeable about politics.
Politics is a people sport. Depending on who shapes the party’s policy at the time the topic comes up, the results can turn out very differently.