No.
Why not? I imagine that different political parties have different views on what the government should do about existential risk and voting for the ones that are potentially more willing to decrease it would be beneficial. Currently, it seems like most parties don’t concern themselves at all with existential risk, but perhaps this will change once strong AI becomes less far off.
Actually, no, I don’t think that’s true. I suspect that at the moment the views of all political parties on existential risk are somewhere between “WTF is that?” and “Can I use it to influence my voters?”
That may (or may not) eventually change, but at the moment the answer is a clear “No”.
Some parties may be more likely than others to accelerate scientific progress, and faster progress could decrease existential risk by shortening the time spent in high-risk states: for example, the period when dangerous nanotechnological weapons exist but other astronomical objects have not yet been colonized. This probably is not enough to justify voting, but I thought I would let you know.
Noted. I’ll invest my efforts on x-risk reduction into something other than voting.
Do you? I think most politicians would ask “What do you mean by ‘existential risk’?” if you asked them about it.
Yeah, I suppose you’re right. Still, once something that could pose a large existential risk comes into existence or looks like it will soon come into existence, wouldn’t politicians then consider existential risk reduction? For example, once a group is on the verge of developing AGI, wouldn’t the government think about what to do about it? Or would they still ignore it? Would the responses of different parties vary?
You could definitely be correct, though; I’m not knowledgeable about politics.
Politics is a people sport. Depending on who shapes the party’s policy at the time the topic comes up, the results can turn out very differently.