I think saying that the set of people interested in entertaining the question would lean libertarian is evidence that this does indeed break down along party lines, as I suggested. I find it bizarre to suppose that Republicans or Democrats interested in the Singularity would want to build a singleton without even asking whether that was the right thing to do. But then, I lean libertarian.
You may be right wrt. the Doing It Wrong observation. But it's difficult to say anything meaningful about making friendly AI if we dismiss every approach that is Doing It Wrong, because the set of approaches that are Doing It Right is minuscule or empty.
In a case like that, where you can't find the right way, might it not be valuable to discuss approaches even when they are known to be wrong, in the hope that the analysis generalizes to approaches that are correct?
I think saying that the set of people interested in entertaining the question would lean libertarian is evidence that this does indeed break down along party lines, as I suggested.
No, I think the source of the correlation is merely that entertaining libertarianism and entertaining the possibility of being governed by an AI both require significant willingness to depart from the mainstream. Most people just write them both off as crazy.
Assume that the mainstream will confront the question eventually. What will they decide to do? In other words, can we predict that there is a singleton in our future based on the predominant emotional needs that people express in their choice of political party today? That's my question.
based on the predominant emotional needs that people express in their choice of political party today
Can you really translate emotional needs into future policy? Doesn't it depend on how the policy is framed? In particular, if both sides can produce reasons for a policy (as you say here), then bipartisan support does not seem much more likely to me than the scenario where one side's rhetoric frames the issue and the other side's reasons vanish.
If you don’t think you can do that, I advise you not to go into politics.