Doesn’t the prisoner’s dilemma (especially in the military context) inevitably lead us to further development of AI? If so, it would seem that focusing attention and effort on developing AI as safely as possible is a more practical and worthwhile goal than any attempt to halt such development altogether.