It’s definitely worth considering; but it seems at least intuitively clear that a disposition to negotiate with counterfactual terrorists tends to lead to much greater utility loss than being screwed over now and again by terrorists who are mindlessly destructive irrespective of any gains they could make. I’m not sure exactly what argument would establish that such mindless terrorists are rare; something like Omohundro’s basic AI drives might indicate that Bayesian utility-maximizing superintelligences, at any rate, are unlikely to be stubbornly destructive.
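To make the comparison concrete, here’s a toy expected-loss sketch in Python. The model and every number in it (the fraction of mindless threateners, the costs, the threat rates) are made up purely for illustration; the point is just that the refuser’s disposition comes out ahead whenever mindless threateners are sufficiently rare, which is exactly the premise in question.

```python
def expected_loss(concede: bool, p_mindless: float,
                  concede_cost: float, attack_cost: float,
                  threat_rate_if_conceder: float,
                  threat_rate_if_refuser: float) -> float:
    """Expected loss per unit time under a fixed disposition.

    Conceding pays concede_cost on every threat, and a known conceder
    invites more threats; refusing pays attack_cost only when the
    threatener is mindless (a strategic threatener gains nothing from
    carrying out the threat, so doesn't bother threatening a refuser).
    """
    if concede:
        return threat_rate_if_conceder * concede_cost
    return threat_rate_if_refuser * p_mindless * attack_cost

# Entirely made-up illustrative numbers:
p_mindless = 0.05  # fraction of threateners who destroy regardless of gain
loss_concede = expected_loss(True, p_mindless,
                             concede_cost=10, attack_cost=100,
                             threat_rate_if_conceder=1.0,
                             threat_rate_if_refuser=0.1)
loss_refuse = expected_loss(False, p_mindless,
                            concede_cost=10, attack_cost=100,
                            threat_rate_if_conceder=1.0,
                            threat_rate_if_refuser=0.1)
print(f"always concede: {loss_concede}, always refuse: {loss_refuse}")
# -> always concede: 10.0, always refuse: 0.5
```

On these (again, arbitrary) numbers the refuser loses twenty times less, and the gap only closes if p_mindless gets large or refusal stops deterring strategic threateners.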
(By the way, I like your blog, and am glad to see you posting here on Less Wrong.)