Existentially dangerous doesn’t mean the benefits can’t still outweigh the costs. If there’s a 95% chance that uFAI kills us all, that’s still a whopping 5% chance at unfathomably large amounts of utility. On a straight expected-value calculation, technology still ends up having been a good idea after all.
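To make the implicit arithmetic explicit (a minimal sketch; $U_{\text{win}}$ and $U_{\text{doom}}$ are illustrative placeholders, not estimates from anywhere):

$$\mathbb{E}[U] = 0.05 \cdot U_{\text{win}} + 0.95 \cdot U_{\text{doom}}$$

which is positive exactly when $0.05 \cdot U_{\text{win}} > 0.95 \cdot |U_{\text{doom}}|$, i.e. when the win is worth more than nineteen times the loss. ‘Unfathomably large’ is doing real work in that comparison.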
Each level adds necessary nuance. Unfortunately, each level is also a new chance for unnecessary nuance. Strong epistemic rationality is the only thing that can shoulder the weight of the burdensome details.
Added: Your epistemic rationality is limited by your epistemology. There’s a whole bunch of pretty and convincing mathematics (Cox’s theorem, Dutch book arguments) that says Bayesian epistemology is the Way. We trust in Bayes because we trust in that math: the math shoulders the weight. A question, then. When is Bayesianism not the ideal epistemology? For humans, the answer is ‘limited resources’. But what if you had unlimited resources? At the limit, where doesn’t Bayes hold?
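For concreteness, here is the update rule all that math underwrites, as a minimal Python sketch (the coin hypotheses and likelihood numbers are made-up illustrations, not anything from this post):

```python
def bayes_update(prior, likelihood):
    """Bayes' rule over a discrete hypothesis space.

    prior:      {hypothesis: P(H)}
    likelihood: {hypothesis: P(E | H)}
    Returns     {hypothesis: P(H | E)}.
    """
    # Unnormalized posterior: P(E | H) * P(H) for each hypothesis.
    unnormalized = {h: likelihood[h] * p for h, p in prior.items()}
    # P(E) is the normalizing constant, summed over all hypotheses.
    evidence = sum(unnormalized.values())
    return {h: u / evidence for h, u in unnormalized.items()}

# Illustrative example: two hypotheses about a coin, then we observe heads.
prior = {"fair": 0.5, "biased_to_heads": 0.5}
likelihood = {"fair": 0.5, "biased_to_heads": 0.9}  # P(heads | H)
posterior = bayes_update(prior, likelihood)
# posterior["biased_to_heads"] ~= 0.64: the evidence shifts weight toward bias.
```

Run on those numbers, one heads observation moves P(biased) from 0.5 to roughly 0.64. The normalizing constant P(E) is exactly where the math shoulders the weight: with unbounded hypothesis spaces, computing it is what gets expensive.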