Yeah, so I have mixed feelings about this. One problem with the Knightian uncertainty label is that it implies some level of irreducibility; as Nate points out in the sequence (and James points out below), there are in fact a bunch of ways of reducing it, or dividing it into different subcategories.
On the other hand: this post is mainly not about epistemology, it’s mainly about communication. And from a communication perspective, Knightian uncertainty points at a big cluster of things that make up a large part of what blocks rationalists and non-rationalists from communicating effectively about AI. E.g. as Nate points out:
That said, many of the objections made by advocates of Knightian uncertainty against ideal Bayesian reasoning are sound objections: the future will often defy expectation. In many complicated scenarios, you should expect that the correct hypothesis is inaccessible to you. Humans lack introspective access to their credences, and even if they didn’t, such credences often lack precision.
Most of the advice from the Knightian uncertainty camp is good. It is good to realize that your credences are imprecise. You should often expect to be surprised. In many domains, you should widen your error bars. But I already know how to do these things.
So if you think that Nate and many other rationalists don’t know how to do these things well enough, then you could either debate them about epistemology, or you could say “we have different views about how much you should do this cluster of things that Knightian uncertainty points to; let’s set those aside for now and actually just talk about AI”. I wish I’d had that mental move available to me in my conversations with Eliezer, so that we wouldn’t have gotten derailed into philosophy of science, and so that I’d spent more time curious and less time annoyed at his overconfidence. (And all of that applies orders of magnitude more to mainstream scientists/ML researchers hearing these arguments.)