Yeah, that’s a natural argument. The counterargument which immediately springs to mind is that, until we’ve completely and totally solved biology, there are always going to be some systems we don’t understand yet, and the fact that they haven’t been understood yet does not mean they’re opaque. It boils down to priors: do we have reason to expect large variance in opaqueness from system to system, or low variance?
My own thoughts can be summarized by three main lines of argument:
If we look at the entire space of possible programs, it’s not hard to find things which are pretty darn opaque to humans. Crypto and computational complexity theory provide some degree of foundation for that idea (see the toy sketch after these three points). So human-opaque systems do exist.
We can know that something is non-opaque (by understanding it), but we can’t know for sure that something is opaque. Lack of understanding is Bayesian evidence in favor of opaqueness, but the strength of that evidence depends a lot on who’s tried to understand it, how much effort was put in, what the incentives look like, etc. (there’s a toy numerical illustration of this further down).
I personally have made arguments of the form “X is intractably opaque to humans” about many different systems in multiple different fields in the past (not just biology). In most cases, I later turned out to be wrong. So at this point I have a pretty significant prior against opacity.
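To make the first point a bit more concrete, here’s a minimal sketch (my own toy example, not anything from the book): a program whose interesting behavior is hidden behind a cryptographic hash. The whole thing is a few lines of readable source, yet predicting which input triggers the special branch is believed to require inverting SHA-256.

```python
import hashlib
import secrets

# Toy example of an "opaque" program. Imagine someone picks a secret trigger
# string, publishes only its hash, and discards the secret; here we just
# generate it on the spot to keep the sketch self-contained.
_secret_trigger = secrets.token_hex(16)
TARGET_DIGEST = hashlib.sha256(_secret_trigger.encode()).hexdigest()

def mystery(x: str) -> str:
    """Return 'special' on exactly one input and 'ordinary' on everything else.

    Reading the source tells you that a special input exists, but figuring out
    *which* input it is (i.e. actually understanding the program's behavior)
    is believed to require inverting SHA-256, which is computationally
    intractable as far as anyone knows.
    """
    if hashlib.sha256(x.encode()).hexdigest() == TARGET_DIGEST:
        return "special"
    return "ordinary"

print(mystery("some guess"))  # almost certainly 'ordinary'
```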
So I’d say the book provides evidence in favor of a generally-low prior on opaqueness, but the main type of evidence about the opacity of any particular system is people trying and failing to understand it.
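To put some illustrative numbers on that (the specific probabilities here are made up purely for the sake of the example): the observation “nobody has figured this system out yet” moves the posterior on opaqueness only a little if a transparent system would probably still be un-understood anyway, and a lot if serious, well-incentivized effort would probably have cracked a transparent system by now.

```python
def posterior_opaque(prior_opaque: float,
                     p_fail_given_opaque: float,
                     p_fail_given_transparent: float) -> float:
    """P(opaque | nobody has understood it yet), by Bayes' rule."""
    p_fail = (p_fail_given_opaque * prior_opaque
              + p_fail_given_transparent * (1 - prior_opaque))
    return p_fail_given_opaque * prior_opaque / p_fail

prior = 0.2  # made-up prior that the system is opaque

# Little serious effort so far: a transparent system would very likely still be
# un-understood, so the observed failure barely moves the needle.
print(posterior_opaque(prior, 1.0, 0.9))  # ~0.22

# Decades of expert, well-incentivized effort: a transparent system would
# probably have been cracked by now, so the same failure is much stronger evidence.
print(posterior_opaque(prior, 1.0, 0.1))  # ~0.71
```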
Unfortunately, these are all outside-view arguments. I do think an inside view is possible here—it intuitively feels like the kinds of systems which turned out not to be opaque (e.g. biological circuits) have visible, qualitative differences from the kinds of systems which we have some theoretical reasons to consider opaque (e.g. pseudorandom number generators). They’re the sort of systems people call “emergent”. Problem is, we don’t have a useful formalization of that idea, and I expect that figuring it out requires solving a large chunk of the problems in the embedded agency cluster.
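To gesture at that qualitative difference with a toy contrast (my own illustration, not a formalization): one of Alon’s network motifs decomposes into parts with individually legible roles, while something like a xorshift state-mixer (a stand-in here for the cryptographically strong generators the theoretical arguments actually apply to) is built precisely so that no such decomposition helps.

```python
def coherent_ffl(x_active: bool, y_active: bool) -> bool:
    """Boolean caricature of a coherent feed-forward loop (an Alon-style motif):
    X activates Y slowly, and the output Z fires only when X AND Y are both on,
    so Y acts as a persistence filter on X. Each piece has a legible role, and
    the circuit decomposes into human-sized parts."""
    return x_active and y_active

def xorshift64(state: int) -> int:
    """One step of Marsaglia's xorshift64. The shifts and XORs are chosen to mix
    the state bits as thoroughly as possible; there is no decomposition into
    parts with individually meaningful roles, and 'understanding' it mostly
    means re-deriving the bit-mixing algebra."""
    state ^= (state << 13) & 0xFFFFFFFFFFFFFFFF
    state ^= state >> 7
    state ^= (state << 17) & 0xFFFFFFFFFFFFFFFF
    return state
```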
When I get around to writing a separate post on Alon’s last chapter (evolution of modularity), that will include some additional relevant insight to the question.