I’d also like to hear more about why “evolution” is modeled as HAVING a utility function, rather than just being the name we give to the results of variation and selection, and only then a discussion of what that function might be.
I don’t see how decision theory or VNM rationality applies to evolution, let alone what “success” might mean for a species as opposed to an individual conscious entity with semi-coherent goals.
The entire argument of the sharp left turn presupposes evolution has a utility function for the analogy to work, so arguing about that is tangential, but it’s pretty obvious you can model genetic evolution as optimizing for replication count (or inclusive fitness over a species). We already have concrete computational models of genetic optimization, so there is really no need to bring in rationality or agents; it’s just a matter of optimization functions.
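To make the "just an optimization function" point concrete, here is a minimal sketch of the kind of computational model I mean: a toy genetic algorithm where the objective is simply a fitness score driving replication counts. All names and parameters are illustrative assumptions, not anyone's canonical model.

```python
# Toy genetic algorithm: "evolution" as nothing more than an optimization
# function plus variation and selection. No agents, beliefs, or VNM
# preferences appear anywhere in the loop.
import random

GENOME_LEN = 20
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.01

def fitness(genome):
    # Stand-in "optimization function": count of 1-bits, a proxy for how
    # strongly this genome promotes its own replication.
    return sum(genome)

def mutate(genome):
    # Variation: flip each bit independently with small probability.
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in genome]

def select(population):
    # Selection: replication count is proportional to fitness.
    weights = [fitness(g) + 1e-9 for g in population]
    return random.choices(population, weights=weights, k=POP_SIZE)

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population = [mutate(g) for g in select(population)]

print("best fitness:", max(fitness(g) for g in population))
```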
I think there’s a terminological mismatch here. Dagon was asking about a “utility function” as specifically being something satisfying the VNM axioms. But I think you’re using it (in this comment and the one Dagon was replying to) synonymously with the more general concept of an “optimization function”, i.e. a function returning some output that somehow gets optimized for?