I think the gradient descent bit is spot on. That also looks like the flavour of natural selection, with non-infinitesimal (but really small) deltas. Natural selection consumes a proof that a particular δx (mutation) produces δf (fitness) in order to generate/propagate/multiply that δx.
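To make the analogy concrete, here is a minimal toy sketch (my own illustration with a made-up fitness landscape, not anything from the post): explicit gradient ascent on a fitness function f side by side with a mutate-and-keep-if-it-helped loop, where the only 'proof' that a δx produces a positive δf is the before/after fitness comparison.

```python
# Toy comparison: gradient ascent vs. selection over small, finite mutations.
# Hypothetical example; f, sigma, lr, and step counts are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Toy smooth fitness landscape with a single optimum at x = 3.
    return -np.sum((x - 3.0) ** 2)

def grad_f(x):
    return -2.0 * (x - 3.0)

def gradient_ascent(x, lr=0.05, steps=200):
    # The infinitesimal-delta version: follow the exact gradient.
    for _ in range(steps):
        x = x + lr * grad_f(x)
    return x

def mutate_and_select(x, sigma=0.05, steps=2000):
    # The natural-selection flavour: propose a small finite δx and keep it
    # only if it demonstrably improved fitness (the "proof" that δx gives δf > 0).
    for _ in range(steps):
        dx = rng.normal(0.0, sigma, size=x.shape)
        if f(x + dx) > f(x):
            x = x + dx
    return x

x0 = np.zeros(3)
print(gradient_ascent(x0.copy()))    # ≈ [3, 3, 3]
print(mutate_and_select(x0.copy()))  # also ≈ [3, 3, 3], via small finite deltas
```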
I recently did some thinking about this and found an equivalence proof, under certain conditions, between the natural-selection case and the gradient-descent case.
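For flavour, here is the shape such equivalences tend to take (a standard Price-equation gloss under a local-linearity assumption, not a statement of the exact conditions): if fitness $f$ is locally linear in the trait $x$ around the population mean and mutation contributes trait variance $\sigma^2$, then

$$\Delta \bar{x} \;=\; \frac{\operatorname{Cov}(f, x)}{\bar{f}} \;\approx\; \frac{\sigma^2}{\bar{f}}\,\nabla f(\bar{x}),$$

i.e. the expected update is gradient ascent on $f$ with an effective step size set by the mutation scale.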
In general, I think the type signature here can indeed be soft or fuzzy or lossy and you still get consequentialism, and the ‘better’ the fidelity, the ‘better’ the consequentialism.
This post has also inspired some further thinking, conversations, and refinement about the type of agency/consequentialism, which I'm hoping to write up soon. A succinct intuitionistic-logic-flavoured summary is something like (∃A.A→B)→A, but there's obviously more to it than that.