Nice, I’d read the first but didn’t realise there were more. I’ll digest later.
I think agents vs optimisation is definitely reality-carving, but I’m not sure I see the point about utility functions and preference orderings. I assume the idea is that an optimisation process just moves the world towards certain states, whereas an agent tries to move the world towards certain states, i.e. chooses actions based on how much they move the world towards those states, so it makes sense to quantify how much of a weighting each state gets in its decision-making. But it’s not obvious to me that there isn’t a meaningful way to assign weightings to states for an optimisation process too. For example, if a ball rolling down a hill gets stuck in the large hole twice as often as in the medium hole and ten times as often as in the small hole, maybe it makes sense to quantify that with something like a utility function. Although defining a utility function from the system’s typical behaviour and then trying to measure its optimisation power against it gets a bit circular.
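To make the ball example concrete, here’s a minimal sketch in Python. Everything in it is a made-up illustrative choice (the one-dimensional landscape, hole positions and widths, noise level, step counts); it just simulates a ball rolling noisily downhill, counts which hole captures it, and reads utility-like weights off the capture frequencies:

```python
import math
import random
from collections import Counter

# Toy landscape: three Gaussian "holes" of equal depth but different widths.
# All positions, widths, and noise parameters are arbitrary illustrative
# assumptions, not anything from the discussion above.
MINIMA = {"small": -6.0, "medium": 0.0, "large": 6.0}
WIDTHS = {"small": 0.8, "medium": 1.2, "large": 2.5}

def grad_potential(x):
    """Gradient of V(x) = -sum_i exp(-(x - mu_i)^2 / (2 w_i^2))."""
    return sum((x - mu) / WIDTHS[n] ** 2
               * math.exp(-((x - mu) ** 2) / (2 * WIDTHS[n] ** 2))
               for n, mu in MINIMA.items())

def roll_ball(x0, steps=1500, lr=0.5, noise=0.3):
    """A 'ball' doing noisy gradient descent; returns the nearest hole at the end."""
    x = x0
    for _ in range(steps):
        x += -lr * grad_potential(x) + random.gauss(0.0, noise)
    return min(MINIMA, key=lambda n: abs(x - MINIMA[n]))

random.seed(0)
counts = Counter(roll_ball(random.uniform(-10.0, 10.0)) for _ in range(500))
total = sum(counts.values())

# Read the capture frequencies off as utility-like weights (log-frequency,
# so frequency ratios become utility differences).
for name, c in counts.most_common():
    print(f"{name:6s}  freq={c / total:.2f}  'utility'={math.log(c / total):+.2f}")
```

The widest hole gets caught most often and so comes out with the highest “utility”, and the circularity is right there in the construction: the weights are defined by the very behaviour they’re then used to evaluate.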
Anyway, the dynamical systems approach seems good. Have you stopped working on it?
Mostly it’s that I’ve found that, while trying to understand optimization, I’ve never needed to put “weights” on the ordering. (Of course, you could always map your ordering onto a monotonically increasing function.)
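For what it’s worth, that parenthetical is easy to see concretely. Here’s a tiny sketch (the states and the particular increasing map are arbitrary examples): composing the ordering’s rank with any strictly increasing function gives a “utility function” that carries no information beyond the ordering itself.

```python
# States listed from worst to best; the names are arbitrary examples.
states_worst_to_best = ["ball on hillside", "ball in small hole",
                        "ball in medium hole", "ball in large hole"]

rank = {s: i for i, s in enumerate(states_worst_to_best)}

def utility(s, f=lambda r: 10 * r ** 2 + 3):  # any strictly increasing f works
    return f(rank[s])

# Both representations induce the same ordering: comparisons always agree.
a, b = "ball in small hole", "ball in large hole"
assert (utility(a) < utility(b)) == (rank[a] < rank[b])
print(sorted(states_worst_to_best, key=utility) == states_worst_to_best)  # True
```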
I think the concept of “trying” mostly dissolves under the kind of scrutiny I’m trying to apply. Or rather, to well-define “trying”, you need a whole bunch of additional machinery that just makes it a different thing than (my concept of) optimization, and that’s not what I’m studying yet.
I’ve also been working entirely in deterministic settings, so there’s no sense of “how often” a thing happens, just a single trajectory. (This also differentiates my thing from Flint’s.)
I haven’t stopped working on the overall project. I do seem to have stopped writing and editing that particular sequence, though. I’m considering totally changing the way I present the concept (such that the current Intro post would be more like a middle post), so I decided to just pull the trigger on publishing the current state of it. I’m also trying to get more actual formal results, which is more about stuff from the end of that sequence. But I’m pretty behind on formal training, so I’m also trying to generally catch up on math.