Thanks! Excellent point about the connection to the trajectory of agent selection pressure.
I don’t remember what led me to this idea in particular. I’ve been influenced by thinking a lot about agent foundations and metaethics, and by noticing the ways in which humans don’t seem to be well modelled as utility maximizers, or even as any sort of rational goal-directed agents with stable goals. I also read the book “The Meme Machine” and liked it, though that was after writing this post, not before, IIRC.
I don’t know what you mean by fixed points in policy. Elaborate?
I might have slightly abused the term “fixed point” and been unnecessarily wordy.
I mean that while I don’t see how memes can change an agent’s objectives in any fundamental way, memes do influence how those objectives get maximized. The low-level objectives are the same, yet the policies that implement them differ because the agents received different memes. I think it’s vaguely like an externally installed bias.
Ex: humans all crave social connection, but people model their relationship with society and interpret that desire differently, partly depending on cultural upbringing (memes).
I don’t know whether higher intelligence/being more rational/coherent cancels out these effects, e.g. a smarter version of the agent now reasons more generally over all possible policies, finds an ‘optimal’ way to realize a given objective, and is no longer steered by memes/biases. Though I think such convergence is less likely in open-ended tasks, because the current space of policies is built on solutions and tools built earlier and is highly path-dependent in general. So memes early on might matter more for open-ended tasks.
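To make the intuition concrete, here’s a toy sketch (my own illustration; the policy values, the meme_bias vector, and the softmax/Boltzmann-rationality setup are all made-up assumptions, not anything from the post): several policies serve the same objective, a meme shows up as a prior bias over them, and as the rationality parameter grows the meme’s influence washes out.

```python
import numpy as np

# Hypothetical numbers: objective value of three policies A, B, C that all
# serve the same low-level objective.
true_value = np.array([1.0, 0.8, 0.5])
# Meme modelled as an externally installed bias toward policy B.
meme_bias = np.array([0.0, 2.0, 0.0])

def policy_distribution(beta):
    """Boltzmann-rational choice: beta = how hard the agent optimizes."""
    logits = beta * true_value + meme_bias
    p = np.exp(logits - logits.max())  # numerically stable softmax
    return p / p.sum()

for beta in [0.1, 1.0, 10.0, 100.0]:
    print(f"beta={beta:>5}: P(A,B,C) = {policy_distribution(beta).round(3)}")

# At low beta the meme dominates and B is favoured; at high beta the agent
# converges on A, the objectively best policy, regardless of the meme.
```

Of course this ignores the path-dependence point: if later policies can only be built out of whichever policies got adopted early on, the bias wouldn’t wash out this cleanly.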
I’m also thinking about agent foundations atm, and am also confused about the generality of the utility-maximizer frame. One simple answer to why humans don’t fit the frame is “humans aren’t optimizing hard enough (so haven’t shown convergence in policy)”. But this answer doesn’t clarify “what happens when agents aren’t as rational/hard-optimizing”, “the dynamics and preconditions under which agents-in-general become more rational/coherent/utility-maximizing”, etc., so I’m not happy with my state of understanding on this matter.
The book looks cool, will read soon, TY!
(btw this is my first interaction on lw so it’s cool :) )