Thanks for these last three posts!
Just sharing some vibe I’ve got from your… framing!
Minimalism ~ path ~ inside-focused ~ the signal/reward
Maximalism ~ destination ~ outside-focused ~ the world
These two opposing aesthetics are a well-known source of confusion in agent-foundations-style research. The classical way to model an agent is to treat it as maximizing variables in the outside world. Conversely, we can think of the minimalism ~ inside-focused side (a reward-hacking type of error) as a drug addict accomplishing “nothing”.
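A toy sketch of that distinction, in TypeScript since that’s the language mentioned elsewhere in the thread (the world model, action names, and scoring are all made up for illustration, not taken from any particular agent-foundations formalism): the outside-focused agent ranks actions by their effect on a world variable, while the inside-focused agent ranks them by the signal it receives, so tampering with its own sensor (the drug-addict move) wins even though nothing in the world improves.

```typescript
// Hypothetical toy world: one external variable we actually care about, plus a
// reward sensor that is supposed to track it but can be tampered with.
type World = { resources: number; sensorBias: number };
type Action = "work" | "tamperWithSensor" | "idle";

function step(w: World, a: Action): World {
  switch (a) {
    case "work":
      return { ...w, resources: w.resources + 1 };
    case "tamperWithSensor":
      return { ...w, sensorBias: w.sensorBias + 10 };
    case "idle":
      return w;
  }
}

// The internal signal the agent actually receives.
const rewardSignal = (w: World): number => w.resources + w.sensorBias;

// Outside-focused ("maximalism ~ the world"): ranks actions by the world variable itself.
function outsideFocusedChoice(w: World, actions: Action[]): Action {
  return actions.reduce((best, a) =>
    step(w, a).resources > step(w, best).resources ? a : best
  );
}

// Inside-focused ("minimalism ~ the signal/reward"): ranks actions by its own signal,
// so tampering with the sensor (the drug-addict move) comes out on top.
function insideFocusedChoice(w: World, actions: Action[]): Action {
  return actions.reduce((best, a) =>
    rewardSignal(step(w, a)) > rewardSignal(step(w, best)) ? a : best
  );
}

const w0: World = { resources: 0, sensorBias: 0 };
const allActions: Action[] = ["work", "tamperWithSensor", "idle"];
console.log(outsideFocusedChoice(w0, allActions)); // "work"
console.log(insideFocusedChoice(w0, allActions));  // "tamperWithSensor"
```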
It feels like there is also something to say about dopamine vs. serotonin/homeostasis, maybe even deontology vs. consequentialism, and I guess these two clumsy clusters mirror each other in some way (they feel isomorphic up to flipping the sign). I’ll keep thinking about it.
As an aside: I’m French too, and was surprised I’m supposed to yuck at the maximalist aesthetic, but indeed it’s consistent with my reaction reading you on TypeScript, and with my K-type brain. Anecdotally, not with my love for spicy/rich foods ^^'
I see what you’re pointing out, but in my head, the minimalism and maximalism I’ve discussed both allow for quick feedback loops, which is generally the way to go for complex stuff. The tradeoff lies more in some fuzzy notion of usability:
With the minimalist approach, you can more easily iterate in your head, but you need to do more work to lift the basic concepts up to the potentially trickier abstractions you’re trying to express.
With the maximalist approach, you get affordances that are eminently practical, so many of your needs are solved almost instantly; but you need much more expertise and mental effort to simulate in your head what’s going to happen in edge cases, as the sketch below tries to illustrate.
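A throwaway TypeScript sketch of that tradeoff (the example is mine, not from the post): the minimalist route builds the ordering from an explicit primitive you can simulate in your head, while the maximalist route reaches for the zero-argument affordance that solves the need instantly, provided you already know its edge case, namely that the default comparator compares elements as strings.

```typescript
const xs = [4, 33, 10, 2];

// Minimalist route: supply the comparator yourself. A bit more work, but every
// step of the ordering is something you can simulate in your head.
const numericAscending = [...xs].sort((a, b) => a - b); // [2, 4, 10, 33]

// Maximalist route: the ready-made affordance answers the need "instantly",
// but it bites unless you know its edge case: with no comparator, elements
// are compared as strings, so 10 sorts before 2.
const lexicographic = [...xs].sort(); // [10, 2, 33, 4]

console.log(numericAscending, lexicographic);
```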
I’m obviously memeing a bit, but the real pattern I’m pointing at is “French engineering school education”, which you also have, rather than mere Frenchness.