Oh, man, yes, I hadn’t seen that post before and it is an awesome post and concept. I think maybe “believing in”s, and prediction-market-like structures of believing-ins, are my attempt to model how Steam gets allocated.
Several disjointed thoughts, all exploratory.

I have the intuition that “believing in”s are what allocating Steam feels like from the inside, or that they are the same thing.
> C. “Believing in”s should often be public, and/or be part of a person’s visible identity.
This makes sense if “believing in”s are useful for intra- and inter-agent coordination: they are the thing people accumulate in order to go on stag hunts together. Coordinating with your future self, in this framework, requires the same resource as coordinating with other agents who are similar to you along the relevant axes, whether right now or across time.
Steam might be thought of as a scalar quantity assigned to an action or plan, which changes depending on whether that action is being executed or not. Steam is necessarily distinct from probability or utility: if you start making predictions about your own future actions, your belief-estimation process (assuming it has some influence on your actions) has a fixed point in which it predicts that the action will not be carried out and then intervenes to prevent the action from being carried out. There is also another fixed point in which the agent is maximally confident that it will do something and then just does it, but then it can’t be persuaded not to do it.[1]
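As a toy formalization of that fixed-point argument (my own sketch; the threshold policy is an added assumption, not something from the original post): write $p$ for the agent’s credence that it will take action $A$, and suppose its policy is to take $A$ exactly when $p > \theta$. A calibrated self-prediction $p^*$ then has to satisfy

$$p^* = \Pr[A \text{ is taken} \mid \text{credence } p^*] = \begin{cases} 1 & \text{if } p^* > \theta \\ 0 & \text{if } p^* \le \theta, \end{cases}$$

whose only solutions are $p^* = 0$ and $p^* = 1$. The belief dynamics alone cannot choose between the two, which is the gap a separate quantity like steam can fill.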
As stated in the original post, steam helps solve the procrastination paradox. I have the intuition that one can relate the changes in steam, utility, and probability to each other. Assuming utility is high:
If actions/plans are performed, their steam increases
If actions/plans are not performed, steam decreases
If steam decreases slowly and actions/plans are executed, increase steam
If steam decreases quickly and actions/plans are not executed, decrease steam even more quickly(?)
If actions/plans are completed, reduce steam
If utility decreases a lot, steam only decreases a little (hence things like sunk costs). Differential equations look particularly useful for talking more rigorously about this kind of thing; the sketch below gestures at what a discrete-time version might look like.
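Here is a minimal discrete-time sketch in Python of the update rules listed above. All of the constants, the clamping of steam to [0, 1], and the exact functional forms are my own assumptions, chosen only to respect the qualitative directions of the rules.

```python
# Toy discrete-time steam dynamics for a single action/plan.
# Every constant and functional form here is an illustrative assumption.

def update_steam(steam: float, executed: bool, completed: bool,
                 utility_change: float, dt: float = 1.0) -> float:
    """Return the next steam value (kept in [0, 1]) for one action/plan."""
    if completed:
        # Completed plans shed most of their steam.
        return steam * 0.2

    if executed:
        # Working on the plan pumps steam up.
        steam += 0.3 * dt
    else:
        # Neglecting the plan lets steam leak away, and the leak gets
        # faster the lower steam already is ("decrease even more quickly").
        steam -= (0.1 + 0.2 * (1.0 - steam)) * dt

    # Utility losses feed back only weakly (sunk-cost-like behaviour:
    # a large drop in utility costs only a little steam).
    if utility_change < 0:
        steam += 0.05 * utility_change

    return max(0.0, min(1.0, steam))


# Example: a plan that is worked on for four steps and then abandoned,
# with a utility hit at step 6.
steam = 0.5
for step in range(10):
    steam = update_steam(steam, executed=(step < 4), completed=False,
                         utility_change=-1.0 if step == 6 else 0.0)
    print(step, round(steam, 3))
```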
Steam might also be related to how cognition on a particular topic gets started in the first place: it avoids the infinite regress problem of deciding what to think about, deciding how to think about what to think about, and so on.
For each “category” of thought we have some steam, which is adjusted as we observe our own previous thoughts, our beliefs changing, and our values being expressed. So we don’t just think the thoughts that are highest-utility to think in expectation; we think the thoughts that are highest in steam, where steam is allocated depending on the change in probability and utility (a sketch of this selection rule follows below).
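A small sketch of that selection rule, again with everything (the categories, the weights, the update) being illustrative assumptions rather than anything from the original post:

```python
# Attention goes to the highest-steam category, not the highest
# expected-utility one; steam is reallocated from *changes* in
# probability and utility, not from their absolute levels.

from dataclasses import dataclass


@dataclass
class Topic:
    name: str
    steam: float        # current steam for thinking about this topic
    probability: float  # current credence that the associated plan works out
    utility: float      # current estimated utility of the plan


def think_about(topics: list[Topic]) -> Topic:
    # Selection is by steam, which need not track expected utility.
    return max(topics, key=lambda t: t.steam)


def observe(topic: Topic, new_probability: float, new_utility: float,
            w_p: float = 0.5, w_u: float = 0.1) -> None:
    # Steam moves with the change in probability and utility.
    topic.steam += w_p * (new_probability - topic.probability)
    topic.steam += w_u * (new_utility - topic.utility)
    topic.probability, topic.utility = new_probability, new_utility


topics = [Topic("write email", steam=0.6, probability=0.4, utility=2.0),
          Topic("plan trip", steam=0.3, probability=0.7, utility=5.0)]
current = think_about(topics)   # "write email", despite its lower utility
observe(current, new_probability=0.55, new_utility=2.0)  # progress adds steam
```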
Steam or “believing in” seems to be bound up with abstraction à la teleosemantics: when thinking or acting, steam decides where thoughts and actions are directed so as to create higher clarity on symbolic constructs or plans. I’m especially thinking of the email-writing example: there is a vague notion of “I will write the email”, into which cognitive effort needs to be invested to crystallize the purpose, and then further effort has to be summoned to actually flesh out all the details.
[1] This is not quite true; a better model would be that the agent discontinuously switches when the badness of the prediction being wrong is outweighed by the badness of not doing the thing.
I think this comment is great and worthy of being expanded into a post.