It seems like you’re considering the changes in actions or information-theoretic surprisal, and I’m considering impact to the taxi driver. It’s valid to consider how substantially plans change, it’s just not the focus of the sequence.
I thought that “impact” was the word for that. What is there left of the focus of the sequence if you take “life-changes” away from that?
You think or would say there is no impact for the taxi driver?
I assert that we feel impacted when we change our beliefs about how well we can get what we want. Learning the address does not affect their attainable utility, so (when I simulate this experience) it doesn’t feel impactful in this specific way. It just feels like learning something.
Is this engaging with what you have in mind by “life-changes”?
I would have agreed with “how we can get what we want” but “how well we can get what we want” kind of specifies that it is a scalar quantity.
Utility functions can be constructed from, or translated to and from, choice rankings. There can be no meaningful utility change unless it can be understood in terms of choices.
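A minimal sketch of the point above (my own toy example, not from the sequence): an ordinal utility function can be read off a choice ranking, and any order-preserving rescaling of it leaves every choice unchanged, so the numbers only mean something through the choices they induce.

```python
# Build an ordinal utility function from a best-to-worst ranking of
# outcomes, then check that a monotone rescaling changes no choices.

def utility_from_ranking(ranking):
    """Assign utilities by rank: best outcome gets the highest number."""
    n = len(ranking)
    return {outcome: n - i for i, outcome in enumerate(ranking)}

def choose(options, u):
    """Pick the available option with the highest utility."""
    return max(options, key=lambda o: u[o])

ranking = ["beach", "cinema", "work"]
u = utility_from_ranking(ranking)        # beach=3, cinema=2, work=1
v = {o: 10 * u[o] + 5 for o in u}        # monotone transformation of u

# The "utility change" from u to v is invisible at the level of choices:
assert choose(["cinema", "work"], u) == choose(["cinema", "work"], v)
assert choose(["beach", "work"], u) == choose(["beach", "work"], v)
```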
Impact as a primitive feeling feels super weird. I get that it has something to do with the idiom “fuck my life”. However, there is another idiom, “This is my life now”, which better captures a quality change that is not necessarily a move up or down.
There is a “so” word that would suggest theoretical implication, but the references to simulated experience and feeling seem like callbacks to imagined emotions. Does either or both apply?
I am also confused about what the relationship between expected utility and attainable utility is supposed to be. If you expect to maximise, they should be pretty close.
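A toy model of the distinction being asked about (my own illustration, assuming “attainable utility” means the best payoff reachable from here and “expected utility” the payoff of the plan currently being followed): for an optimal planner the two coincide, but only attainable utility distinguishes merely learning the address from losing an option.

```python
# Toy taxi-driver model: utilities attached to hypothetical plans.

def attainable_utility(plans):
    """Best payoff reachable: max over all available plans."""
    return max(plans.values())

def expected_utility(plans, current_plan):
    """Payoff of the plan the agent is currently following."""
    return plans[current_plan]

plans = {"route_A": 10, "route_B": 9}

# An optimal planner's expected utility equals its attainable utility:
assert expected_utility(plans, "route_A") == attainable_utility(plans)

# Learning the address just swaps which plan is best; the best reachable
# payoff is unchanged, so on this account it is not impactful:
after_learning = {"route_A": 9, "route_B": 10}
assert attainable_utility(after_learning) == attainable_utility(plans)

# Losing an option lowers attainable utility -- that registers as impact:
after_closure = {"route_B": 9}
assert attainable_utility(after_closure) < attainable_utility(plans)
```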
I think I might be experiencing goal-directed behaviour very differently on the inside, and I am unsure how much of the terminology is supposed to be abstract math concepts and how much of it is supposed to be emotional language. It might be that for other people there is a more natural link between being in a low- or high-utility state and feeling low or high.
I am now suspecting it has less to do with “objective life” and more with “subjective life”, or life-as-experienced, which suggests the approach uses a different kind of ontology.
The sequence uses emotional language (so far), as it’s written to be widely accessible. I’m extensionally defining what I’m thinking of and how that works for me. These intuitions translated for the 20 or so people I showed the first part of the sequence, but minds are different and it’s possible it doesn’t feel the same for you. As long as the idea of “how well the agent can achieve their goals” makes sense and you see why I’m pointing to these properties, that’s probably fine.
Great catch, covered two posts from now.