I’m somewhat confused about whether you are claiming something other than Friston’s notion that everything the brain does can be described as minimizing free energy/prediction error, that this is important for understanding what human values are, and that it needs to be understood for AI alignment purposes.
If so, it sounds close to a restatement of my ‘best guess of how minds work’ with some, in my opinion, unhelpful simplifications:
- ignoring the signal inserted into predictive processing via interoception of bodily states, which is actually an important part of the picture,
- ignoring the emergent ‘agenty’ properties of evolutionarily encoded priors,
- and calling the result a theory of human values.
(I’m not sure how to state it positively, but I think it would be great if at least one person from the LW community bothered to actually understand my post, in the sense of “understanding each sentence”.)
FWIW I’m not actually sure this is possible without you writing a sequence explaining the model. There are too many sentences loaded with inferential distance that I couldn’t cross, and I didn’t know the relevant places to start in order to cross them.
It looks like I read your post but forgot about it. I’ll have to look at it again.
I am building this theory in a way that I think is highly compatible with Friston, although I also don’t have a gears-level understanding of Friston, so I find it easier to think in terms of control systems which appear to offer an equivalent model to me.
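To make the control-systems framing concrete, here is a minimal sketch of my own (not from Friston or the original post, and all names and parameters are illustrative): a simple negative-feedback controller can be read as a system that acts to shrink the gap between a set point, playing the role of a prediction or prior, and what it actually observes.

```python
# Minimal illustration (hypothetical example): a negative-feedback controller
# viewed as a prediction-error minimizer. The set point stands in for a
# "prediction"; acting on the world drives the observed state toward it.

def control_step(observation: float, set_point: float, gain: float = 0.1) -> float:
    """Return an action proportional to the prediction error (set_point - observation)."""
    prediction_error = set_point - observation
    return gain * prediction_error

# Toy world where the action directly nudges the observed state.
state = 0.0
set_point = 1.0  # the state the system "expects" / is set to maintain
for _ in range(50):
    action = control_step(state, set_point)
    state += action  # acting on the world closes the gap

print(round(state, 3))  # approaches 1.0, i.e. the error shrinks over time
```

In this toy picture, "minimizing prediction error" and "keeping a controlled variable at its reference value" are two descriptions of the same loop, which is the sense in which the two models look equivalent to me.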
(My sense was that Abram engaged pretty extensively with the post, though I can’t fully judge since I’ve historically bounced off of a lot of the predictive processing stuff, including your post)