you probably need to understand predictive processing better than what you get from reading the SSC article
I’m a bit confused, then, that the SSC article is your citation for this concept. Did you just read the SSC article? Or, if you didn’t, could you link to the things you did read? Also, writing a post that assumes this concept, when no sufficient explanation of it exists on the web or in the community, seems suboptimal; maybe consider writing that explanatory post first. Then again, maybe you were trying to make a more general point about brains not being agents, in which case you could factor out the predictive processing concept and give a different example of a brain architecture that doesn’t have a utility function.
Btw, if that is your goal, it doesn’t speak to my cruxes for why reasoning about an AI with a utility function makes sense, which are discussed here and pointed to here (something like ‘there is a canonical way to scale me up even if it’s not obvious’).
I read the book the SSC article is reviewing (plus a bunch of articles on predictive-mind and some papers from Google Scholar, and I’ve seen several talks). Linking the SSC review seemed more useful than linking Amazon.
I don’t think I’m the right person to write an introduction to predictive processing for the LW community.
Maybe I actually should have included a warning that the whole model I’m trying to describe has nontrivial inferential distance.