I think this mostly dissolves your other points, which I read as contingent on the theory not predicting that humans would find variety and surprise good in some circumstances. If it doesn't, please let me know what the remaining concerns are in light of this explanation (or, alternatively, object to my explanation of why we expect surprise to sometimes be net good).
Yeah, I noted that I and other humans often seem to enjoy surprise, but I was also trying to make a different point: that it makes sense you'd observe competent agents doing many things which can be explained by minimizing prediction error, no matter what their goals are.
But it isn't important for you to respond further to this point if you don't feel it accounts for your observations.