I’ll probably update this comment after I finish some edits to previous posts.
I like this line of thinking. I’d say the big thing we’ve swept under the rug here is the question of what counts as an agent-shaped model. After all, we don’t care about just any simple yet powerful explanation (otherwise the Standard Model would be much more relevant to ethics than it is); we care specifically about the parameters of agent-shaped models.
I see what you’re getting at. For an arbitrary explanation, we need to take into account not only the complexity of the explanation itself, but also how difficult it is to compute a relevant prediction from it. By my criteria, the Standard Model (or any sufficiently detailed theory of physics that accurately explains phenomena within a conservative range of the low-ish-energy environments encountered on Earth) would count, for its complexity, as a very good explanation of any behaviour; but that ignores the fact that actually computing those predictions would be impossible.
While I claimed that there is a clear dividing line between (accuracy and power) and (complexity), this strikes me as an issue straddling complexity and explanatory power, which muddies the waters a little.
Since I appealed to physics explanations in my post, I’m glad you’ve made me think about these points. Moving forward, though, I expect the classes of explanation under consideration to be constrained enough that this issue becomes insignificant. That is, I expect to be directly comparing explanations in the form of goals with explanations in the form of algorithms or similar. Each of these has a clear interpretation in terms of its predictions, and while the former might be harder to compute from, the difference in difficulty should be roughly uniform across the classes (after accounting for the complexity of the explanations), so I feel justified in ignoring it until later.
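To make the trade-off concrete, here is a minimal toy sketch of the kind of criterion I have in mind: score an explanation by rewarding predictive accuracy while penalising both its description length (complexity) and the cost of actually computing predictions from it. The scoring rule, weights, and example models below are purely illustrative assumptions on my part, not anything pinned down in the post.

```python
import time

def score_explanation(predict, description_length, observations,
                      alpha=0.01, beta=1.0):
    """Toy scoring rule (illustrative only): accuracy minus penalties for
    description length and for the wall-clock cost of computing predictions."""
    start = time.perf_counter()
    correct = sum(1 for x, y in observations if predict(x) == y)
    compute_cost = time.perf_counter() - start
    accuracy = correct / len(observations)
    return accuracy - alpha * description_length - beta * compute_cost

# Hypothetical example: two explanations of the same observed behaviour
# (an agent that outputs the square of its input).
observations = [(x, x * x) for x in range(100)]
goal_model = lambda x: x * x        # short "goal-shaped" explanation
table_model = dict(observations)    # long lookup-table "explanation"

print(score_explanation(goal_model, description_length=5, observations=observations))
print(score_explanation(table_model.get, description_length=200, observations=observations))
```

On this toy rule, both explanations are equally accurate and cheap to run, so the complexity penalty is what separates them; a Standard-Model-style explanation would instead be sunk by the compute-cost term, which is exactly the distinction under discussion.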