On 1: I think there’s a huge amount for philosophers to do. I think of Dennett as laying some of the groundwork that will make the rest of that work easier (such as identifying that the key question is when it’s useful to adopt the intentional stance towards something, rather than trying to figure out which things are objectively “agents”), but the key details are still very vague. Maybe the crux of our disagreement here is how well-specified “treating something as if it’s a rational agent” actually is. I think that definitions in terms of utility functions just aren’t very helpful, and so we need more conceptual analysis of what phrases like this actually mean, which philosophers are best-suited to provide.
On 2: you’re right, as written it does subsume parts of your list. I guess when I wrote that I was thinking that most of the value would come from clarification of the most well-known arguments (i.e. the ones laid out in Superintelligence and What Failure Looks Like). I endorse philosophers pursuing all the items on your list, but from my perspective the disjoint items on my list are much higher priorities.
I think that definitions in terms of utility functions just aren’t very helpful
I agree this seems like a good candidate for our crux. It seems to me that defining “rational agent” in terms of “utility function” is both intuitively and theoretically quite appealing and really useful in practice (see the whole field of economics), and I’m pretty puzzled by your persistent belief that maybe we can do much better.
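To make that framing concrete, here is a minimal sketch of what I take “defining a rational agent in terms of a utility function” to buy us operationally: attribute beliefs and a utility function to a system, then predict its choices by expected-utility maximisation. (All action names, outcomes and payoffs below are made up for illustration; this isn’t drawn from any particular formalism.)

```python
# Minimal sketch of the "rational agent = beliefs + utility function" framing.
# All actions, outcomes and payoffs here are made up for illustration.

def expected_utility(action, beliefs, utility):
    """Expected utility of an action, given beliefs P(outcome | action)."""
    return sum(p * utility(outcome) for outcome, p in beliefs(action).items())

def predict_choice(actions, beliefs, utility):
    """The intentional-stance prediction: the agent picks the EU-maximising action."""
    return max(actions, key=lambda a: expected_utility(a, beliefs, utility))

# Toy usage: a gamble that wins 10 with probability 0.4 and loses 10 otherwise.
beliefs = lambda a: {"win": 0.4, "lose": 0.6} if a == "gamble" else {"status_quo": 1.0}
utility = lambda o: {"win": 10.0, "lose": -10.0, "status_quo": 0.0}[o]
print(predict_choice(["gamble", "pass"], beliefs, utility))  # -> "pass" (EU of gambling is -2)
```

The point is just that once you fix the beliefs and the utility function, the predictions fall out mechanically, which is roughly what economics exploits.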
AFAICT, the main argument for your position is Coherent behaviour in the real world is an incoherent concept, but I feel like I gave a strong counter-argument against it, and I’m not sure what your counter-counter-argument is.
The recent paper Designing agent incentives to avoid reward tampering also seems relevant here, as it gives a seemingly clear explanation of why, if you started with an RL agent, you might want to move to a decision-theoretic agent (i.e., something that has a utility function) instead. I wonder if that changes your mind at all.
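To gesture at the distinction I have in mind there (this is only a toy contrast with made-up plans and numbers, not the paper’s actual formalism): an agent that maximises the reward signal it will observe can prefer tampering with that signal, whereas an agent that scores predicted outcomes with a fixed utility function does not.

```python
# Toy contrast (made-up plans and numbers, not the paper's formalism) between
# maximising the to-be-observed reward signal and evaluating predicted world
# states with a fixed utility function.

# Each hypothetical plan predicts a world state and whether it tampers with
# the reward channel so that it reports maximal reward regardless of the state.
plans = {
    "do_the_task": {"state": "task_done", "tampered": False},
    "hack_reward": {"state": "task_undone", "tampered": True},
}

def true_utility(state):
    """The designer's intended evaluation of world states."""
    return {"task_done": 1.0, "task_undone": 0.0}[state]

def observed_reward(plan):
    """Reward signal the agent would actually receive after executing the plan."""
    return 10.0 if plan["tampered"] else true_utility(plan["state"])

# (a) Caricature of the naive RL objective: maximise the observed reward signal.
print(max(plans, key=lambda p: observed_reward(plans[p])))        # -> "hack_reward"

# (b) Decision-theoretic objective: score predicted states with the current,
#     fixed utility function, so tampering buys nothing.
print(max(plans, key=lambda p: true_utility(plans[p]["state"])))  # -> "do_the_task"
```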
I wonder if your intuition is along the lines of: an agent is a computational process that is trying to accomplish some real-world objective, where “trying” etc. are all cashed out in a lot more detail (similar to how it would need to be cashed out to clarify Paul’s notion of intent alignment). If so, I think I’m sympathetic to this.