It seems a bit weird to call these “obligations” if the considerations they are based upon are not necessarily dispositive. In common parlance, an obligation is generally understood as “something one is bound to do”, i.e., something you must do, either because you are forced to by law, a contract, etc., or because of a social or moral requirement. But that’s a mere linguistic point that others can reasonably disagree on, and ultimately it doesn’t matter all that much anyway.
On the object level, I suspect there will be a great deal of disagreement about what it means for an AI to “deserve” punishment or credit. I am very uncertain about such matters myself, even when thinking about “deservingness” with respect to humans, who not only have a psychological make-up very similar to mine (which allows me to predict with reasonable confidence what their intent was in a given spot) but also exist in the same society as me and are thus expected to follow certain norms and rules that are reasonably clear and well-established. I don’t know of a canonical way of extrapolating my (often confused and, in any case, generally intuition-based) principles and thinking to the case of AIs, which will likely appear quite alien to me in many respects.
This will probably make the task of “ensur[ing] that others also follow their obligations to AIs” rather tricky, even setting aside the practical enforcement problems.
I meant “moral considerations”, not “obligations”, thanks.
The practice of criminal law exists primarily to determine whether humans deserve punishment. The legislature passes laws, judges interpret those laws as factual conditions under which a defendant deserves punishment, and the jury decides whether those conditions have obtained. This is a very costly, complicated, and error-prone process. However, I think the existing institutions and practices can be adapted for AIs.