I think this gives too much import to consciousness. It seems simplest to model it as a side-effect of some computations, roughly analogous to how some collections of models introspect themselves. The question of whether the consciousness or the calculation “decided” something needs to be un-asked; we should instead recognize that the system which includes both did it.
Most legal theory and naive human morality use “intent” heavily as a factor, likely as a mechanism for predicting future risk. If we wanted to, it seems we could replace it with a more direct model of the agent in question, one which predicts how much punishment is necessary to alter future actions.