I don’t think we should be confident that such things are all that matter (indeed, I think they aren’t), or that their value is independent of features like complexity (compare a thermostat program with an autonomous social robot).
If someone built a “happy neuron farm” of these, would that be a good thing? Would a “sad neuron farm” be bad?
I would answer “yes” and “yes,” especially in expected value terms.