After a while, smart machines will probably know what a human is better than individual humans do—due to all the training cases we can easily feed them.
“Is this behaviour ethical?” generally seems to be a trickier categorisation problem: humans disagree about it more, it is more a matter of degree, and so on.
The whole “can’t we just let the S-I-M figure it out?” business seems kind of paralysing. Should we stop working on math or physics because the S-I-M will figure it out? No, because we need to use that stuff in the meantime.