Although ethical notions change with time and across societies, no doubt some notions and restraints are shared by all societies. These could be programmed into the intelligent machines and in this way they could be made to ‘see’ our morality. But as human morality changes, should the morality of the machines be made to follow? More generally, should we build human ethical and other inertias into intelligent machines? It would certainly be intolerable for us to be judged by values accepted in past times; so presumably we should wish the machines to follow our social mores as they change...
Here it is.
Interesting quote!
Many, many thanks, sir!
Glad to be of service.