If you mean a self-aware AI, then I doubt the creator will have much say in whether the artificial person turns out good or bad. How much blame do you put on the parents of a killer?
The actions of a self-aware AI are a question for metaphysics and axiology.
If you mean an automaton, then we have the laws already. Why would anyone want to create a machine to break them? The creator would definitely be responsible. :(
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
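As an aside, note that the Laws are not three independent rules but a strict priority ordering: each lower law yields to the ones above it. Here is a minimal sketch of that ordering, assuming hypothetical stub predicates (the stubs are exactly where the real difficulty lives):

```python
# A minimal sketch of the Three Laws as a strict priority ordering.
# Every predicate below is a hypothetical stub: actually deciding
# "does this action harm a human?" is the hard, unsolved part.

def harms_human(action: str) -> bool:
    return "harm" in action  # stub, purely illustrative

def disobeys_orders(action: str, orders: list[str]) -> bool:
    return bool(orders) and action not in orders  # stub

def endangers_self(action: str) -> bool:
    return "self-destruct" in action  # stub

def permitted(action: str, orders: list[str]) -> bool:
    # First Law dominates everything else.
    if harms_human(action):
        return False
    # Second Law yields only to the First (harmful orders were
    # already ruled out above, so no explicit conflict check is needed).
    if disobeys_orders(action, orders):
        return False
    # Third Law yields to both of the above.
    if endangers_self(action):
        return False
    return True

print(permitted("fetch coffee", ["fetch coffee"]))   # True
print(permitted("harm intruder", ["harm intruder"])) # False: First Law wins
```

Even this toy version shows the problem: everything hinges on predicates like harms_human, which nobody knows how to specify, and which is part of why the Laws are widely considered inadequate.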
You appear to be new here, so to explain why someone has downvoted you: this is a frequently discussed topic on this site. The generally accepted view (which I do not necessarily share, but which is the view of the community as a whole) is that an artificial intelligence is likely to be created relatively soon (not tomorrow, but probably within the next century or two), and that if it is created without proper care such an AI might well destroy humanity, the Earth, or the local universe. The community generally considers that the Asimov Laws would not prevent such an event.
I suggest you first read http://wiki.lesswrong.com/wiki/Friendly_artificial_intelligence and the links on that page, then read or at least skim 'the Sequences' (a core set of posts that most people here have read: http://wiki.lesswrong.com/wiki/Sequences ) before commenting further on this subject. It has already been discussed at length on this site, and people will treat repetition of ideas that have already been dealt with as noise rather than signal.