I am not suggesting that ethics is something that comes naturally to an AI. I am asking why people talk about ethics rather than applied neuroscience (which is itself a higher-level problem).
You have to tell an AI how to self-improve and learn about human values without destroying them in the process. I just don’t see how ethics is helpful here. First of all, you’d have to figure out how to make it aware of humans at all, let alone teach it what it means to hurt a human being.
If you really think it is necessary to tell an AI exactly what we mean by ethics, then by the same logic you would also have to tell it what we mean by “we”... in other words, I don’t think this approach is feasible.
The way I see it, friendliness can only mean self-improving slowly, making use of limited resources (e.g. a weak android body), and growing up within human society. After that, the AI can go on to solve the field of ethics on its own. Once it does, and we approve of the result, it can proceed, step by step, toward something like CEV.
The way I see it, friendliness can only mean self-improving slowly, making use of limited resources (e.g. a weak android body), and growing up within human society.
This is a race, though. If you meander and dally, all that means is that some other team reduces your efforts to an “also-ran” footnote.
Think you can coordinate globally to make this into something other than a race? That sounds pretty unlikely. Is there a plan for that, or just some wishful thinking?