I’ve mentioned in the past that human brains evaluate moral propositions as “true” and “false” in the same way as other propositions.
It’s true that there are possible minds that do not do this. But the first AI will be programmed by human beings who are imitating their own minds. So it is very likely that this AI will evaluate moral propositions in the same way that human minds do, namely as true or false. Otherwise it would be very difficult for human beings to engage this AI in conversation, and one of the goals of the programmers would be to ensure that it could converse.
This is why, as I’ve said before, programming an AI does not require an understanding of morality; it just requires enough knowledge to program general intelligence. And this is what is going to actually happen, in all probability; the odds that Eliezer’s AI will be the very first AI are probably less than 1 in 1,000, given the number of people trying.