Don’t you think that there would be (at least) strong support for caring about the interests of other intelligent life, if all humans were far more intelligent, knowledgeable, rational, and consistent, and heard all the arguments for and against it?
But making humans more intelligent and more rational would mean altering their volition. An FAI that proactively made people more educated would be little different from one that altered their desires directly. If it told them that the holy Qur’an is not the word of God, it would dramatically change their desires. But what if people actually don’t want to learn that truth? In other words, any superhuman intelligence will exert a very strong observer effect, creating a feedback loop that shapes the future according to the original seed AI, or to the influence of its creators. You can’t expect to create a God and still be able to extrapolate the natural desires of human beings. Human desires are not just a fact about our evolutionary history but also a mixture of superstructural parts such as environmental and cultural influences. If you have some AI God leading humans into the future, then at some point you will have altered all those structures and consequently changed human volition. The smallest bias in the original seed AI will be amplified over time by the feedback between the FAI and its human pets.
ETA: You could argue that all that matters is the evolutionary template for the human brain, and that whatever satisfies it maximally is what we want, what is right. But setting aside the evolution of culture and the environment seems drastic. Why not go a step further and create a new, better mind as well?
I also think it is a mistake to generalize from the people you currently know to be intelligent and reasonable, as they might be outliers. Since I am a vegetarian, I am used to people telling me that they understand what it means to eat meat but that they simply don’t care. We should not rule out the possibility that the extrapolated volition of humanity is actually something that would appear horrible and selfish to us “freaks”.
I really don’t think that’ll happen, but honestly, I’ll have to defer to the judgment of our extrapolated selves. They’re smarter and wiser than me, and they’ve heard more of the arguments and evidence than I have.
That is only reasonable if matters of taste are really subject to rational argumentation and judgment. If it really doesn’t matter whether we desire pleasure or pain, then focusing on smarts might lead either to an infinite regress or to nihilism.