I can think of several rubbish ways to make a bunch of humans think that they all have high status; a brain the size of a planet would think of an excellent one.
Create a lot of human-seeming robots = Give everyone a volcano = Fool the humans = Build the Matrix.
To quote myself:
In other words, it isn’t valid to analyze the sensations that people get when their higher status is affirmed by others, and then recreate those sensations directly in everyone, without anyone needing to have low status. If you did that, I can think of only 3 possible interpretations of what you would have done, and I find none of them acceptable:
Consciousness is not dependent on computational structure (this leads to vitalism); or
You have changed the computational structure their behaviors and values are part of, and therefore changed their conscious experience and their values; or
You have embedded them each within their own Matrix, in which they perceive themselves as performing isomorphic computations.
Create a lot of human-seeming robots = Give everyone a volcano = Fool the humans = Build the Matrix
I agree that these are all rubbish ideas, which is why we let the AI solve the problem. Because it’s smarter than us. If this post were about how we should make the world a better place on our own, then these issues would indeed be a (small) problem, but since it was framed in terms of FAI, it’s asking the wrong questions.
BTW, how do you let the AI solve the problem of what kind of AI to build?
What kind of AI to be. That’s the essence of being a computationally complex algorithm, and a decision-making algorithm in particular: you always learn something new about what you should do, and what you’ll actually do, and you don’t just learn it, but make it so.
You’re missing the main point of the post. Note the bullet points are ranked in order of increasing importance. See the last bullet point.