I and the people I spend time with by choice are actively seeking to be more informed, more intelligent, and more able to carry out our decisions. I know that I live in an IQ bubble and that many, perhaps most, other people do not share these goals. A tool AI might be like me, or it might be like someone who is not like me. I used to think all people were like me, or would be if they only knew about (insert whatever thing I was into at the time). Now I see more diversity in the world. A 'dog' AI that is perfectly happy being a human playmate / servant and has no desire at all to be a ruler of humans seems as likely as the alternatives.
Using anthropomorphic reasoning when thinking of AIs can easily lead us astray.
The optimum degree of anthropomorphism is not zero, since AIs will to some extent reflect human goals and limitations.