Yes, of course. And this is especially concerning because ‘rationality’, ‘winning’ and the like are quite clearly not ideologically neutral concepts. They are very much the product of a dominator culture as opposed to being more focused on, say, care and nurturing—be it of fellow human beings or our natural environment, a real-life symbiote without which our communities cannot possibly thrive or be sustainable.
LessWrong folks like to talk about their pursuit of a “Friendly AI” as a possible escape from this dilemma. But it’s not clear at all just how ‘friendly’ an AI could be to, say, indigenous peoples whose way of life and culture does not contemplate Western technology. As a general rule of thumb, our developments in so-called “rationality” have not been kind to such groups.
They are very much the product of a dominator culture as opposed to being more focused on, say, care and nurturing—be it of fellow human beings or our natural environment
For someone with a strong interest in or preference for caring and nurturing, rationality is still very useful. It helps you learn how to best care for as many people as possible, or how to nurture as many pandas as possible (or whatever). Caring and nurturing still have win-states; they're just cooperative instead of competitive.
They are very much the product of a dominator culture as opposed to being more focused on, say, care and nurturing—be it of fellow human beings or our natural environment, a real-life symbiote without which our communities cannot possibly thrive or be sustainable.
“Winning” means maximizing your utility function. If you think that “care and nurturing” are important, and yet you failed to include them in your utility function, the fault lies with you, not rationality. Complaining about rationality not taking into account care and nurturing is like complaining about your car not taking into account red lights.
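To make this concrete, here's a toy sketch in Python (every term and weight here is a made-up illustration, not anything from this thread). The point is that the maximizing framework itself is indifferent to what gets valued; an agent that "wins" by maximizing a utility function can weight care and nurturing as heavily as it likes:

```python
# Illustrative toy example: "winning" = maximizing whatever utility
# function you choose. All terms and weights below are hypothetical.

def utility(outcome):
    """Score an outcome; a rational agent picks actions that maximize this."""
    return (
        2.0 * outcome["people_cared_for"]       # care can be weighted heavily
        + 1.5 * outcome["ecosystems_nurtured"]  # so can nurturing the environment
        + 0.5 * outcome["personal_wealth"]      # self-interest, weighted lightly
    )

# Under this utility function, a cooperative, care-focused outcome
# outscores a purely acquisitive one:
caring = {"people_cared_for": 10, "ecosystems_nurtured": 4, "personal_wealth": 1}
acquisitive = {"people_cared_for": 0, "ecosystems_nurtured": 0, "personal_wealth": 20}

assert utility(caring) > utility(acquisitive)  # 26.5 > 10.0
```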
LessWrong folks like to talk about their pursuit of a “Friendly AI” as a possible escape from this dilemma.
What dilemma?
But it’s not clear at all just how ‘friendly’ an AI could be to, say, indigenous peoples whose way of life and culture does not contemplate Western technology.
An AI friendly to Western values would be a tool through which Western civilization could enforce its values. If you don't like Western values, then your objection is to Western values themselves, not to the tool used to facilitate them.
As a general rule of thumb, our developments in so-called “rationality” have not been kind to such groups.
I don't find that to be clear. The mistreatment of non-Western peoples can arguably be attributed to anti-rational positions, and by most measures, most people are better off today than the average person was a thousand years ago.
They are very much the product of a dominator culture as opposed to being more focused on, say, care and nurturing—be it of fellow human beings or our natural environment
What evidence do you have for that claim? Would that pass objective tests for good evidence?