What scenario do you see in which the world is in a sociopolitical state where the powers that be, those with influence over the development of AI, have any intention of using that influence for eudaimonic ends, and for everyone rather than just a select few?
Because right now very few people even want this from their leaders. I'm making this argument on LessWrong because people here are the least likely to be hateful or apathetic or whatever else, but there is no real wider political motivation in the direction of universal anti-suffering.
Humans have never gotten this right before, and I don’t expect them to get it right the one time it really matters.
All such realistic scenarios, in my view, rely on managing who has influence over the development of AI. It certainly must not be a government, for example. (At least, not in the sense that officials at the highest levels of government actually understand what is happening. I suppose it could be a government-backed research group, but without micromanagement; and given what we're talking about, the only scenario in which the government doesn't micromanage is one in which it doesn't really understand the implications.) Nor should it be some particularly "transparent" actor catering to public whims, or an inherently for-profit organization, etc.
… Spreading knowledge of AI Risk really isn't a good idea, is it? Its wackiness works in our favour: it avoids exposing the people working on it to poisonous incentives, or to authorities already terminally poisoned by such incentives.