I agree with the idea that the AI will help with existential risk.
First, superintelligence can create a better utopia.
What I’m asking is “What would this utopia have, in particular, that dath ilan wouldn’t have?” The next question then becomes how much better a society with those things would be than a dath ilan-like society. I’m having trouble imagining the answer to the first question, so I can’t even think about the second one.
Dath ilan would refrain from optimizing humanity (making people happier, using fewer resources, etc.) for fear of optimizing away their humanity. An FAI would know exactly what a person is, and would be able to optimize them much better.
How?

The only answer I could really imagine starts to get into the territory of wireheading. But if that’s the end we seek, then we’re pretty much there now. Soon enough we’ll have the resources to let everyone wirehead as much as they want. If that’s true, then why even bother with FAI (and risk things going wrong with it)? (Note: I suspect that FAI is worth it. But this is the argument I make when I argue against myself, and I don’t really know how to respond.)
The only answer I could really imagine starts to get into the territory of wireheading.
Exactly. If dath ilan tried to do it, they’d get well into the territory of wireheading. Only an FAI could start to get there, and then stop at exactly the right place.
Even if you’re totally in favor of wireheading, whatever it is you’re wireheading has to be sentient. Dath ilan would have to use an entire human brain just to be sure it is. An FAI could make an optimally sentient orgasmium.
That’s just happiness, though. An FAI could create new emotions from scratch. Nobody values complexity for its own sake; that would just mean setting fire to everything so there’s more entropy. The key is figuring out exactly what it is we value, so we can tell whether a complicated system is valuable. An FAI could give us a very interesting set of emotions.