My question stems from personal experience: people recognize a certain solution as the best option and agree to apply it, only to later fail to follow through. This failure may lead to grave consequences, yet the same mistakes keep getting repeated.
My conclusion so far is that this is caused by psychological limitations, usually emotional in nature.
An ASI might try to straighten this out for us, but it would have to take a supporting role. Is that likely if the ASI develops its own consciousness?
And it's highly likely that an ASI will have non-reductive emergent properties beyond our comprehension, just as we developed non-reductive emergent properties beyond the comprehension of other animals. In which case, this 👇🏾
I don't think that this is how values or beneficence work. I think that an aligned superintelligence, one that was actually aligned, would be able to give you a simple, obvious-in-retrospect explanation of why "helping" people in the manner you're worried about isn't even a coherent thing for an aligned superintelligence to do.
But we'd be talking about an ASI, not an AGI. An aligned ASI, for that matter. I don't think it's possible to speculate about what it would be like, or how it would resolve contradictions and confusions that originate from human personality traits.
But if it is modeled after us, is it possible for an AGI to choose not to handle matters the way we do when things don’t go our way?
An example: we put down animals that fail to act the way we want, even when they are behaving according to their own nature.
If an all-reaching AGI were to find itself in a similar situation, its scope of action would be considerably broader than ours.