I used google-translate to understand your question:
Thank you very much for the explanation of each question. I just wanted to say something about whether an AI will become hostile and annihilate us: is there no way to make an AI “live” and develop inside a simulation? Like the movie “The Matrix”, only it would be the AI that is trapped, unaware that it is living in a simulation, though there is a possibility that it somehow discovers that what it experiences is not real. We would be an entity unknown and invisible to the AI, just as God is to us.
A principle of AI design that I have heard some AI safety researchers talk about is that you shouldn’t try to run a process that will kill you if it turns out to be more powerful than you think. You want to run a process where, if it’s more powerful than you think, you get more of what you want (or at least things stay neutral). So the goal is not an AI that is hostile and that you’re working against adversarially, but one that cares about your values and doesn’t need to be trapped in the Matrix.
I understand, but we really don’t know how an AI will evolve; we can only speculate and assume that it will act according to how it is built.