It’s probably not even our problem. ISTM that we could easily get to beyond-human level using agents with walled-off brains that can’t self-modify or hack into themselves.
You can normally stop such an agent from bashing its own brains in with a bit of operant conditioning.