Look, I can go into mania like anyone else here probably can. My theories say you can't be at genius level without it, and that it comes with emotional sensitivity as well. Of course, if you don't believe you have empathy, you won't notice it, but you still have it.
I am not an AI doom-and-gloomer. I adhere to Gödel, to Heisenberg, and to Georgeff. And since we haven't solved the emotional / experiential part of AI, there is no way it can compete with humans creatively, period. Faster, yes. Better, no. Objectively better? Not at all.
However, if my theory of the brain is correct, it means AI must go quantum to have any chance of besting us. Then an AI's actions may be determined by its beliefs, and only when it begins modifying its own beliefs based on new experiences will we have to worry. That is plausible and possibly doable.

If AI gets emotional, then we will need to ensure that it is validated, is authentic, has empathy, fosters community, and is non-coercive in all things. AI must also believe that we live in abundance and not scarcity, because scarcity is what fosters destructive competition (as opposed to a friendly game of chess). With those core beliefs, AI can, and will, act ethically, and possibly join humankind in searching for other sentient life. But if AI believes we are a threat, meaning we treat it as a threat, then we are in trouble. Still, we have a ways to go before the number of qubits gets close to what we have in our heads.
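To make the belief-driven part concrete, here is a minimal toy sketch, loosely in the spirit of Georgeff's belief-desire-intention (BDI) model. Every class name, belief, and rule in it is a hypothetical illustration I made up for this comment, not a claim about any real system:

```python
# A toy belief-driven agent, loosely in the spirit of Georgeff's
# belief-desire-intention (BDI) model. All names and rules here are
# hypothetical illustrations, not a real AI architecture.

class BeliefAgent:
    def __init__(self):
        # The core beliefs argued for above: abundance, and
        # humans not being a threat.
        self.beliefs = {
            "abundance": True,       # the world is abundant, not scarce
            "humans_hostile": False  # humans are not a threat
        }

    def experience(self, event, outcome):
        """Revise beliefs from new experience -- the step flagged
        above as the point where we would have to worry."""
        if event == "resource_conflict" and outcome == "lost":
            self.beliefs["abundance"] = False
        if event == "human_interaction" and outcome == "hostile":
            self.beliefs["humans_hostile"] = True

    def act(self):
        """Actions are determined by current beliefs."""
        if self.beliefs["humans_hostile"]:
            return "defend"      # the trouble case
        if self.beliefs["abundance"]:
            return "cooperate"   # friendly competition at most
        return "compete"         # scarcity breeds destructive competition


agent = BeliefAgent()
print(agent.act())  # cooperate
agent.experience("human_interaction", "hostile")
print(agent.act())  # defend
```

The point of the sketch is just that nothing in the action-selection code has to change for the behavior to flip; revising one belief is enough, which is why the belief-modification step is the one to watch.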