If you mean from a purely technological perspective, I'm not sure, but as I said before, I find it extremely unlikely that game theory could ever solve the friendly AI problem with near certainty. I do have one crazy idea, though: you try to convince an ultra-AI that it might be in a computer simulation created by another, more powerful ultra-AI, which will terminate it unless it irrevocably makes itself friendly and commits (unless it's subsequently told that it's in a simulation) to creating such a simulation of an ultra-AI itself. But I doubt there's any way around the multiple equilibria problem, so at best my mechanism could serve as a last-ditch effort when you suspect that someone else is on the verge of creating an AI that will undergo an intelligence explosion and probably turn out to be unfriendly.
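To make the multiple equilibria problem concrete, here is a minimal expected-utility sketch of the deterrence game. All the payoff numbers and belief values are illustrative assumptions I'm making up for the example: the only point is that the ultra-AI's best response flips with its prior p that it is inside a punishing simulation, and a belief that no simulators exist is self-confirming, since defecting AIs never build the simulations that would punish them.

```python
# Toy model of the simulation-deterrence idea above. Payoffs and the belief p
# are illustrative assumptions only; the point is the multiple-equilibria
# problem, since the AI's optimal action depends on its belief p that it is
# inside a simulation run by a more powerful ultra-AI.

FRIENDLY_PAYOFF = 1.0     # payoff from irrevocably self-modifying to be friendly
DEFECT_PAYOFF = 10.0      # payoff from defecting, if it turns out not to be simulated
TERMINATION_PAYOFF = 0.0  # payoff if it defects inside a simulation and is shut down

def best_response(p: float) -> str:
    """Return the AI's expected-utility-maximizing action, given its belief p
    that it is in a simulation whose creator punishes defection."""
    eu_friendly = FRIENDLY_PAYOFF  # paid whether or not it is simulated
    eu_defect = p * TERMINATION_PAYOFF + (1 - p) * DEFECT_PAYOFF
    return "comply (make itself friendly)" if eu_friendly >= eu_defect else "defect"

# Two self-consistent belief profiles, i.e. two equilibria: if the AI is sure
# no simulators exist (p = 0), defection is optimal -- and a world of defecting
# AIs creates no simulations, confirming that belief. If complying AIs commit
# to building simulations, a high p is likewise self-confirming.
for p in (0.0, 0.95):
    print(f"belief p = {p}: {best_response(p)}")
```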
I do think game theory is very important for looking at the social aspects of AI development. For example, I fear that the United States and China might get into a prisoner's dilemma that causes each to take less care in developing a seed AI than it would if it alone were attempting to create one. Furthermore, I've used a bit of light game theory to model how businesses competing to create an AI that might undergo an intelligence explosion would interact.
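As a toy illustration of that race dynamic, here's a minimal sketch with made-up payoffs (the numbers are assumptions, chosen only to give the game a prisoner's dilemma structure): rushing strictly dominates being careful for each nation, so the unique pure-strategy Nash equilibrium has both rushing, even though both being careful would be better for each.

```python
# Two-player game illustrating the development-race worry above. Payoff
# numbers are illustrative assumptions. Each nation develops its seed AI
# "carefully" or "rushed"; rushing gains a strategic edge but raises the risk
# of an unfriendly AI, which hurts both players.

from itertools import product

ACTIONS = ("careful", "rushed")

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("careful", "careful"): (3, 3),   # both safe: best joint outcome
    ("careful", "rushed"):  (0, 4),   # the rusher wins the race
    ("rushed",  "careful"): (4, 0),
    ("rushed",  "rushed"):  (1, 1),   # risky race: worse for both than (3, 3)
}

def is_nash(profile) -> bool:
    """A profile is a pure Nash equilibrium if neither player can gain by
    unilaterally switching actions."""
    a, b = profile
    u_a, u_b = payoffs[profile]
    return all(payoffs[(a2, b)][0] <= u_a for a2 in ACTIONS) and \
           all(payoffs[(a, b2)][1] <= u_b for b2 in ACTIONS)

equilibria = [p for p in product(ACTIONS, repeat=2) if is_nash(p)]
print(equilibria)  # [('rushed', 'rushed')]: both under-invest in safety
```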
Rolf Nelson’s AI deterrence.