roko: “Game theory doesn’t tell you what you should do, it only tells you how to do it. E.g. in the classic prisoner’s dilemma, defection is only an optimal strategy if you’ve already decided that the right thing to do is to minimize your prison sentence.”
Survival and growth shape the trajectory of a particle in mind space. Some “ethical systems” may act as attractors. Particles interact, clumps interact, and higher-level behaviors emerge. A super AI might be able to navigate the density substructures of mind space guided by game theory. The “right” decision would be the one that maximizes persistence/growth. (I’m not saying this would be good for humanity; I’m only suggesting that a theory of non-human ethics is possible.)
(Phil Goetz, I wrote the above before reading your comment: “...variation in possible minds, for sufficiently intelligent AIs, is smaller than the variation in human minds.” Yes, this is what I was trying to convey by “attractors” and navigation of density substructures in mind space.)
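To make the attractor picture a bit more concrete, here is a toy sketch, entirely my own construction for illustration: the 2D state space, the attractor positions, and the step sizes are all made up. “Minds” are points that drift toward whichever of a few fixed attractors is nearest (standing in for dynamics that favor persistence/growth), and the initially wide variation collapses onto a handful of basins.

```python
import numpy as np

# Toy sketch (my own construction): "minds" are points in a 2D state
# space; a potential with a few minima plays the role of ethical
# attractors selected for persistence/growth.
rng = np.random.default_rng(0)

# Hypothetical attractor locations, chosen arbitrarily.
attractors = np.array([[-2.0, 0.0], [2.0, 1.0], [0.0, -2.0]])

def drift(x):
    """Pull each point toward its nearest attractor (gradient descent
    on a piecewise-quadratic potential)."""
    d = x[:, None, :] - attractors[None, :, :]     # (N, K, 2) offsets
    nearest = np.argmin((d ** 2).sum(-1), axis=1)  # closest attractor per point
    return attractors[nearest] - x                 # step toward it

# Start with widely scattered minds, evolve with a little noise.
initial = rng.normal(scale=4.0, size=(500, 2))
minds = initial.copy()
for _ in range(200):
    minds += 0.05 * drift(minds) + 0.02 * rng.normal(size=minds.shape)

# Variation collapses onto a few basins: compare spreads.
d_final = ((minds[:, None, :] - attractors[None, :, :]) ** 2).sum(-1)
basins = np.argmin(d_final, axis=1)
print("attractor occupancy:", np.bincount(basins, minlength=len(attractors)))
print("std of initial states:", initial.std(axis=0))
print("std of final states:  ", minds.std(axis=0))
```

Run as-is, the final spread comes out much smaller than the initial spread, which is the sense in which “variation in possible minds ... is smaller than the variation in human minds” if attractors dominate the dynamics.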