But what if AMF saves a child who grows up to be a biotechnologist and goes on to weaponize malaria and spread it to millions?
If you try hard enough, you can tell a story where any effort to accomplish X somehow turns out to accomplish ~X, but one must distinguish possibility from the balance of probability.
Yes, and the story where the child AMF saves grows up to be a biotechnologist who weaponizes malaria and spreads it to millions doesn’t pass the balance of probability test. The story that MIRI creates a dangerous AI fails to pass the balance of probability test only to the extent that one believes it is improbable that anyone can create such an AI. I do indeed consider it far more likely than not that there will never be the all-powerful AI you fear. And by that standard, donations to MIRI are simply ineffective compared to donations to AMF.
However, if I’m wrong about that and powerful FOOMing UFAIs are in fact possible, then I need to consider whether MIRI’s work is wise. If AIs do FOOM, there seems to me to be a very real possibility that MIRI’s work will either create a UFAI while trying to create a FAI, or alternatively enable others to do so. I’m not sure that’s more likely than that MIRI will one day create a FAI, but you can’t just multiply by the value of a very positive and very speculative outcome without also weighing the possibility of a very negative and very speculative outcome.
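To make that last point concrete, here’s a toy expected-value sketch in Python; the probabilities and payoffs are pure illustration (my assumptions, not estimates of anything):

```python
# Toy expected-value sketch; every number here is made up for illustration.
p_fai = 0.01    # assumed chance the work yields a Friendly AI
v_fai = 1e9     # assumed value of that outcome (arbitrary utility units)
p_ufai = 0.01   # assumed chance the same work yields or enables a UFAI
v_ufai = -1e9   # assumed cost of that outcome

ev_upside_only = p_fai * v_fai                   # 10,000,000.0
ev_both_tails = p_fai * v_fai + p_ufai * v_ufai  # 0.0

print(ev_upside_only)  # counting only the speculative upside: looks great
print(ev_both_tails)   # counting both tails: the downside cancels it
```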
The story that MIRI creates a dangerous AI fails to pass the balance of probability test only to the extent that one believes it is improbable that anyone can create such an AI.
[...]
However, if I’m wrong about that and powerful FOOMing UFAIs are in fact possible, then I need to consider whether MIRI’s work is wise. If AIs do FOOM, there seems to me to be a very real possibility that MIRI’s work will either create a UFAI while trying to create a FAI, or alternatively enable others to do so.
If you raise your estimate of the probability of uFAI enough for MIRI to plausibly kill everyone, the probability of someone else doing it goes up even more.
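A toy way to see this, under my own made-up assumptions (a fixed number of groups, each with the same independent base rate of stumbling into uFAI):

```python
# Toy model, not from the thread: assume n groups could each independently
# stumble into uFAI with the same base probability p; MIRI is one of them.

def p_anyone_else(p, n=10):
    """Chance that at least one of the other n-1 groups creates uFAI."""
    return 1 - (1 - p) ** (n - 1)

for p in (0.001, 0.01, 0.1):
    # P(MIRI does it) = p; P(someone else does it) grows roughly (n-1)x faster
    print(f"p={p}: P(MIRI)={p}, P(someone else)={p_anyone_else(p):.3f}")
```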
Maybe. I’m not sure about that, though. MIRI is the only person or organization I’m aware of that seems to want to create a world-controlling AI, and it’s the world-controlling part that I find especially dangerous. That could send MIRI’s AI in directions others won’t go. Are there other organizations attempting to develop AIs to control the world? Is anyone else trying to build a benevolent dictator?
MIRI’s stated goal is more meta:

The Machine Intelligence Research Institute exists to ensure that the creation of smarter-than-human intelligence benefits society.
They are well aware of the dangers of creating a uFAI, and you can be certain they will be really careful before they push a button that has the slightest chance of launching the ultimate ending (good or bad). Even then, they may very well decide that “being really careful” is not enough.
Are there other organizations attempting to develop AIs to control the world?
It probably doesn’t matter, as any uFAI is likely to emerge by mistake:

Anthropomorphic ideas of a “robot rebellion,” in which AIs spontaneously develop primate-like resentments of low tribal status, are the stuff of science fiction. The more plausible danger stems not from malice, but from the fact that human survival requires scarce resources: resources for which AIs may have other uses.
Many AIs will converge toward being optimizing systems, in the sense that, after self-modification, they will act to maximize some goal. For instance, AIs developed under evolutionary pressures would be selected for values that maximized reproductive fitness, and would prefer to allocate resources to reproduction rather than supporting humans.
Are there other organizations attempting to develop AIs to control the world? Is anyone else trying to build a benevolent dictator?
Is MIRI attempting to develop any sort of AI? I understood the current focus of its research to be the logic of Friendly AGI, i.e., given the ability to create a superintelligent entity, how do you build one that we would like to have created? This need not involve working on developing one.