If you want it to cure cancer, you need to give it a lot of information about physics, chemistry and mammalian biology.
This is much, much safer than tasks like winning elections or fighting wars, since we can basically prevent it from learning human models.
And I should have made this explicit, but I believe sandboxing can be done in such a way that it incurs essentially no performance penalty.
That is, I believe AI sandboxing is one of the most competitive proposals here, reducing the risk to arguably zero in the STEM AI case.
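To make "prevent it from learning human models" a bit more concrete, here is a minimal sketch of one ingredient such a sandbox might use: filtering the training corpus so that only STEM-flavored text gets through. The keyword list, function names, and toy corpus are all hypothetical illustrations of the idea, not a claim about how a real curation pipeline would work (which would need far more than a regex).

```python
import re

# Hypothetical keyword filter -- an illustrative stand-in for whatever
# classifier a real STEM-only curation pipeline would actually use.
HUMAN_TOPICS = re.compile(
    r"\b(person|people|society|election|war|psychology|persua\w*)\b",
    re.IGNORECASE,
)

def is_stem_only(document: str) -> bool:
    """Return True if the document contains none of the flagged human-related terms."""
    return HUMAN_TOPICS.search(document) is None

corpus = [
    "Protein folding is governed by free-energy minimization.",
    "The candidate's speech persuaded voters before the election.",
]

# Keep only documents that pass the filter before they ever reach training.
stem_corpus = [doc for doc in corpus if is_stem_only(doc)]
print(stem_corpus)  # only the protein-folding sentence survives
```

The point of the sketch is just that the restriction happens at the data/environment level, before training, which is why it need not cost anything on the STEM tasks we actually care about.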