In that example you propose someone giving a thin AI a very general goal which would require a lot of general intelligence even to understand.
If you have an AI which understands biochemistry, you’d give it a goal like “design me a protein which binds to this molecule”, not “maximize goodness and minimize badness”.
The only way what you’re proposing would work is if it were a general AI with merely human-level abilities in most areas, combined with a small number of areas of extreme expertise. That is not a thin AI or a non-general AI.
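To make the contrast concrete, here is a toy sketch (all names invented, not a real docking API): the narrow goal is a pure function over a closed simulated domain, while the broad goal can’t even be written down without a model of real-world outcomes as an input.

```python
from typing import Callable

def toy_binding_score(protein: str, molecule: str) -> float:
    # Invented stand-in for a binding simulation: rewards shared residue
    # letters. Everything it references lives inside this program.
    return len(set(protein) & set(molecule)) / max(len(set(molecule)), 1)

def narrow_goal(candidates: list[str], molecule: str) -> str:
    # "Design me a protein which binds to this molecule": optimizable
    # entirely inside the simulation, no world model required.
    return max(candidates, key=lambda p: toy_binding_score(p, molecule))

def broad_goal(candidates: list[str],
               health_model: Callable[[str], float]) -> str:
    # "Maximize goodness and minimize badness": only defined relative to a
    # model of real-world outcomes; that model is where the generality hides.
    return max(candidates, key=health_model)

print(narrow_goal(["MKVA", "GHTR", "ACDE"], molecule="ACDF"))  # -> "ACDE"
```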
It seems the general goal could be cashed out in simple ways, with biochemistry, epidemiology, and a (potentially flawed) measure of “health”.
I think you’re sneaking in a lot with the measure of health. As far as I can see, the only reason it’s dangerous is that it cashes out in the real world, on a real, broad population rather than in a simulation. Having the AI reason about a drug’s effects on a real-world population definitely seems like a general skill, not a narrow one.
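A toy Goodhart-style illustration of that worry (all numbers and names invented): once the optimizer’s actions can move a flawed proxy for “health” without moving the real thing, optimizing the proxy picks the gamed option.

```python
interventions = {
    # name: (effect on true health, effect on reported symptoms)
    "cure_disease":     (+10, -10),
    "suppress_reports": (0, -30),   # moves the proxy without moving the target
    "placebo":          (0, -2),
}

def proxy_score(effects: tuple[int, int]) -> float:
    # The flawed measure: "healthier" just means fewer reported symptoms.
    true_health, reported_symptoms = effects
    return -reported_symptoms

best = max(interventions, key=lambda name: proxy_score(interventions[name]))
print(best)  # -> "suppress_reports": the proxy rewards gaming, not health
```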