No, none of this needs to be explicitly taught to it, that’s what I’m trying to say.
The AI understands psychology, so just point it at the internet and tell it to inform itself. It might even read through this very comment of yours, decide that these topics could be important for its task, and read up on them all on its own.
By ordering it to imagine what it would do in your position, you implicitly order it to inform itself about all these things so that it can judge well.
If it fails to do so, the humans conversing with the AI will be able to point out plenty of things in its suggestion that they wouldn't be comfortable with. That in turn signals to the AI that it should inform itself better about all these topics and take them into account, so that the humans will be more content with its next suggestion.