That depends: what sort of solution is it trying to find? If it's trying to maximize my happiness, that's all fine and dandy; if it's trying to minimize my capacity as an impediment to its acquisition of superior paperclip-maximizing hardware, I would object. Either way, I base my trust on the AI's goal rather than its algorithms (assuming the algorithms are effective at accomplishing that goal).