...It sounds like you’re hinting at the fact that humans are not benevolent towards fish. If we are benevolent, then we do share the fish’s goals when it comes to outcomes for the fish; we just have other goals as well, which do not conflict. (I’m assuming the fish actually has clear preferences.) And a well-designed AI should not even have additional goals. The lack of understanding might “only” come in with the means, or with our poor understanding of our own preferences.