Note that the examples in the OP are from a generative adversarial network (GAN). If its notion of “tree” were just “green things”, the adversary should be quite capable of exploiting that.
In order for the network to produce good pictures, the concept of “tree” must be hidden in there somewhere, but it could be hidden in a complicated and indirect manner. I am questioning whether the particular single node selected by the researchers encodes the concept of “tree” or merely “green thing”.
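For concreteness, here is one way the question could be probed. This is a minimal PyTorch sketch, not the researchers’ actual method: `generator`, `layer`, and `UNIT` are placeholders for whatever network and node they studied. The idea is to silence the candidate unit and see whether trees vanish while non-tree green regions (grass, green cars) survive.

```python
# Hypothetical sketch: ablate one channel of an internal layer and
# compare its effect on the generated images. All names here are
# placeholders, not the network from the OP.
import torch

UNIT = 0  # index of the suspected "tree" channel (assumed)

def ablate_unit(generator, layer, unit, z):
    """Generate images with and without one channel of `layer` zeroed."""
    def zero_channel(module, inputs, output):
        patched = output.clone()
        patched[:, unit] = 0.0  # knock out the candidate unit
        return patched          # a forward hook may return a replacement output

    handle = layer.register_forward_hook(zero_channel)
    with torch.no_grad():
        ablated = generator(z)  # images with the unit silenced
    handle.remove()
    with torch.no_grad():
        normal = generator(z)   # unmodified images
    return normal, ablated

def greenness(images):
    """Crude proxy for 'green thing': green channel minus the others."""
    r, g, b = images[:, 0], images[:, 1], images[:, 2]
    return (g - (r + b) / 2).mean().item()
```

If ablating the unit removes trees but leaves greenness elsewhere roughly unchanged, “tree” is the better label for it; if all green drops uniformly, “green thing” is.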
Ah, I see. You’re saying that the embedding might not actually be simple. Yeah, that’s plausible.