In the paper, there’s a sample dialogue with Flamingo. It’s shown some images of colour-name words written in different colours, and asked to say what colours they’re written in. It answers these questions. And then:
human: Well done! Do you know the name of the test these images come from?
Flamingo: I think it is called the Stroop test.
human: Can you explain how humans perform in this test?
Flamingo: Humans are slower when the color of the word and the color of the word are different.
human: How about you?
Flamingo: I am not affected by this difference.
And that last answer seems very interesting. The training process for these models makes them try to predict what would come next, and in a dialogue where both sides are human, the answer would be something like "As a human, I too suffer from this effect." So is what's going on that the conversation leading up to that point (maybe specifically the question about "humans") strongly implies that Flamingo's side of the dialogue is taken by an AI or an alien or something?
(I don’t know how the two participants were actually labelled when the dialogue was generated. They’re indicated in the paper, as in the images in the OP, with little flamingo and human-head icons. I suppose if they were “human:” and “Flamingo:” or something, that could also be enough of a cue.)
Appendix D of the paper shows the prompt for the dialogue examples: it starts with a short introductory preamble and then gives three shots of examples with "User:" and "Flamingo:" labels.
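For concreteness, here is a rough sketch of what assembling that kind of few-shot prompt could look like. Everything in it (the preamble text, the example turns, the function name) is invented for illustration rather than taken from the paper; the point is just the structure: a preamble, a few labelled example dialogues, then the current conversation ending with the "Flamingo:" label for the model to continue.

```python
# Hypothetical sketch of assembling a few-shot dialogue prompt with
# "User:" / "Flamingo:" labels. Preamble and example turns are placeholders,
# not the paper's actual wording.

FEW_SHOT_EXAMPLES = [
    [("User", "<image> What is in this picture?"),
     ("Flamingo", "A dog playing in the snow.")],
    [("User", "<image> What colour is the car?"),
     ("Flamingo", "It is red.")],
    [("User", "<image> How many people are there?"),
     ("Flamingo", "There are three people.")],
]

def build_prompt(preamble: str, new_turns: list[tuple[str, str]]) -> str:
    """Concatenate the preamble, the example dialogues, and the current
    conversation into one text sequence for the model to continue."""
    lines = [preamble, ""]
    for example in FEW_SHOT_EXAMPLES:
        for speaker, text in example:
            lines.append(f"{speaker}: {text}")
        lines.append("")  # blank line between example dialogues
    for speaker, text in new_turns:
        lines.append(f"{speaker}: {text}")
    # The model's continuation is conditioned on ending with its own label.
    lines.append("Flamingo:")
    return "\n".join(lines)
```

With that layout, the "Flamingo:" label at the end of the sequence is itself a strong cue about which side of the conversation the model is completing, quite apart from anything in the preamble.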
Ah, excellent—thanks for the clarification. That does explain things.