Meta comment (I can PM my actual responses once I work out what I want them to be): I really struggled with this process because of the awkward tension between answering the questions and playing a role. I just don’t understand what my goal is.
Let me call my view position 1, and the other view position A. At first I read just this post and thought it was simply a survey where I should “give my honest opinion”, but where some of the position A questions would be nonsensical for someone holding position 1, so I should pretend a little in order to give an answer that isn’t “mu”.
Then I read the link on what an Ideological Turing test actually is, and that changed my thinking completely. I don’t want to give almost-honest answers to position A. I want to create a character who genuinely holds position A and write entirely fake answers that are as believable as possible and may have nothing to do with my own opinions.
On my first attempt at that, though, it was still obvious which was which, because my actual position 1 views were nuanced, unusual, and contained a fair number of pro-A elements, making it quite clear when I was giving my real opinion. So I started meta-gaming. If I want to fool people, I really need a fake position 1 opinion as well. In fact, if I really want to fool people, I need to create a complete character with views nothing like my own and answer as them for both sets. But surely anyone could get 50% just by writing obviously ignorant answers for both sides, which doesn’t seem productive.
I guess my question is: what’s my “win” condition here? Are we taking individuals and trying to classify their position? If so, do I “win” if it’s 50-50, or do I “win” if it’s 100-0 in favour of the opposite opinion? Or are we mixing all the answers for position A together and classifying them as genuine or fake, then separately doing the same for position 1? In that case I suppose I “win” if the position I support is the one classified with higher accuracy; in other words, I want to be classified as genuine twice. That actually makes the most sense. Maybe I’m just getting confused by all the paired-by-individual responses in the comments, which is not at all how the evaluators will see them: they should not be told which pairs come from the same person at all.
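(For what it’s worth, here is a minimal sketch of that pooled scheme as I understand it; everything in it, the `Answer` record, `score_pool`, the idea of a judge callable, is my own illustration, not anything the post actually specifies.)

```python
# A minimal sketch of the "pooled" scoring scheme speculated about above.
# All names here (Answer, score_pool, the judge callable) are my own
# illustration; nothing in the post specifies this protocol.
import random
from dataclasses import dataclass

@dataclass
class Answer:
    author: str    # hidden from the evaluators
    text: str      # the answer as written
    genuine: bool  # ground truth: does the author really hold this position?

def score_pool(answers, judge):
    """Shuffle one position's answers together and have the judge label
    each one, from the text alone, as genuine (True) or fake (False)."""
    pool = answers[:]
    random.shuffle(pool)  # pairs by the same author are never revealed
    return {a.author: judge(a.text) for a in pool}

# Under this reading, I "win" if the judges mark me genuine in both pools:
# the one for my real position and the one for the position I'm faking.
```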
Sorry if everyone else gets this already, but I would have thought there are others reading just this post, without enough context, who might have similar issues.