The results also explain the independence of PA and NA in the PANAS scales. The PANAS scales were not developed to measure basic affects like happiness, sadness, fear, and anxiety. Instead, they were created to measure affect with two independent traits. While the NA dimension closely corresponds to neuroticism, the PA dimension corresponds more closely to positive activation or positive energy than to happiness. The PANAS-PA construct of Positive Activation is more closely aligned with the liveliness factor. As shown in Figure 1, liveliness loads on Extraversion and is fairly independent of negative affects. It is related to anxiety and anger only through the small correlation between E and N. Depression has an additional relationship because liveliness and depression both load on Extraversion. It is therefore important to make a clear conceptual distinction between Positive Affect (Happiness) and Positive Activation (Liveliness).
Not that it necessarily matters much, since it is the PA part that is particularly bad, while the NA part is the thing that is relevant to your post. But just thought I would mention it.
Thanks for the reference! I was aware of some shortcomings of PANAS, but the advantages (very well-studied, and lots of freely available human baseline data) are also pretty good.
The cool thing about doing these tests with large language models is that it costs almost nothing to get insanely large sample sizes (by social science standards) and that it's (by design) super replicable. Done in a smart way, this procedure might even produce insight into biases of the test design, or it might verify shaky results from psychology (since GPT should capture a fair bit of human psychology). The flip side, of course, is that there will be a lot of different moving parts, and interpreting the output is challenging.
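To make the "large sample sizes, cheap and replicable" point concrete, here is a minimal sketch of what such a procedure might look like. Everything here is an assumption for illustration: `query_model` is a hypothetical stand-in for an actual LLM call (stubbed with random ratings so the sketch runs offline), and the item lists are abbreviated subsets of the full 20-item PANAS.

```python
import random
import statistics

# Abbreviated PANAS item subsets, for illustration only.
# The real scale has 10 PA and 10 NA adjectives, each rated 1-5.
PA_ITEMS = ["interested", "excited", "enthusiastic", "alert", "active"]
NA_ITEMS = ["distressed", "upset", "nervous", "afraid", "irritable"]

def query_model(item: str) -> int:
    """Hypothetical stand-in for an LLM call that asks the model to rate
    how strongly it endorses the given affect word on a 1-5 scale.
    Stubbed with random ratings here so the sketch is self-contained."""
    return random.randint(1, 5)

def administer_panas(n_runs: int) -> dict:
    """Administer the (abbreviated) scale n_runs times and aggregate.
    With an LLM backend, n_runs can be made very large at low cost,
    and fixing the prompt + sampling seed makes the run replicable."""
    pa_scores, na_scores = [], []
    for _ in range(n_runs):
        pa_scores.append(sum(query_model(item) for item in PA_ITEMS))
        na_scores.append(sum(query_model(item) for item in NA_ITEMS))
    return {
        "PA_mean": statistics.mean(pa_scores),
        "NA_mean": statistics.mean(na_scores),
    }

result = administer_panas(n_runs=1000)
```

The interesting analyses then happen on top of the aggregated scores, e.g. comparing `PA_mean`/`NA_mean` against published human baselines, or checking whether the PA and NA sums actually come out uncorrelated across prompt variations.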