> You could say “why would you connect the playful and the serious” and I’d be like “they’re the same person, this is how they think, their character comes across when they play”.
This feels close to a crux to me. Compare: if you were in a theater troupe, and someone preferred to play malicious characters, would you make the same judgment?
So, it’s not a question of “playful” versus “serious” attitudes, but of “bounded by fiction” versus “executed in reality”. The former is allowed to leak into the latter in ways that are firmly on the side of nondestructive, so optional money handouts in themselves don’t result in recoil. But when that unipolar filter is breached, such as when flip-side consequences like increased moderator scrutiny also arrive in reality, not having a clear barrier where you’ve applied the same serious consideration that the real action would receive feels like introducing something adverse under false pretenses. (There is some exception made here for psychological consequences of e.g. satire.)
The modern April Fools’ tradition as I have usually interpreted it implies that otherwise egregious-seeming things done on April Fools’ Day are expected to be primarily fiction, with something like the aforementioned unipolar liminality to them.
Similarly, I think there’s something silly/funny about making good heart tokens and paying for them on April First. And yet, if someone tries to steal them, I will think of that as stealing.
Combining this with the above, I would predict TLW to be much less disturbed by a statement of “for the purpose of Good Heart tokens, we will err on the broad side in terms of non-intrusively detecting exploitative behavior and disallowing monetary redemption of tokens accumulated in such a way, but for all other moderation purposes, the level of scrutiny applied will remain as it was”. That would limit any increase in negative consequences to canceling the positive individual consequences “leaking out of” the experiment.
The other and arguably more important half of things here is that the higher-consequence action has been overlaid onto an existing habitual action in an invasive way. If you were playing a board game, moving resource tokens to your area contrary to the rules of the game might be considered antisocial cheating in the real world. However, if the host suddenly announced that the tokens in the game would be cashed out in currency and that stealing them would be considered equivalent to stealing money from their purse, while the game was ongoing, I would expect some people to get up and leave, even if they weren’t intending to cheat, because the tradeoff parameters around other “noise” risks have suddenly been pulled out from underneath them. This is distinct from, e.g., consciously entering a tournament where you know there will be real-money prizes, and it’s congruent with TLW’s initial question about opting out.
For my part, I’m not particularly worried (edit: on a personal level), but I do find it confusing that I didn’t see an explicit rule for which votes would be part of this experiment and which wouldn’t. My best guess is that it applies when both the execution of the vote and the creation of its target fall within the experiment period; is that right?
> Compare: if you were in a theater troupe, and someone preferred to play malicious characters, would you make the same judgment?
> So, it’s not a question of “playful” versus “serious” attitudes, but of “bounded by fiction” versus “executed in reality”.
It’s not anything like a 1:1 relationship, but I do indeed infer some information of that sort. I think people, on average, play roles in acting that are “a part of them”. It’s easy to play a character when you can empathize with them.
There are people I know who like to wear black and play evil/trollish roles in video games. When I talk to them about their actual plans in life regarding work and friendship, they come up with similarly trollish and (playfully) evil strategies. It’s another extension of themselves. In contrast, I think sometimes people let their shadows play roles that are the opposite of who they are in life, and that’s also information about who they are, but inverted.
Again, this isn’t a rule and there are massive swathes of exceptions, but I wouldn’t say “I don’t get much information about a person’s social and ethical qualities from what roles they like to play in contexts that are bounded-by-fiction”.
> However, if the host suddenly announced that the tokens in the game would be cashed out in currency and that stealing them would be considered equivalent to stealing money from their purse, while the game was ongoing, I would expect some people to get up and leave, even if they weren’t intending to cheat, because the tradeoff parameters around other “noise” risks have suddenly been pulled out from underneath them.
Right. Good analogy.
I definitely updated a bunch due to TLW explaining that this noise is sufficiently serious for them to not want to be on the site. It seems like they’ve been treating their site participation more seriously than I think the median regular site-user does. When I thought about this game setup during its creation I thought a lot more about “most” users rather than the users on the tails.
Like, I didn’t think “some users will find this noisy relationship to things-related-to-deanonymization to be very threatening and consider leaving the site but I’ll do it anyway”, I thought “most users will think it’s fun or they’ll think it’s silly/irritating but just for a week, and be done with it afterward”. Which was an inaccurate prediction! TLW giving feedback rather than staying silent is personally appreciated.
It’s plausible to me that users like TLW would find it valuable to know more about how much I value anonymity and pseudonymity online.
For example, around two years ago I dropped everything for a couple of days to make DontDoxScottAlexander.com with Jacob Lagerros, to help coordinate a coalition of people to pressure the NYT to adopt better policies against doxing (in that case and generally).
When a LW user asked if I would vouch for their good standing, because they wanted to write a post about a local organization and were concerned about inappropriate retaliation, I immediately said yes (before knowing the topic of the post or why they were asking) and did so, even though I later received a lot of pressure not to, and myself ended up with a bunch of criticisms of the post.
And just last week I used my role as an admin to quickly undo the doxing of a LW user who I (correctly) suspected did not wish to be deanonymized. (I did that 5 mins after the comment was originally posted.)
After doing the last one I texted my friend saying it’s kind of stressful to make those mod calls within a couple of minutes close to midnight, and that there are lots of reasons why people might think it mod overreach (e.g. I edited someone else’s comment, which feels kind of dirty to me), but I think it’s crucial to protect pseudonymous identities on the internet.
(Obvious sentences that I’m saying to add redundancy: this doesn’t mean I didn’t make a mistake in this instance, and it doesn’t mean that your and TLW’s critiques aren’t true.)
Congratulations[1]. You have managed to describe my position substantially more eloquently and accurately than I could myself. I find myself scared and slightly in awe.
> Combining this with the above, I would predict TLW to be much less disturbed by a statement of “for the purpose of Good Heart tokens, we will err on the broad side in terms of non-intrusively detecting exploitative behavior and disallowing monetary redemption of tokens accumulated in such a way, but for all other moderation purposes, the level of scrutiny applied will remain as it was”.
Correct, even to the point of correctly predicting “much less” but not zero.
> The other and arguably more important half of things here is that the higher-consequence action has been overlaid onto an existing habitual action in an invasive way. If you were playing a board game, moving resource tokens to your area contrary to the rules of the game might be considered antisocial cheating in the real world. However, if the host suddenly announced that the tokens in the game would be cashed out in currency and that stealing them would be considered equivalent to stealing money from their purse, while the game was ongoing, I would expect some people to get up and leave, even if they weren’t intending to cheat, because the tradeoff parameters around other “noise” risks have suddenly been pulled out from underneath them.
This is a very good analogy. One other implication: it also likely results in consequences for future games with said host, not just the current one. The game has changed.
=*=*=*=
I ended up walking away from LessWrong for the (remaining) duration of Good Heart Week; I am debating whether I should delete my account and walk away permanently, or whether I should “just” operate under the assumption[2] that all information I post on this site can and will later be adversarially used against me[3][4] (which includes, but is not limited to, not posting controversial opinions in general).
I was initially leaning toward the former; I think I will do the latter.
This is my default assumption on most sites; I was operating under the (erroneous) assumption that a site whose main distinguishing feature was supposedly the pursuit of rationality wouldn’t go down this path[5].
I’m sorry you’re considering leaving the site or restraining what content you post. I wish it were otherwise. Even though you’re a relatively new writer, I like your contributions, and think it would likely be good for the site for you to contribute more over the coming years.
As perhaps a last note for now, I’ll point to the past events listed at the end of this comment as hopefully helpful for giving you a full picture of how I, at least, think about anonymity on the site.
[1] To be clear, because text on the internet can easily be misinterpreted: this is intended to be a strong compliment.
[2] To be clear: as in “for the purposes of bounding risk”, not as in “I believe this has a high probability of happening”.
[4] Which is far more restrictive than if I had been planning for this from the start.
[5] You can easily get strategic-voting-like suboptimal outcomes, for one.