Is there a way I can completely opt out of this, such that I do not have to concern myself as to what precisely is or is not considered a voting ring / etc?
To be clear:
Moderation: We’ll bring mod powers against accounts that are abusing the system. We’ll also do a pass over the votes at the end of the week to check for any suspicious behavior (while aiming to minimize any deanonymization).
This is a strong negative for me, to the point I am considering leaving the site as a result.
I appreciated your comment on my post earlier today! Don’t leave!
(Responding to your “To be clear” edit.)
I see.
Insofar as your point is about deanonymization, I’ll say that so far in the LW team we’ve tried hard to not share data about identities within the team and definitely not outside of it. When I wrote that sentence I was primarily meaning “while aiming to minimize any deanonymization within the team” e.g. if we have to, one person checks an identity and doesn’t tell it to the other team members. I almost didn’t even think about public deanonymization as a possible outcome. When I said “pass over the votes” I’m primarily expecting us to look at userIDs which are long number/digit strings and avoid looking at actual usernames.
I only expect us to look into identities if something obvious and egregious is happening. I’m not sure what to say other than this is how it’s kind of been most of the time (e.g. this has happened in the past when we’ve looked into sockpuppets) and I don’t think you need to worry about this if you’re not trying to game the system and you’re not actively trying to exchange votes.
Same goes for other mod powers (e.g. bans): I think it’s very unlikely you’ll be affected by these things this week unless you’re consciously engaging in vote trading.
(I’ve also opened a PM chat with you if there’s things you’d like to say/ask privately.)
Just for the record, we generally monitor anonymized voting activity and do very rare deanonymizing spot-checks if things look suspicious, since LessWrong has a long history of having to deal with people abusing the voting system in one way or another. I don’t think the above really implies anything out of the ordinary, so if this is an issue, maybe it’s been an issue for a long time?
If I had to summarize: Good Heart normalizes using this for ‘trivial’ purposes to an extent that I am uncomfortable with.
Does it? I don’t think it’s a trivial concern if someone is trying to just take hundreds or potentially thousands of dollars. That’s the sort of concern that I think is pretty reasonable for a LW mod to check anonymized (and if it’s looking pretty bad some deanonymized) voting records. As a datapoint I expect financial companies like Stripe would do this stuff for much lower levels of fraud.
A nearby argument you could make is “having a week where you substantially increase the reward for attacking the site is not worth it because the measures you’ll need to use to defend the site will infringe upon users’ privacy”. To which I say this is a cost-benefit calculation, I think the value is quite high, and I think it’s quite likely that nobody will try any funny business at all and nobody’s privacy will be infringed upon by the mod team.
(Your discomfort is noted, and I’ll certainly weigh it and others’ discomfort when (a) reflecting on whether this experiment was net valuable and (b) when thinking about running other experiments in the future.)
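For concreteness, the kind of anonymized pass described above might look something like the following minimal sketch. The vote-record shape, function names, and threshold are assumptions for illustration, not LessWrong’s actual schema or tooling; the point is just that flagging can run entirely on opaque userIDs, with identities consulted, if ever, only as a final manual step.

```python
from collections import Counter

# Minimal sketch of an anonymized vote-trading check. The record shape
# (voter_id, author_id) and the threshold are illustrative assumptions,
# not LessWrong's actual schema. Both IDs are opaque strings (the long
# number/digit userIDs mentioned above); no usernames are ever loaded.

def reciprocal_vote_counts(votes):
    """For each unordered pair of IDs, count upvotes flowing in *both*
    directions (the min of the two directed counts)."""
    directed = Counter(votes)  # (voter_id, author_id) -> upvote count
    pairs = {}
    for voter, author in directed:
        if voter == author:
            continue
        a, b = sorted((voter, author))
        pairs[(a, b)] = min(directed.get((a, b), 0), directed.get((b, a), 0))
    return pairs

def flag_suspicious(votes, threshold=20):
    """Return ID pairs whose reciprocal voting meets the threshold; only
    these still-pseudonymous pairs would ever be candidates for a manual,
    deanonymizing spot-check."""
    return [pair for pair, n in reciprocal_vote_counts(votes).items()
            if n >= threshold]

# Example: two accounts upvoting each other heavily get flagged, while a
# one-directional fan relationship does not.
votes = [("u1", "u2"), ("u2", "u1")] * 25 + [("u3", "u1")] * 40
print(flag_suspicious(votes))  # [('u1', 'u2')]
```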
Ah. Here might be some of the issue.
Given that this was introduced on April 1st, I have a strong prior that it was an April Fool’s joke.
If it was, you’re sending a signal that you’re willing to dox people for an April Fool’s joke.
If it wasn’t, you picked a very unfortunate time to do something serious.
I think it’s pretty reasonable to choose to do something a little out-there / funny on April Fool’s, even if there are additional more serious reasons to do it.
I’d argue the exact opposite.
Bayesian reasoning means that April Fool’s is by far the worst day of the year to do something that you wish to be taken as not purely a joke, especially something that is also a little out there / funny.
I think having a Schelling day for trying weird stuff is good, and April Fool’s Day seems fine. I don’t have nearly as strong a feeling as you seem to that April Fool’s jokes are never partially serious.
When you say that “Bayesian reasoning means that April Fool’s is by far the worst day of the year [for an experiment like this]”, what do you mean? I expect you mean something relating to reasoning about intentions during April Fool’s (and that this lack of clarity amongst commenters is a negative), but the specifics are unclear to me. Your more expansive post above details some of the problems you have with this experiment, but doesn’t relate back to Bayes in any way I can identify.
The probability that it’s just a joke is higher on April Fool’s.
P(fake | April Fools) / P(real | April Fools) is pretty large (note that “fake” means “not(real)”, which makes this a Bayes ratio).
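Spelled out, that is just Bayes’ rule in odds form: the posterior odds that the announcement is a joke equal the prior odds multiplied by the likelihood ratio of it being posted on April 1st.

```latex
% Odds form of Bayes' rule for the ratio above. "fake" = purely a joke,
% "real" = seriously meant; the two are exhaustive by construction.
\[
\underbrace{\frac{P(\mathrm{fake}\mid \mathrm{Apr\,1})}{P(\mathrm{real}\mid \mathrm{Apr\,1})}}_{\text{posterior odds}}
=
\underbrace{\frac{P(\mathrm{Apr\,1}\mid \mathrm{fake})}{P(\mathrm{Apr\,1}\mid \mathrm{real})}}_{\text{likelihood ratio}}
\times
\underbrace{\frac{P(\mathrm{fake})}{P(\mathrm{real})}}_{\text{prior odds}}
\]
% Joke announcements cluster on April 1st while serious ones are spread
% across the year, so the likelihood ratio is large and the posterior
% odds tilt toward "fake" even under a prior that favors "real".
```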
I see! You thought it was just a joke. That makes way more sense.
Still I don’t quite get this. I don’t get why serious and playful are supposed to be separate magisteria, never to interact. Like, if someone cheats a lot when we play board games, then I think of them as more likely to be a cheater in real life too. You could say “why would you connect the playful and the serious” and I’d be like “they’re the same person, this is how they think, their character comes across when they play”. Similarly, I think there’s something silly/funny about making Good Heart tokens and paying for them on April First. And yet, if someone tries to steal them, I will think of that as stealing.
But yeah, noted that you assumed it was a joke. Not the first time this has happened to me.
This feels close to a crux to me. Compare: if you were in a theater troupe, and someone preferred to play malicious characters, would you make the same judgment?
So, it’s not a question of “playful” versus “serious” attitudes, but of “bounded by fiction” versus “executed in reality”. The former is allowed to leak into the latter in ways that are firmly on the side of nondestructive, so optional money handouts in themselves don’t result in recoil. But when that unipolar filter is breached, such as when flip-side consequences like increased moderator scrutiny also arrive in reality, not having a clear barrier where you’ve applied the same serious consideration that the real action would receive feels like introducing something adverse under false pretenses. (There is some exception made here for psychological consequences of e.g. satire.)
The modern April Fools’ tradition as I have usually interpreted it implies that otherwise egregious-seeming things done on April Fools’ Day are expected to be primarily fiction, with something like the aforementioned unipolar liminality to them.
Combining this with the above, I would predict TLW to be much less disturbed by a statement of “for the purpose of Good Heart tokens, we will err on the broad side in terms of non-intrusively detecting exploitative behavior and disallowing monetary redemption of tokens accumulated in such a way, but for all other moderation purposes, the level of scrutiny applied will remain as it was”. That would limit any increase in negative consequences to canceling the positive individual consequences “leaking out of” the experiment.
The other and arguably more important half of things here is that the higher-consequence action has been overlaid onto an existing habitual action in an invasive way. If you were playing a board game, moving resource tokens to your area contrary to the rules of the game might be considered antisocial cheating in the real world. However, if the host suddenly announced that the tokens in the game would be cashed out in currency and that stealing them would be considered equivalent to stealing money from their purse, while the game were ongoing, I would expect some people to get up and leave, even if they weren’t intending to cheat, because the tradeoff parameters around other “noise” risks have suddenly been pulled out from underneath them. This is as distinct from e.g. consciously entering a tournament where you know there will be real-money prizes, and it’s congruent with TLW’s initial question about opting out.
For my part, I’m not particularly worried (edit: on a personal level), but I do find it confusing that I didn’t see an explicit rule for which votes would be part of this experiment and which wouldn’t. My best guess is that it applies when both the execution of the vote and the creation of its target fall within the experiment period; is that right?
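(If it helps to pin the guess down: as a predicate it would be something like the sketch below, where the field names and the exact window dates are hypothetical, chosen only to restate the rule.)

```python
from datetime import datetime

# Hypothetical window and field names; this restates the guessed rule
# above and is not the experiment's confirmed implementation.
EXPERIMENT_START = datetime(2022, 4, 1)
EXPERIMENT_END = datetime(2022, 4, 8)

def in_window(t: datetime) -> bool:
    return EXPERIMENT_START <= t < EXPERIMENT_END

def vote_earns_tokens(vote_cast_at: datetime, target_created_at: datetime) -> bool:
    """Guessed rule: a vote counts for Good Heart tokens only if both the
    vote itself and the post/comment it targets fall within the week."""
    return in_window(vote_cast_at) and in_window(target_created_at)
```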
Compare: if you were in a theater troupe, and someone preferred to play malicious characters, would you make the same judgment?
So, it’s not a question of “playful” versus “serious” attitudes, but of “bounded by fiction” versus “executed in reality”.
It’s not anything like a 1:1 relationship, but I do indeed infer some information of that sort. I think people on-average play roles in acting that are “a part of them”. It’s easy to play a character when you can empathize with them.
There are people I know who like to wear black and play evil/trollish roles in video games. When I talk to them about their actual plans in life regarding work and friendship, they come up with similarly trollish and (playfully) evil strategies. It’s another extension of themselves. In contrast I think sometimes people let their shadows play the roles that are the opposite of who they play in life, and that’s also information about who they are, but it is inverted.
Again, this isn’t a rule and there are massive swathes of exceptions, but I wouldn’t say “I don’t get much information about a person’s social and ethical qualities from what roles they like to play in contexts that are bounded-by-fiction”.
However, if the host suddenly announced that the tokens in the game would be cashed out in currency and that stealing them would be considered equivalent to stealing money from their purse, while the game were ongoing, I would expect some people to get up and leave, even if they weren’t intending to cheat, because the tradeoff parameters around other “noise” risks have suddenly been pulled out from underneath them.
Right. Good analogy.
I definitely updated a bunch due to TLW explaining that this noise is sufficiently serious for them to not want to be on the site. It seems like they’ve been treating their site participation more seriously than I think the median regular site-user does. When I thought about this game setup during its creation I thought a lot more about “most” users rather than the users on the tails.
Like, I didn’t think “some users will find this noisy relationship to things-related-to-deanonymization to be very threatening and consider leaving the site but I’ll do it anyway”, I thought “most users will think it’s fun or they’ll think it’s silly/irritating but just for a week, and be done with it afterward”. Which was an inaccurate prediction! I personally appreciate that TLW gave feedback rather than staying silent.
It’s plausible to me that users like TLW would find it valuable to know more about how much I value anonymity and pseudonymity online.
For example, around two years ago I dropped everything for a couple of days to make DontDoxScottAlexander.com with Jacob Lagerros, to help coordinate a coalition of people to pressure the NYT to have better policies against doxing (in that case and generally).
When a LW user asked if I would vouch for their good standing when they wanted to write a post about a local organization where they were concerned about inappropriate retaliation, I immediately said yes (before knowing the topic of the post or why they were asking) and followed through, even though I later came under a lot of pressure not to, and ended up with a bunch of criticisms of the post myself.
And just last week I used my role as an admin to quickly undo the doxing of a LW user who I (correctly) suspected did not wish to be deanonymized. (I did that 5 mins after the comment was originally posted.)
After doing the last one I texted my friend saying it’s kind of stressful to make those mod calls within a couple of minutes close to midnight, and that there are lots of reasons why people might think it was mod overreach (e.g. I edited someone else’s comment, which feels kind of dirty to me), but I think it’s kind of crucial to protect pseudonymous identities on the internet.
(Obvious sentences that I’m saying to add redundancy: this doesn’t mean I didn’t make a mistake in this instance, and it doesn’t mean that your and TLW’s critiques aren’t true.)
Congratulations[1]. You have managed to describe my position substantially more eloquently and accurately than I could do so myself. I find myself scared and slightly in awe.
Combining this with the above, I would predict TLW to be much less disturbed by a statement of “for the purpose of Good Heart tokens, we will err on the broad side in terms of non-intrusively detecting exploitative behavior and disallowing monetary redemption of tokens accumulated in such a way, but for all other moderation purposes, the level of scrutiny applied will remain as it was”.
Correct, even to the point of correctly predicting “much less” but not zero.
The other and arguably more important half of things here is that the higher-consequence action has been overlaid onto an existing habitual action in an invasive way. If you were playing a board game, moving resource tokens to your area contrary to the rules of the game might be considered antisocial cheating in the real world. However, if the host suddenly announced that the tokens in the game would be cashed out in currency and that stealing them would be considered equivalent to stealing money from their purse, while the game were ongoing, I would expect some people to get up and leave, even if they weren’t intending to cheat, because the tradeoff parameters around other “noise” risks have suddenly been pulled out from underneath them.
This is a very good analogy. One other implication: it also likely results in consequences for future games with said host, not just the current one. The game has changed.
=*=*=*=
I ended up walking away from LessWrong for the (remaining) duration of Good Hart Week; I am debating whether I should delete my account and walk away permanently, or whether I should “just” operate under the assumption[2] that all information I post on this site can and will be later adversarially used against me[3][4] (which includes, but is not limited to, not posting controversial opinions in general).
I was initially leaning toward the former; I think I will do the latter.
To be clear, because text on the internet can easily be misinterpreted: this is intended to be a strong compliment.
To be clear: as in “for the purposes of bounding risk”, not as in “I believe this has a high probability of happening”.
Which is far more restrictive than had I been planning for this from the start.
This is my default assumption on most sites; I was operating under the (erroneous) assumption that a site whose main distinguishing feature was supposedly the pursuit of rationality wouldn’t go down this path[5].
You can easily get strategic-voting-like suboptimal outcomes, for one.
I’m sorry you’re considering leaving the site or restraining what content you post. I wish it were otherwise. Even though you’re a relatively new writer, I like your contributions, and think it’s likely good for the site for you to contribute more over the coming years.
As perhaps a last note for now, I’ll point to the past events listed at the end of this comment as hopefully helpful for you to have a full picture of how at least I think about anonymity on the site.
Bayesian reasoning means that April Fool’s is by far the worst day of the year to do something that you wish to be taken as not purely a joke, especially something that is playful.
Given that the phrasing of your reply implies[1] that this isn’t just a joke, I have additional concerns:
Calling anything real-money-related ‘playful’ is a yellow flag, and being confused as to why anyone might consider this a yellow flag is a red flag[2].
You are discouraging anonymous[3] participants compared to non-anonymous participants, due to the difficulty in anonymously transferring money. This disincentivizes rational discussion.
You are further discouraging throwaways and anonymous participants compared to non-anonymous participants, due to the threshold for withdrawals. This also disincentivizes rational discussion.
You are yet further discouraging anonymous participants compared to non-anonymous participants, due to the signaling that you are willing to dox people. This too disincentivizes rational discussion.
This unilaterally moves voting from a signal of ‘do I wish for this to be more visible[4]’ to ‘do I wish the person who made this comment to get more money’. These are not the same thing. In particular, this discourages upvoting valid-and-insightful comments by participants that I believe are doing more harm than good on the net, and encourages upvoting invalid-or-uninsightful comments by participants that I believe are doing more good than harm on the net. Both of these overall disincentivize rational discussion[5].
This seriously disincentivizes ‘risky’ comments by accounts that have a good reputation. This can easily result in strategic-voting-like suboptimal outcomes.
Doing this and then calling them Good Heart tokens implies that you explicitly brought up the connection to Goodhart’s Law and then decided to push for it anyway.
Likely more, but I am too frustrated by this to continue right now.
But doesn’t explicitly state, I note.
I seriously hope you understand why. If you don’t, I have to seriously re-examine this forum[6]. I might note that the main defining feature of ‘play’ is that, unlike most things which are externally motivated, play is intrinsically motivated, whereas the classic example of an extrinsic motivator is… money.
I am aware this forum isn’t particularly anonymous. And yes, I consider it a strike against it. And yes, there are valid[7] points I have self-censored as a result.
Of course, you can argue about this precise definition too. The point is, these two definitions are not the same.
Even the base ‘number goes up’ of standard upvotes/downvotes is bad enough, with discussions about the problems and possibilities as to how to mitigate this on this very site.
A forum whose main distinguishing feature is supposedly the pursuit of rationality[8], where one of its few[9] admins doesn’t get something this basic and was able to make this much of a change without consulting others to see what they were missing, is, uh, not great.
At least to the best of my knowledge. Obviously I haven’t been able to check by posting said items on this forum.
“To that end, LessWrong is a place to 1) develop and train rationality, and 2) apply one’s rationality to real-world problems.”, from https://www.lesswrong.com/posts/bJ2haLkcGeLtTWaD5/welcome-to-lesswrong
I don’t actually know offhand how many.
This seriously disincentivizes ‘risky’ comments by accounts that have a good reputation. This can easily result in strategic-voting-like suboptimal outcomes.
Highlighting this bit. I hadn’t thought about this at all.
(I am not part of ‘the team’ btw.)
I think I understand that. I do think it’s pretty unlikely this is some kind of step towards a broader trivialization of looking at voting data, but I do understand the concern.
Counterpoint, in this very comment section there is this comment:
...and clicking through to the linked post[1], it’s also talking about e.g. future potential extensions to split between logged-in and non-logged-in users.
Admittedly, this is not specifically voting data, and this step is still a fairly aggregated statistic, but it is a step in the broader trivialization of looking at user data.
...which I was actually somewhat loath to do.
For the record, I think it’s the wrong call for the EA Forum to show that data to users. I know from looking at that data on LW that it is a terrible and distracting proxy for what the actual great content is on the site. I think every time we’ve done the annual LW review, the most-viewed post that year has not passed review. One of the top 20 most viewed LW posts of all time is a bad joke about having an orange for a head. It’s a spiky metric and everyone I know whose job depends on views/clicks finds it incredibly stressful. Karma is a way better metric for what’s valued on-site, and passing annual review is an even better metric.
If you vote as-usual and don’t think about it, I do not expect you will end up explicitly trading votes with other people.
But no, there is no way to opt out of Good Heart Tokens this week.
To be clear: I don’t expect that either. Nevertheless. You’re sending a signal that you’re willing to dox people for an April Fool’s joke.
As an aside: I suspect that “If you vote as-usual and don’t think about it, I do not expect you will end up explicitly trading votes with other people” is less true for me than “usual”.
One of the things I tend to end up doing to discover content on a site like this is to flip through the user pages of people with whom I have had interesting comment chains.
If I ever end up doing so to someone who does the same, this would look very much like trading votes (using an external or implicit collusion mechanism).