One tool here is for a non-anonymous person to vouch for the anonymous person (because they know the person, and/or can independently verify the account).
True. A maybe not-immediately-obvious possibility: someone playing Aella’s role of posting anonymous accounts could offer the following option: if you give an account and take this option, and the poster later finds out that you seriously lied, then they have the option to de-anonymize you. The point being, in the hypothetical where the account is egregiously false, the accounter’s reputation still takes a hit; and so these accounts can be trusted more. If there’s no possibility of de-anonymization, then the account can only be trusted insofar as you trust the poster’s ability to track accounters’ trustworthiness, which seems like a more complicated+difficult task. (This might be a terrible thing to do, IDK.)
I get VERY creepy vibes from this proposal, and want to push back hard on it.
Although, hm… I think “lying” and “enemy action” are different?
Enemy action occasionally warrants breaking contracts back, after they didn’t respect yours.
Whereas if there is ZERO lying-through-negligence in accounts of PERSONAL EXPERIENCES, we can be certain we set the bar-of-entry far too high.
(Downvoted. I’d have strong downvoted but −5 seems too harsh. Sounds like you’re responding to something other than what I said, and if that’s right, I don’t like that you said “VERY creepy” about the proposal, rather than about whatever you took from it.)
I was very up-front about the role I am attempting to embody in this: Relating to, and trying to serve, people with complicated opinions who are finding it hard to talk about this.
I feel we needed someone to take this role. I wish someone had done it for me, when my stuff happened.
You seem to not understand that I am making this statement, from that place and in that capacity.
Try seeing it through the lens of that, rather than thinking that I’m making confident statements about your epistemic creepiness.
Hopefully this helps to resolve your confusion.
Depends on the algorithm used to determine whether “you seriously lied”.
Imagine a hypothetical situation where telling the truth puts you in danger, but you read this offer, think “well, I am telling the truth, so they will protect my anonymity”, and truthfully describe your version of events. Unluckily for you, your opponent lied, and was more convincing than you. Afterwards, because your story contradicts the accepted version of events, it looks like you were the one lying, unfairly accusing people who are deemed innocent. As punishment for “seriously lying”, your identity is exposed.
If people with sensitive information suspect that something like this could happen, then it defeats the purpose of the proposal.
Yeah, that seems like a big potential flaw. (Which could just mean, no one should stick their neck out like that.) I’m imagining that there’s only potential benefit here in cases where the accounter also has strong trust in the poster, such that they think the poster almost certainly won’t be falsely convinced that a truth is an egregious lie.
In particular, the agreement isn’t about whether the court of public opinion decides it was a lie, just the poster’s own opinion. (The poster can’t be held accountable to that by the public, unless the public changes its mind again, but the poster can at least be held accountable by the accounter.) (We could also worry that this option would only be taken by accounters with accounts that are infeasible to ever reveal as egregious lies, which would be a further selection bias, though this is sort of going down a hypothetical rabbit hole.)