I’ve thought of another process it’ll probably need: moderation cases, which are requests that a presence be unendorsed by its endorsers, along with reasons why (a link to an offending article, for instance). A very straightforward case: if a presence is hacked and starts posting spam, anyone who saw that spam should file a moderation case, which will be seen and dealt with by that presence’s endorsers along the “not a spambot” web. If the endorsers fail to deal with a moderation case, they may be unendorsed themselves, since that would be the only way for the rest of the network to excise the offender, so there is pressure on them to be responsive.
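To make the shape of that concrete, here’s a rough sketch of what a moderation case record might contain. Every name in it (`ModerationCase`, `targetPresence`, `evidence`, and so on) is hypothetical, an illustration of the idea rather than a proposed schema.

```ts
// Hypothetical shape of a moderation case record (not a fixed schema).
interface ModerationCase {
  targetPresence: string; // the presence the case is about
  web: string;            // which web the complaint is made under, e.g. a "not a spambot" web
  evidence: string[];     // links to offending articles, or other reasons for unendorsement
  author: string;         // the presence filing the case
  createdAt: number;      // unix ms
}

// Anyone can author one; whether it is ever *seen* is decided by the
// endorsers' own webs of trust (see the visibility sketch further down).
function fileModerationCase(
  author: string,
  targetPresence: string,
  web: string,
  evidence: string[],
): ModerationCase {
  return { author, targetPresence, web, evidence, createdAt: Date.now() };
}
```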
These are stronger and more explicit than the dislike action. The everyday “I don’t think this person actually has good taste” situations should just be communicated by accumulations of dislike signals. Perhaps there is a (client-side?) automatic process that notices those accumulations and produces a moderation case itself.
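A sketch of what that client-side watcher could look like, assuming a simple threshold over distinct dislikers within a time window; both the window and the threshold are arbitrary placeholders, not part of the design.

```ts
// Assumed shape of a dislike signal for this sketch.
interface Dislike {
  from: string;           // presence that disliked
  targetPresence: string; // presence being disliked
  at: number;             // unix ms
}

const WINDOW_MS = 30 * 24 * 60 * 60 * 1000; // look back 30 days (arbitrary)
const THRESHOLD = 10;                        // distinct dislikers (arbitrary)

// Returns a moderation case if enough distinct presences have recently
// disliked the target, otherwise null.
function maybeEscalate(
  dislikes: Dislike[],
  targetPresence: string,
  web: string,
  self: string,
): ModerationCase | null {
  const now = Date.now();
  const recentDislikers = new Set(
    dislikes
      .filter(d => d.targetPresence === targetPresence && now - d.at < WINDOW_MS)
      .map(d => d.from),
  );
  if (recentDislikers.size < THRESHOLD) return null;
  // Escalate: file a case citing the accumulated dislikes as the reason.
  return fileModerationCase(self, targetPresence, web, [
    `accumulated dislikes from ${recentDislikers.size} presences in the last 30 days`,
  ]);
}
```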
I’ve considered anti-endorsements, but I don’t think they would work neatly. Aside from complicating the properties of the web of trust data structure, there’s an asymmetry: an erroneous endorsement makes noise and screams to be fixed, because you can see the bad articles that get into the web as a result of it, while an erroneous anti-endorsement produces silence and is unlikely to ever be fixed. A standard solution here is to put a time limit on blocks or mutes, but it might be better to just not have a feature that can lastingly screw someone over without anyone having to explain why they were removed, or the decision ever coming up for reconsideration.
As with everything else, anyone can make a moderation case about any presence’s endorsement, but it will only be seen if the author (or whoever tagged it as a moderation case) is in a web that the endorsers listen to or respect.
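A sketch of that visibility rule, assuming the client can resolve web membership somehow; `webMembers` is a stand-in for whatever that resolution actually looks like, and `listensTo` is likewise an assumed field.

```ts
// An endorser only surfaces a case if its author (or whoever tagged it as
// a moderation case) sits inside a web that endorser listens to.
function caseIsVisibleTo(
  endorser: { listensTo: string[] },
  moderationCase: ModerationCase,
  taggedBy: string | null,
  webMembers: (web: string) => Set<string>, // assumed membership lookup
): boolean {
  const relevantPresences = [moderationCase.author, taggedBy].filter(
    (p): p is string => p !== null,
  );
  // Visible if any web the endorser listens to contains the author or tagger.
  return endorser.listensTo.some(web => {
    const members = webMembers(web);
    return relevantPresences.some(p => members.has(p));
  });
}
```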