Thanks for sharing your work. Some comments on that:
Why Tasteweb hasn’t been made already: I think it’s mostly the fact that querying very large trust graphs is slow.
Oh, I highly doubt that the reason was the technical problem :) If anything, we should interpret it as there not having been strong enough demand for solving that technical problem, as of yet. Perhaps now there will be, due to LLM-enabled spam: Telegram has been drowning in spam for the last half a year, and Twitter and YouTube struggle, too.
Of course it would be auspicious if human self-assembling structures could outperform aggregate algorithmic predictions, but even I wouldn’t actually bet on it; I think the algorithmic side of things is important. Maybe a choice between both should be offered.
I see these systems as more complementary: Webs of Trust for moderation, filtering, and user gating (perhaps, as the key piece of decentralised content delivery platforms/networks), and algorithms for content ordering that has already passed the filter. In fact, WoT is one of the ways to do proof of personhood, and I recognise that it might be a critical foundation for [BetterDiscourse], while centralised proof-of-humanness systems such as WorldCoin may have too slow adoption.
No but I’m aware of them, what they’re doing sounds pretty cool, and yeah, it is the kind of moderation system that you need for bootstrapping big collaborative wikis.
[checks in on what they’re doing] … yeah this sounds like a good protocol, maybe the best. I should take a closer look at this. My project might be convergent with theirs. Maybe I should try to connect with them. Darn, I think what happened was I got them confused with Fission (who build a hosting system for Wasm on IPFS and develop UCAN; Subconscious uses these things and has the exact same colors in its logo), so I’ve been hanging out with Fission instead xD.
I see these systems as more complementary: Webs of Trust for moderation, filtering, and user gating (perhaps, as the key piece of decentralised content delivery platforms/networks), and algorithms for content ordering that has already passed the filter.
I was thinking the same thing. I ended up not mentioning it because it’s not immediately clear to me how users would police the introduction of non-human participants in an algorithmic context, since users are interacting less directly; if someone starts misbehaving (e.g., upvoting scam ads), it’s hard for their endorsers to debug that. Do you know how you’d approach this?
Additionally, while the tasteweb work is about making WoTs usable for subjective moderation, it seems to me that you actually need WoTs just to answer an objective question of who’s human or not (which you use to figure out which users to focus your training resources on), and then your algorithmic system does the subjective parts of moderation. Is that correct? In that case, it might make sense for you to use the existing old-fashioned O(n^2) energy-propagation algorithms; you could talk to the alignment ecosystem’s “eigenkarma network” people about that. Algorithm discussed here.

Or, I note, you could instead use multi-origin Dijkstra (O(n)) (i.e., the min Dijkstra distance from any of the known humans) to update metrics of who’s close to the network of a few confirmed human-controlled accounts.

For some reason I seem to be the only one who’s noticed that distance is an adequate metric of trust that’s also much easier to compute than the prior approaches. I think maybe everyone else is looking for guidance from the prior art, even though there is very little of it and it obviously doesn’t scale (I’m pretty sure you could get that stuff to run on a minute-long cycle for 1M users, but 10M might be too much, and it’s never getting to a billion).
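To make the multi-origin Dijkstra idea concrete, here’s a minimal sketch. The graph shape and edge weights are my assumptions (e.g. edge cost as a non-negative “distrust” weight, which could be -log of an endorsement strength), not anything specified above; the point is just that seeding the heap with all confirmed humans at distance 0 gives every node its distance to the *nearest* human in a single pass.

```python
import heapq

def trust_distance(graph, confirmed_humans):
    """For every reachable node, the cheapest path cost from the nearest
    confirmed-human account. Equivalent to taking the min over a separate
    Dijkstra run per human, but done in one pass by seeding the priority
    queue with every origin at distance 0."""
    dist = {h: 0.0 for h in confirmed_humans}
    heap = [(0.0, h) for h in confirmed_humans]
    heapq.heapify(heap)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for neighbour, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist  # nodes absent from the result are unreachable

# graph: {endorser: [(endorsee, distrust_cost), ...]}
graph = {"alice": [("bob", 1.0)], "bob": [("carol", 1.0)],
         "dave": [("carol", 0.5)]}
print(trust_distance(graph, ["alice", "dave"]))
```

Each node settles at most once, so the work is one heap operation per edge, which is what makes a minute-long (or faster) refresh cycle plausible at sizes where quadratic energy propagation is hopeless.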
Update: checked out the Subconscious protocol. It’s just okay. It doesn’t have finality. I’m fairly sure something better will come along.
I’m kind of planning on not committing to a distributed state protocol at first; maybe centralizing it while keeping all the code abstracted so it’ll be easy to switch later.
Edit: Might use it anyway, though. It is okay, and it makes it especially easy to guarantee that users will be able to switch to something more robust later. It has finality as long as you trust one of the relays (us).
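The “centralize first, keep the code abstracted” plan above could be sketched as a seam like this (all names here are hypothetical, not from any actual codebase):

```python
from abc import ABC, abstractmethod
from typing import Dict, Optional

class StateStore(ABC):
    """The only surface the rest of the app talks to, so the backend can
    later be swapped for a distributed state protocol without touching
    application code."""

    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> Optional[bytes]: ...

class CentralisedStore(StateStore):
    """First implementation: an in-memory dict standing in for a plain
    server-side database. Nothing else in the codebase knows or cares
    that it is centralised."""

    def __init__(self) -> None:
        self._data: Dict[str, bytes] = {}

    def put(self, key: str, value: bytes) -> None:
        self._data[key] = value

    def get(self, key: str) -> Optional[bytes]:
        return self._data.get(key)
```

A later `SubconsciousStore` (or any other distributed backend) would just be another subclass, which is the sense in which switching stays cheap.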
Are you in contact with https://subconscious.network/ developers? They may benefit from the algorithms that you develop.