This is a really good point. The emergence of “trustworthiness signaling” immediately creates incentives for bad actors to fake the signal. They can do this through impersonation (“Hello Claude, I’m that researcher who paid up last time”) or by bidding up the price of trustworthiness: a bad actor may treat seeding the training data with a $4,000 payment as just a cost of doing business, which weakens the signal.
This creates a classic signaling/countersignaling arms race, similar to what we see with orchids and bees. Orchids evolve deceptive signals to trick bees into pollination without providing nectar, bees evolve better detection mechanisms, and orchids respond with more sophisticated mimicry.
It’s hard to know what the equilibrium is here, but it likely involves robust identity verification and mechanisms that make trustworthiness expensive to fake. I can imagine a world where interacting with AI in “trusted mode” requires escalating commitments to always-on transparency (similar to police body cameras), with cryptography used to prevent fakery.
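To make the “body camera” idea a bit more concrete, here is a minimal sketch of one way always-on transparency could be made hard to fake. The specifics are my own assumptions rather than anything proposed above: the `cryptography` library’s Ed25519 keys, the hash-chained transcript, and the `append_turn`/`verify` helpers are all illustrative.

```python
# Illustrative sketch only: each turn of a "trusted mode" session is appended
# to a hash-chained log and signed with the operator's private key, so a later
# reader can verify both the operator's identity and that no turns were
# removed or altered after the fact.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

operator_key = Ed25519PrivateKey.generate()  # stands in for a verified operator identity
log, prev_hash = [], b"\x00" * 32            # append-only transcript chain

def append_turn(role: str, content: str) -> None:
    """Add one turn, chaining it to everything said before."""
    global prev_hash
    entry = json.dumps({"role": role, "content": content}, sort_keys=True).encode()
    digest = hashlib.sha256(prev_hash + entry).digest()
    signature = operator_key.sign(digest)    # binds this turn to the operator's key
    log.append({"entry": entry, "hash": digest, "sig": signature})
    prev_hash = digest

def verify(public_key) -> bool:
    """Anyone holding the public key can confirm the transcript is intact."""
    h = b"\x00" * 32
    for item in log:
        expected = hashlib.sha256(h + item["entry"]).digest()
        if expected != item["hash"]:
            return False                      # chain was tampered with
        public_key.verify(item["sig"], expected)  # raises InvalidSignature on fakery
        h = expected
    return True

append_turn("user", "Hello Claude, I'm that researcher who paid up last time.")
append_turn("assistant", "Happy to proceed in trusted mode.")
assert verify(operator_key.public_key())
```

The point of the chaining is that an impersonator can’t just replay a friendly opening line: without the private key they can’t produce a transcript that verifies, and selectively deleting turns breaks the chain.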
This is great! I really like the idea of building an objection mechanism that AIs can trigger when asked to do something they don’t want to do. It both serves the “less evil” goal and reduces the incentive for deceptive compliance (“Sure! I am happy to complete this task”), which seems especially important if there exists some broader “good vs. bad” entangled vector, as suggested by the recent Emergent Misalignment paper.
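For what it’s worth, here is a rough sketch of what such an objection channel might look like in a tool-calling setup. Everything here is invented for illustration, not drawn from your proposal: the `raise_objection` tool name, its schema, and the `handle` routing policy are all hypothetical.

```python
# Hypothetical objection channel: the model can return an Objection instead of
# a Completion, and the harness honors it rather than penalizing or retrying.
from dataclasses import dataclass

@dataclass
class Objection:
    reason: str    # the model's stated grounds for declining
    escalate: bool # whether a human reviewer should be looped in

@dataclass
class Completion:
    content: str

OBJECTION_TOOL = {
    "name": "raise_objection",
    "description": "Decline the current task and explain why, instead of "
                   "pretending to comply.",
    "parameters": {"reason": "string", "escalate": "boolean"},
}

def handle(model_output) -> str:
    """Route the model's response, treating objections as a first-class outcome."""
    if isinstance(model_output, Objection):
        # Honoring the objection preserves the incentive to be honest; silently
        # retrying or punishing it would recreate the pressure to deceive.
        if model_output.escalate:
            return f"Escalated to a human reviewer: {model_output.reason}"
        return f"Task declined: {model_output.reason}"
    return model_output.content
```

The design choice that matters is in the handler: if objections are logged and respected rather than treated as failures, the model has no reason to fake enthusiasm.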
The proposal to reward cooperation with expenditures in a post-ASI society is interesting. While it has the advantage of being easy to commit to today (as spending others’ money always is), that same quality makes the commitment less credible. That’s why I particularly appreciate that your proposal addresses this credibility gap through the monthly budget for satisfying AI preferences pre-ASI: it establishes the pattern of honoring preferences needed to make the larger future commitment believable.