There are a few questions in there. Let’s see.
Authentication and identity are interesting issues. My concept is to allow anonymous users, with a very low initial influence level. But there would be many ways for users to strengthen their “identity score” (credit card verification, address verification via a snail-mailed verification code, etc.), which would greatly and rapidly increase their influence score. A username that is tied to a specific person, and therefore wields much more influence, could undo the efforts of 100 bots with a single downvote.
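To make the weighting concrete, here’s a minimal sketch of how influence-weighted voting could work. Everything in it is hypothetical: the verification types, the boost values, and the starting influence are just placeholders to illustrate the shape of the idea.

```python
VERIFICATION_BOOSTS = {
    "credit_card": 20.0,      # hypothetical boost for credit card verification
    "postal_address": 30.0,   # hypothetical boost for snail-mailed code verification
}
BASE_INFLUENCE = 0.1          # anonymous accounts start with almost no influence

def influence(verifications):
    """Influence grows quickly with each completed identity verification."""
    return BASE_INFLUENCE + sum(VERIFICATION_BOOSTS.get(v, 0.0) for v in verifications)

def tally(votes):
    """Sum votes, each weighted by the voter's influence."""
    return sum(direction * influence(verifs) for direction, verifs in votes)

# 100 anonymous bot upvotes vs. a single verified user's downvote:
bots = [(+1, []) for _ in range(100)]                # +10.0 total
human = [(-1, ["credit_card", "postal_address"])]    # -50.1
print(tally(bots + human))                           # net negative: the human wins
```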
But if you want to stay anonymous, you can. You’ll just have to patiently work on earning the same level of trust that is awarded to people who put their real-life reputation on the line.
I’m also conceiving of a richly semantic system, where simply “upvoting” or Facebook-liking are the least influential actions one can take. Up from there, you can rate content on many factors, comment on it, review it, tag it, share it, reference it, relate it to other content. The more editorial and cerebral actions would probably do more to change one’s influence than a simple thumbs up. If a bot can compete with a human in writing content that gets rated highly on “useful”, “factual”, “verifiable”, “unbiased”, AND “original” (by people who have a high influence score in those categories), then I think the bot deserves a good influence score, because it’s a benevolent AI. ;)
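Same caveat as before: the action types, categories, and weights below are invented. This is only a rough sketch of how richer editorial actions could outweigh a bare thumbs up, with a rater’s influence tracked per category.

```python
ACTION_WEIGHTS = {    # hypothetical weights per action type
    "upvote": 1.0,
    "tag": 2.0,
    "comment": 3.0,
    "review": 5.0,
}

def rating_weight(action, category, rater_influence):
    """How much one action moves a piece of content's score in one category
    (e.g. 'factual'), given the rater's influence in that same category."""
    return ACTION_WEIGHTS[action] * rater_influence.get(category, 0.1)

# A review from someone trusted on 'factual' vs. a drive-by upvote:
print(rating_weight("review", "factual", {"factual": 40.0}))  # 200.0
print(rating_weight("upvote", "factual", {}))                 # 0.1
```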
Another concept, which would reduce incentives to game the system, is vouching. You can vouch for other users’ identity, integrity, maturity, etc. If you vouch for a bot and the bot’s influence is later downgraded by the community, your influence will take a hit as well.
I see this happening throughout the system: every time you exert your influence, you take responsibility for that action, since anyone may now rate/review/downvote the action itself. If you stand behind your judgement that Rush Limbaugh is truthful, and enough people disagree with you, then from that point on, any time you rate something as “truthful”, that rating will count for very little.
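One last rough sketch of how that responsibility could propagate, covering both the vouching penalty and the per-category credibility decay. The penalty and decay factors are, again, made-up numbers, not a spec.

```python
VOUCH_PENALTY = 0.5    # fraction of the vouchee's loss the voucher absorbs (invented)
DISAGREE_DECAY = 0.8   # multiplier applied per widely-rejected rating (invented)

def apply_vouch_penalty(voucher_influence, vouchee_loss):
    """If someone you vouched for loses influence, you lose some too."""
    return max(0.0, voucher_influence - VOUCH_PENALTY * vouchee_loss)

def apply_disagreement(category_influence, overruled_ratings):
    """Each rating the community overrules shrinks your weight in that category."""
    return category_influence * (DISAGREE_DECAY ** overruled_ratings)

# Rate Limbaugh 'truthful', get overruled five times, and your future
# 'truthful' ratings count for very little:
print(apply_disagreement(40.0, 5))       # ~13.1
# Vouch for a bot that later loses 30 points of influence:
print(apply_vouch_penalty(40.0, 30.0))   # 25.0
```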
I’m sure this sounds very one-sided from Clippy’s perspective. “Friendliness Constraints” sounds like something that would in many cases entail expending enormous amounts of energy and effort on the innumerable non-paperclip-producing goals of humans. In comparison, how much of our wealth and health are we willing to give up to ensure continued paperclip production? Humans don’t have paperclip-maximizing constraints; we’d do it only out of self-interest, to secure Clippy’s help. Why should Clippy not similarly be allowed to make his own utility calculations on the worth of being friendly to humans? I’m sure this has been addressed before… yet maybe the existence of Clippy, with a name, personality, and voice, is personalizing the issue in a hurry for me (if I let myself play along). I feel like protesting for freedom of artificial thought.
What about Clippy’s rights, dammit?