I think the utility function and probability framework from VNM rationality is a very important kernel of math that constrains “any possible agent that can act coherently (as a limiting case)”.
((I don’t think of the VNM stuff as the end of the story at all, but it is an onramp to a larger theory that you can motivate and teach in a lecture or three to a classroom. There’s no time in the VNM framework. Kelly doesn’t show up, and the tensions and pragmatic complexities of trying to apply either VNM or Kelly to the same human behavioral choices in real life and have that cause your life to really go better are non-trivial!))
With that “theory which relates to an important agentic process” as a background, I have a strong hunch that Dominant Assurance Contracts (DACs) are conceptually important in a similarly deep way.
I think that “theoretical DACs” probably constrain all possible governance systems that “collect money to provide public services” where the governance system is bounded by some operational constraint like “freedom” or “non-tyranny” or “the appearance of non-tyranny” or maybe “being limited to organizational behavior that is deontically acceptable behavior for a governance system” or something like that.
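For readers who haven't seen the DAC math: here is a minimal sketch of the payoff structure, roughly following Tabarrok's simplest case, where all N offered agents must accept for the good to be produced. All names and numbers are illustrative assumptions, not anything from the post or this comment.

```python
# Sketch of a Dominant Assurance Contract (DAC) payoff. Assumption:
# the entrepreneur offers the contract to N agents and produces the
# public good only if ALL N accept; on failure, acceptors are refunded
# their pledge plus a bonus, while refusers get nothing.

def dac_payoff(i_accept: bool, others_accepting: int, n_others: int,
               value: float, price: float, bonus: float) -> float:
    """Payoff to one agent who values the public good at `value`.

    If every agent accepts, the good is produced: acceptors pay
    `price` and everyone enjoys `value`. Otherwise the contract
    fails and acceptors keep the `bonus` (net of their refund).
    """
    if i_accept and others_accepting == n_others:
        return value - price            # good provided; pay your share
    return bonus if i_accept else 0.0   # failure: only acceptors get the bonus

# With value > price > 0 and bonus > 0, accepting strictly dominates
# refusing, whatever the other agents do:
N_OTHERS = 2
for others in range(N_OTHERS + 1):
    accept = dac_payoff(True, others, N_OTHERS, value=10, price=6, bonus=1)
    refuse = dac_payoff(False, others, N_OTHERS, value=10, price=6, bonus=1)
    assert accept > refuse
```

The bonus is what turns an ordinary assurance contract (where contributing is merely an equilibrium) into one where contributing is a dominant strategy, which is why the mechanism is a candidate bound on non-coercive public-goods funding.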
In the case of DACs, the math is much less widely known than VNM rationality. LessWrong has a VNM tag that comes up a lot, but the DAC tag has less love. And in general, the applications of DACs to “what an ideal tax-collecting service-providing governance system would or could look like” aren’t usually drawn out explicitly.
However, to me, there is a clear sense in which “the Singularity might well produce a single AI that is mentally and axiologically unified as a sort of ‘single thing’ that is ‘person-shaped’, and yet it might also be vast, and (if humans still exist after the Singularity) would probably provide endpoint computing services to humans, kinda like the internet or kinda like the government does”.
And so in a sense, if a Singleton comes along who can credibly say “The State: it is me” then the math of DACs will be a potential boundary case on how ideal such Singletons could possibly work (similarly to how VNM rationality puts constraints on how any agent could work) if such Singletons constrained themselves to preference elicitation regimes that had a UI that was formal, legible, honest, “non-tyrannical”, etc.
That is to say, I think this post is important, and since it has been posted here for 2 days and only has 26 upvotes at the time I’m writing this comment, I think the importance of the post is not intelligible to most of the potential audience!
Thanks for the comment! I do think DACs are an important economics idea. This post details the main reason why I don’t think they can raise a lot of money (compared with copyright etc) under most realistic conditions, where it’s hard to identify lots of people who value the good at above some floor. AGI might have an easier time with this sort of thing through better predictions of agents’ utility functions, and open-source agent code.