I remember being quite excited when I first read about Agoric Computing. From the authors’ website:
Like all systems involving goals, resources, and actions, computation can be viewed in economic terms. This paper examines markets as a model for computation and proposes a framework—agoric systems—for applying the power of market mechanisms to the software domain. It then explores the consequences of this model and outlines initial market strategies.
Until Robin Hanson’s blog post reminded me today, I had forgotten that one of the authors of Agoric Computing is Eric Drexler, who also authored Comprehensive AI Services as General Intelligence, which has stirred a lot of recent discussions in the AI safety community. (One reason for my excitement was that I was going through a market-maximalist phase, due to influences from Vernor Vinge’s anarcho-capitalism, Tim May’s crypto-anarchy, as well as a teacher who was a libertarian and a big fan of the Austrian school of economics.)
Here’s a concrete way that Agoric Computing might work:
For concreteness, let us briefly consider one possible form of market-based system. In this system, machine resources (storage space, processor time, and so forth) have owners, and the owners charge other objects for use of these resources. Objects, in turn, pass these costs on to the objects they serve, or to an object representing the external user; they may add royalty charges, and thus earn a profit. The ultimate user thus pays for all the costs directly or indirectly incurred. If the ultimate user also owns the machine resources (and any objects charging royalties), then currency simply circulates inside the system, incurring computational overhead and (one hopes) providing information that helps coordinate computational activities.
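To make the mechanics concrete, here is a rough sketch of the charging chain the quote describes. This is my own illustration, not code from the paper; the `ResourceOwner` and `ServiceObject` classes and all the numbers are hypothetical.

```python
# Minimal sketch of the accounting chain: objects buy resources from owners,
# pass the cost on to their callers, and add a royalty as profit.

class ResourceOwner:
    def __init__(self, price_per_unit):
        self.price_per_unit = price_per_unit
        self.revenue = 0.0

    def charge(self, units):
        cost = units * self.price_per_unit
        self.revenue += cost
        return cost


class ServiceObject:
    def __init__(self, owner, royalty=0.10):
        self.owner = owner        # where this object buys CPU/storage
        self.royalty = royalty    # markup kept as profit
        self.profit = 0.0

    def serve(self, units_needed):
        cost = self.owner.charge(units_needed)
        price = cost * (1 + self.royalty)
        self.profit += price - cost
        return price              # billed to the calling object or end user


# The end user ultimately pays every cost incurred along the chain.
cpu = ResourceOwner(price_per_unit=0.001)
lookup = ServiceObject(cpu, royalty=0.10)
bill = lookup.serve(units_needed=500)   # 0.55: 0.50 in resources + 0.05 royalty
```

If the user also owns `cpu`, the 0.50 simply circulates back to them, as the quote notes; only the royalty and the overhead of the accounting itself are "real."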
When later it appeared as if Agoric Computing wasn’t going to take over the world, I tried to figure out why, and eventually settled upon the answer that markets often don’t align incentives correctly for maximum computing efficiency. For example, consider an object whose purpose is to hold onto some valuable data in the form of a lookup table and perform lookup services. For efficiency you might have only one copy of this object in a system, but that makes it a monopolist, so if the object is profit maximizing (e.g., running some algorithm that automatically adjusts prices so as to maximize profits) then it would end up charging an inefficiently high price. Objects that might use its services are incentivized to try to do without the data, or to maintain an internal cache of past data retrieved, even if that’s bad for efficiency.
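Here is a toy model of that dynamic, purely for illustration (the caching costs and the price-search rule are made up): a single profit-maximizing lookup object searches for its best price and ends up serving only the caller with the highest caching cost, while everyone else falls back to caching.

```python
# Toy illustration of the incentive problem: the lookup object adjusts its
# price to maximize profit, while callers cache internally whenever the
# price exceeds their own caching cost.

def demand(price, callers):
    """Lookups bought at a given price: each caller whose caching cost
    exceeds the price uses the service; the rest cache."""
    return sum(1 for caching_cost in callers if caching_cost > price)

callers = [0.2, 0.5, 1.0, 2.0, 4.0]   # hypothetical per-lookup caching costs
marginal_cost = 0.1                   # actual cost of serving one lookup

# The monopolist searches for the profit-maximizing price...
best_price = max(
    (p / 100 for p in range(1, 501)),
    key=lambda p: (p - marginal_cost) * demand(p, callers),
)

# ...which lands near 3.99, so only one caller buys and the other four
# maintain caches, even though serving everyone at ~0.1 would be cheaper
# for the system as a whole.
print(best_price, demand(best_price, callers))
```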
Suppose this system somehow came into existence anyway. A programmer would likely notice that it would be better if the lookup table and its callers were merged into one economic agent, which would eliminate the inefficiencies described above. But that merged agent would itself still be a monopolist (unless you inefficiently maintained multiple copies of it), so they’d want to merge it with its callers in turn, and so on.
My curiosity stopped at that point and I went on to other interests, but now I wonder if that is actually a correct understanding of why Agoric Computing didn’t become popular. Does anyone have any insights to offer on this topic?
The limiting factor on something being charged for as a utility is that it has to be mature enough and well enough understood that the underlying architecture won’t change (and thus leave all the consumers of that utility with broken products). We’ve now basically gotten there with storage, and computing time is next on the chopping block as the next wave of competitive advantage comes from moving to serverless architecture.
Once serverless becomes the de facto standard, the next step will be to commoditize particular common functions (starting with obvious ones like user login, permission systems, etc.). Once these functions begin to be commoditized, you essentially have an agoric computing architecture for webapps. The limiting factor is simply the technological breakthroughs, evolution of practice, and understanding of customer needs that allowed first storage, then compute, and eventually common functions to become commoditized. Understanding S-curves and Wardley mapping is key here to understanding the trajectory.
One obstacle has been security. To develop any software that exchanges services for money, you need to put substantially more thought into the security risks of that software, and you probably can’t trust a large fraction of the existing base of standard software. Coauthor Mark S. Miller has devoted lots of effort to replacing existing operating systems and programming languages with secure alternatives, with very limited success.
One other explanation that I’ve wondered about involves conflicts of interest. Market interactions are valuable mainly when they generate cooperation among agents who have divergent goals. Most software development happens in environments where there’s enough cooperation that adding market forces wouldn’t provide much value via improved cooperation. I think that’s true even within large companies. I’ll guess that the benefits of the agoric approach only become interesting when large numbers of companies switch to using it, and there’s little reward to being the first such company.
It seems like market forces could even actively damage existing cooperation. While I’m not terribly familiar with the details, I’ve heard complaints of this happening at one university that I know of. There’s an internal market where departments need to pay for the spaces they use within university buildings. As a result, rooms that would otherwise be used sit empty because the benefit isn’t worth the rent.
Possibly this is still worth it overall (if the system increases the amount of spare capacity, more spaces are available when a department really does need one), but people do seem to complain about it anyway.
This is confusing. Why doesn’t the rent on the empty rooms fall until there are either no empty rooms or no buyers looking to use rooms? Any kind of auction mechanism (which is what I’d expect to see from something described as a “market”) should exhibit the behavior I’ve described.
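For instance, even a crude descending-price mechanism (the numbers below are made up) should clear in exactly that way, stopping only when every room is taken or every remaining department values a room at less than the current rent:

```python
# Sketch of the clearing behavior I'd expect from something called a "market":
# rent drops until every room is taken or no priced-out buyers remain.

rooms = 5
willingness_to_pay = [120, 90, 60, 30]   # hypothetical per-department values
rent = 150.0

while rent > 0:
    demand = sum(1 for v in willingness_to_pay if v >= rent)
    if demand >= rooms or demand == len(willingness_to_pay):
        break        # either no empty rooms, or no remaining buyers priced out
    rent -= 5        # descending-price adjustment

print(rent, demand)  # clears at 30.0 with all 4 departments renting a room
```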
Those concerns would have slowed adoption of agoric computing, but they seem to apply to markets in general, so they don’t seem useful in explaining why agoric computing is less popular than markets in other goods/services.
Note that market economies aren’t pure in any other realm either. They work well only for some scales and processes, and only when there are functioning command or obligation frameworks that adjoin the markets (in government and cultural norms “above” the market, in family and cultural norms “below” it, and in non-market competition and cooperation at the subpersonal level). We actually have well-functioning markets for compute resources, just at a somewhat coarser level (but getting finer: AWS sells compute for $0.00001667 per GB-second, and makes it easy to write functions that use this compute resource to calculate whether to use more or less of it in the future) than Agoric Computing envisions.
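As a back-of-the-envelope illustration of that rate (compute portion only; real Lambda pricing also includes a per-request fee and varies by region), the same function can price out its own configurations:

```python
# Rough cost comparison using the quoted rate of $0.00001667 per GB-second.

PRICE_PER_GB_SECOND = 0.00001667

def compute_cost(memory_gb, duration_s, invocations):
    return memory_gb * duration_s * invocations * PRICE_PER_GB_SECOND

# One million invocations of a 512 MB function running for 200 ms:
current = compute_cost(0.5, 0.2, 1_000_000)       # ~ $1.67

# More memory often means more CPU and a shorter run; the function itself
# can compare the two configurations and pick the cheaper one.
candidate = compute_cost(1.0, 0.12, 1_000_000)    # ~ $2.00
use_bigger_configuration = candidate < current    # False here
```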
I suspect the root cause is that many of the decisions are outside the modeling of the price/purchase system, and the inefficiency of actually having the market infrastructure (ability to offer, counteroffer, accept, perform, and pay, across time and with negotiation of penalties for failure) outweighs the inefficiency of a command economy.
I also suspect that the knowledge problem (what do participants want, and how do you measure the level of those preferences) is much reduced when the software doesn’t actually have any preferences of its own, only what the programmers/masters have specified.
Alternately, perhaps this is more integrated into current thinking than we realize, and we just didn’t notice it because “the market” is bigger than we thought, and automatically incorporated (and was overwhelmed by) the larger sphere of human market interactions. Finding and tuning cost functions for algorithms to minimize is a big deal. However, there’s so much impact from reducing cost on a macro scale that reducing cost by making software calculations more efficient is lost in the noise.
I don’t think the features of the theoretical system were particularly relevant. I can see several reasons why this wouldn’t take off, and no reasons why it would. For example:
This part raises a few red flags. We were pretty terrible at asynchronous anything in 2001; the only successful example I know of is communication systems running Erlang, and Erlang is inefficient at math so cost functions would have had a lot of overhead. Further, in the meantime a lot of development effort was put into exploring alternatives to the model he proposes, which we see now in things like practical Haskell and newer languages like Rust or Julia.
Further, we’ve gotten quite good at getting the value such a system proposes; we just write programs that manage the resources. For example, I do tech support for Hadoop, the central concept of which is managing trade-offs between storage and compute, and Google used DeepMind to manage energy usage in its datacenters. Cloud computing is basically agoric computing at the application level.
In order for Agoric computing to be popular, there would need to be clear benefits to a lot of stakeholders that would exceed the costs of doing a complete re-write of everything. In a nutshell, it looks to me like Drexler was suggesting we should re-write all software under a market paradigm when we can get most of the same value by writing software under the current—or additional—paradigm(s) and just adding a few programs which optimize efficiency and provide direct trade-offs.
An alternative, more abstract way of thinking about the problem: it is hard to create a market where there aren’t currently any transactions. Until quite recently, transactions were only located where the software was sold and where it was written.
I think the effort would have been more successful if the question had been not “how do we make software use market transactions” but rather “how do we extend market transactions into how software works,” because then it would be clear that we need to approach it from one end or the other: to get software to use transactions, we would need to make either software production transactions or software consumption transactions more granular. The current trend is firmly on the latter side.
Agoric Computing seems like a new name given to a very common mechanism employed by many programs in the software industry for decades. It is quite common to want to balance the use of resources such as time, memory, disk space, etc. Accurately estimating these things ahead of their use may consume substantial resources by itself. Instead, a much simpler formula is associated with each type of resource usage and stands as a proxy for the actual cost. Some kind of control program uses these cost functions to decide how best to allocate tasks and use actual resources. The algorithms to compute costs and manipulate the market can be as simple or as complex as the designer desires.
This control program can be thought of as an operating system but it might also be done in the context of tasks within a single process. This might result in markets within markets.
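A minimal sketch of that kind of control program, with made-up proxy prices and tasks (not modeled on any particular system):

```python
# Each task carries a cheap proxy cost per resource, and a scheduler "buys"
# whatever looks cheapest rather than measuring real usage up front.

resources = {"cpu": 1.0, "memory": 0.4, "disk": 0.1}   # proxy price per unit

tasks = [
    {"name": "sort",    "cpu": 8, "memory": 2, "disk": 1},
    {"name": "join",    "cpu": 3, "memory": 6, "disk": 2},
    {"name": "archive", "cpu": 1, "memory": 1, "disk": 9},
]

def proxy_cost(task, prices):
    """Stand-in for real measurement: a simple weighted sum per resource."""
    return sum(prices[r] * task[r] for r in prices)

# The "control program": run the cheapest tasks first, then raise the price
# of whichever resource each task leaned on hardest, so later allocation
# decisions shift away from the now-contended resource (a market within
# the scheduler).
for task in sorted(tasks, key=lambda t: proxy_cost(t, resources)):
    print(task["name"], round(proxy_cost(task, resources), 2))
    heaviest = max(resources, key=lambda r: task[r] * resources[r])
    resources[heaviest] *= 1.1
```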
I doubt many software engineers would think of these things in terms of the market analogy. For one thing, they would gain little by constraining their thinking to a market-based system. I suspect many software engineers might be fascinated to think of such things in terms of markets, but only for curiosity’s sake. I don’t see how this point of view really solves any problems for which they don’t already have a solution.