I think there exists a generic risk of a laundering problem. If you say “capitalism is suboptimal” or “we can do better,” people worry about Trojan horses: that you’re just toning it down to garner mainstream support, and that behind closed doors you’d sound more like “my specific flavor of communism is definitely the solution.” I’m not at all saying I got those vibes from the “transformation of capitalism” post, but I think it’s plausible someone could get those vibes from it. Notably, the book “Inadequate Equilibria” was explicitly about how capitalism is suboptimal and rigorously asked whether we can improve on it, and it managed not to raise anybody’s alarms about being a secret communist plot. I’d guess that’s because it signaled against such a reading by adopting the aesthetic and vocabulary of academic econ.
Our first public communications probably over-emphasized one aspect of our thinking, which is that some types of bad outcomes from markets (or outcomes that are bad on some people’s preferences) can be thought of as missing components of the objective function that those markets are systematically optimizing for. The corollary of that absolutely isn’t that we should dismantle markets or capitalism, but that we should take an algorithmic approach to whether and how to add those missing incentives.
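To make that framing concrete, here’s a toy sketch (our own illustration with made-up numbers, not a model of any real market): a market that optimizes only the surplus it actually prices settles at a different production level than one whose objective includes an unpriced harm.

```python
# Toy illustration (made-up numbers): a "market" chooses a production level q
# to maximize the surplus it actually prices, versus the same objective with a
# missing externality term added back in.

def priced_surplus(q):
    # Benefit minus private cost: 10q - q^2, which peaks at q = 5.
    return 10 * q - q ** 2

def externality(q):
    # Harm the market does not price (e.g. pollution): 4 per unit produced.
    return 4 * q

def full_objective(q):
    # The objective with the missing component restored.
    return priced_surplus(q) - externality(q)

grid = [q / 10 for q in range(101)]
market_q = max(grid, key=priced_surplus)     # incomplete objective -> 5.0
corrected_q = max(grid, key=full_objective)  # completed objective  -> 3.0
print(market_q, corrected_q)
```

The “algorithmic approach” question is then which mechanism best adds the missing term back, and at what cost.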
A point that we probably under-emphasized at first is that intervening in market systems (whether through governmental mechanisms like taxes, subsidies, or regulation, or through private-sector mechanisms like ESG objectives or product-labeling schemes) has a significant chance of creating bad and unintended consequences via Goodhart’s law and other processes, and that these failures can be viewed as deeply analogous to AI safety failures.
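As a minimal sketch of that failure mode (a standard noisy-proxy selection toy model, with made-up numbers rather than data about any real intervention): select hard enough on a proxy and the correlation that justified the proxy partially breaks.

```python
import random

# Goodhart toy model: a regulator scores actors on a proxy that correlates
# with the true goal across the whole population, then selects hard on it.
random.seed(0)

# true quality = signal; proxy = signal + independent, gameable noise
scored = []
for _ in range(10_000):
    signal = random.gauss(0, 1)
    noise = random.gauss(0, 1)
    scored.append((signal, signal + noise))

top = sorted(scored, key=lambda pair: pair[1], reverse=True)[:100]

avg_proxy_top = sum(proxy for _, proxy in top) / len(top)
avg_true_top = sum(signal for signal, _ in top) / len(top)
print(avg_proxy_top, avg_true_top)
# The selected group looks roughly twice as good on the proxy as it is on the
# true goal: much of an extreme proxy score is just exploited noise.
```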
We think that people with left- and right-leaning perspectives on economic policy disagree in part because they hold different Bayesian priors about the relative likelihood of something going wrong in the world because markets fail to optimize for the right outcome, or because some bureaucracy tried to intervene in people’s lives or in market processes in an unintentionally (or deliberately) harmful way.
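A stylized version of that disagreement (numbers purely illustrative): run the same evidence through Bayes’ rule under different priors over “this bad outcome was a market failure (M) vs. a regulatory failure (R),” and the two sides land on opposite sides of even odds.

```python
# Illustrative only: same evidence, different priors, different posteriors.

def posterior_market_failure(prior_m, p_e_given_m, p_e_given_r):
    # Bayes' rule with two exhaustive hypotheses: market failure (M)
    # vs. regulatory failure (R).
    joint_m = prior_m * p_e_given_m
    joint_r = (1 - prior_m) * p_e_given_r
    return joint_m / (joint_m + joint_r)

# Evidence mildly favors "market failure": P(e|M) = 0.6, P(e|R) = 0.4.
print(posterior_market_failure(0.7, 0.6, 0.4))  # left-leaning prior  -> ~0.78
print(posterior_market_failure(0.3, 0.6, 0.4))  # right-leaning prior -> ~0.39
```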
To us it seems very likely that both kinds of bad outcomes occur at some rate, and the goal of the AI Objectives Institute is to reduce rates of both market and regulatory failures. Of course there are also political disagreements about what goals should be pursued (which I’d call object-level politics, and which we’re trying not to take strong organizational views on) and about how economic goals should be chosen (where we may be taking particular positions, but we’ll try to do that carefully).
some types of bad outcomes from markets (or outcomes that are bad on some people’s preferences) can be thought of as missing components of the objective function that those markets are systematically optimizing for.
This framing doesn’t make a lot of sense to me. From my perspective, markets are unlike AI in that there isn’t a place in a market’s “source code” where you can set or change an objective function. A market is just a group of people, each pursuing their own interests, conducting individual voluntary trades. Bad outcomes of markets come not from wrong objective functions given by some designers, but are instead caused by game theoretic dynamics that make it difficult or impossible for a group of people pursuing their own interests to achieve Pareto efficiency. (See The Second Best for some pointers in this direction.)
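To make the contrast concrete, consider the textbook case (the standard prisoner’s dilemma, which I’m using as an illustration rather than attributing to you): no one writes down an objective function, every choice is individually rational, and the outcome is still Pareto-dominated.

```python
# Standard prisoner's dilemma: payoffs[my_move][their_move] -> my payoff.
# Moves: "C" = cooperate, "D" = defect.
payoffs = {
    "C": {"C": 3, "D": 0},
    "D": {"C": 5, "D": 1},
}

def best_response(their_move):
    return max(payoffs, key=lambda my_move: payoffs[my_move][their_move])

# Defecting is each player's best response no matter what the other does...
assert best_response("C") == "D"
assert best_response("D") == "D"

# ...so (D, D) with payoffs (1, 1) is the equilibrium, even though (C, C)
# with payoffs (3, 3) would leave both players strictly better off. There is
# no objective function here to edit; the bad outcome falls out of the
# strategic structure.
print(payoffs["D"]["D"], payoffs["C"]["C"])
```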
Can you try to explain your perspective to someone like me, or point me to any existing writings on this?
To us it seems very likely that both kinds of bad outcomes occur at some rate, and the goal of the AI Objectives Institute is to reduce rates of both market and regulatory failures.
There is a large literature in economics on both market failures and government/regulatory failures. How familiar are you with it, and how does your approach compare with the academic mainstream on these topics?