Was following the principle of not linking to things I consider negative. Considered not even talking about it for the same reason.
Their principle is to bring AI ‘under democratic control’ and then use that control to force AI to enforce their political agenda. I find this negative regardless of the payload (no one should be doing this; it both poisons the banana further and ensures the fight is over which monkey gets it), and I am also strongly opposed to the payload in question (so if we are going to fight, which they seem determined to do, they are also the wrong monkeys).
Hi!
I’m a co-founder of the AI Objectives Institute. We’re pretty interested in the critical view you’ve formed about what we’re working on! We think it’s most likely that we just haven’t done a very good job of explaining our thinking yet—you say we have a political agenda, but as a group we’re trying quite hard to avoid having an inherent object-level political agenda, and we’re actively looking for input from people with different political perspectives than ours. It’s also quite possible that you have deep and reasonable criticisms of our plan that we should take on board. Either way, we’d be interested in having a conversation, synchronizing models, and looking for cruxes for any disagreements, if you’re open to it!
Sorry I didn’t reply earlier, I’ve been busy. I would be happy to have a call at some point; you can PM me whatever contact info works best.
I do think we have disagreements beyond a political agenda, but it is always possible communication fell short somehow.
If you don’t have a political agenda, I would say your communications seem highly misleading, in the sense that they seem to clearly indicate one.
I think there exists a generic risk of a laundering problem. If you say “capitalism is suboptimal” or “we can do better”, people worry about trojan horses: that you’re just toning it down to garner mainstream support, when behind closed doors you’d sound more like “my specific flavor of communism is definitely the solution”. I’m not at all saying I got those vibes from the “transformation of capitalism” post, but I think it’s plausible someone could get those vibes from it. Notably, the book “Inadequate Equilibria” was explicitly about how capitalism is suboptimal and rigorously asks whether we can improve upon it, yet it managed not to raise anybody’s alarms about being a secret communist plot, I guess because it signaled against such a reading by adopting the aesthetics and vocabulary of academic econ.
Our first public communications probably over-emphasized one aspect of our thinking, which is that some types of bad (or bad on some people’s preferences) outcomes from markets can be thought of as arising from missing components of the objective function that those markets are systematically optimizing for. The corollary of that absolutely isn’t that we should dismantle markets or capitalism, but that we should take an algorithmic approach to whether and how to add those missing incentives.
A point that we probably under-emphasized at first is that intervening in market systems (whether through governmental mechanisms like taxes, subsidies or regulation, or through private sector mechanisms like ESG objectives or product labeling schemes) has a significant chance of creating bad and unintended consequences via Goodhart’s law and other processes, and that these failures can be viewed as deeply analogous to AI safety failures.
We think that people with left- and right-leaning perspectives on economic policy disagree in part because they hold different Bayesian priors about the relative likelihood of something going wrong in the world because markets fail to optimize for the right outcome, versus because some bureaucracy tried to intervene in people’s lives or in market processes in an unintentionally (or deliberately) harmful way.
To us it seems very likely that both kinds of bad outcomes occur at some rate, and the goal of the AI Objectives Institute is to reduce the rates of both market and regulatory failures. Of course there are also political disagreements about what goals should be pursued (which I’d call object-level politics, and which we’re trying not to take strong organizational views on) and about how economic goals should be chosen (where we may be taking particular positions, but we’ll try to do that carefully).
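An illustrative aside (not part of the original exchange): here is a minimal Python sketch, with entirely made-up numbers, of the two claims above, namely a missing externality term shifting a private optimum away from the fuller objective, and a Goodhart-style failure when a gameable proxy is subsidized instead.

```python
# Purely illustrative toy model; all numbers and functional forms are invented.
import numpy as np

# A "producer" chooses an output level q to maximize private profit.
# True social value also depends on pollution, which the private objective ignores.
price, unit_cost = 10.0, 4.0
pollution_per_unit, harm_per_pollution = 2.0, 1.5

def profit(q):
    # Private objective: revenue minus cost, with mildly increasing marginal cost.
    return (price - unit_cost) * q - 0.05 * q ** 2

def social_value(q):
    # The "fuller" objective adds the missing externality term.
    return profit(q) - harm_per_pollution * pollution_per_unit * q

qs = np.linspace(0, 100, 1001)
q_private = qs[np.argmax(profit(qs))]
q_social = qs[np.argmax(social_value(qs))]
print(f"output chosen under the private objective: {q_private:.1f}")  # ~60.0
print(f"output maximizing the fuller objective:    {q_social:.1f}")   # ~30.0

# Goodhart-style failure: suppose a regulator subsidizes *reported* abatement.
# If reports are cheap to inflate, optimizing the proxy diverges from the goal.
def proxy_value(q, reported_abatement):
    return profit(q) + 1.0 * reported_abatement

print(f"proxy value with honest report:   {proxy_value(q_private, 10.0):.1f}")
print(f"proxy value with inflated report: {proxy_value(q_private, 1000.0):.1f}")
```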
some types of bad (or bad on some people’s preferences) outcomes from markets can be thought of as arising from missing components of the objective function that those markets are systematically optimizing for.

This framing doesn’t make a lot of sense to me. From my perspective, markets are unlike AI in that there isn’t a place in a market’s “source code” where you can set or change an objective function. A market is just a group of people, each pursuing their own interests, conducting individual voluntary trades. Bad outcomes of markets come not from wrong objective functions given by some designers, but from game-theoretic dynamics that make it difficult or impossible for a group of people pursuing their own interests to achieve Pareto efficiency. (See The Second Best for some pointers in this direction.)
Can you try to explain your perspective to someone like me, or point me to any existing writings on this?
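To make the game-theoretic point concrete, here is a minimal sketch with invented payoffs (my example, not one from the thread): two firms each best-respond to the other, the unique Nash equilibrium is Pareto-dominated, and there is no objective function anywhere for a designer to edit.

```python
# Invented payoffs; a textbook externality game, not a claim about any real market.
from itertools import product

ACTIONS = ["abate", "pollute"]

def payoff(own, other):
    # Polluting raises your own payoff by 3 but imposes a cost of 4 on the other firm.
    return 10 + (3 if own == "pollute" else 0) - (4 if other == "pollute" else 0)

profiles = list(product(ACTIONS, repeat=2))

def is_nash(profile):
    # Nash equilibrium: no player gains by unilaterally switching their own action.
    for i in range(2):
        for alt in ACTIONS:
            switched = list(profile)
            switched[i] = alt
            if payoff(switched[i], switched[1 - i]) > payoff(profile[i], profile[1 - i]):
                return False
    return True

def is_pareto_dominated(profile):
    # Some other profile leaves every player at least as well off and someone better off.
    u = (payoff(profile[0], profile[1]), payoff(profile[1], profile[0]))
    for other in profiles:
        v = (payoff(other[0], other[1]), payoff(other[1], other[0]))
        if v != u and v[0] >= u[0] and v[1] >= u[1]:
            return True
    return False

for p in profiles:
    tags = [name for name, ok in [("Nash equilibrium", is_nash(p)),
                                  ("Pareto-dominated", is_pareto_dominated(p))] if ok]
    print(p, "->", ", ".join(tags) or "neither")

# The only Nash equilibrium, (pollute, pollute), pays each firm 9, while
# (abate, abate) would pay each 10: a bad outcome from individually rational
# choices, with no designer and no objective function anyone could have edited.
```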
To us it seems very likely that both kinds of bad outcomes occur at some rate, and the goal of the AI Objectives Institute is to reduce the rates of both market and regulatory failures.

There is a big literature in economics on both market and government/regulatory failures. How familiar are you with it, and how does your approach compare with the academic mainstream on these topics?
Was following the principle of not linking to things I consider negative.

What’s the thinking behind this? (Would putting the link in parentheses or a footnote work for your purposes? I’m just thinking of the amount of time being wasted by your readers trying to find out what the institute is about.)
Their principle is to bring AI ‘under democratic control’ and then use that control to force AI to enforce their political agenda

Ok, thanks. I guess one of their webpages does mention “through democratic consultation”, but that didn’t jump out as very salient to me until now.
In the internet age, attention is like oxygen or life. That’s especially true for a charity, but everyone lives on clicks and views, and a common strategy is to trick people into going ‘hey check out this awful thing.’
If they hadn’t been funded, or their name had been obscured by the veto, I wouldn’t have included their name at all (as I didn’t for several others that I mention briefly but that weren’t funded).
In that case, perhaps copy/paste a longer description of the organization in a footnote, so the reader can figure out what the organization is trying to do, without having to look them up?
This makes some sense. On the other hand, not naming such organizations means you can’t share your skepticism about specific organizations with the rest of us, who might benefit from hearing it.