Let's suppose we solve the problem of building a truth-seeking community that knows and discovers lots of important things, especially the answers to deep philosophical questions. And more importantly, let's say the incentives of this group were correctly aligned with human values. It would be nice to have a permanent group of people that acts as a sort of cognitive engine, dedicated to making sure that all of our efforts stay on the right track and can't be influenced by outside societal forces, public opinion, political pressure, etc. Like some sort of philosophical query machine that the people who are actually in power, or have influence and a public persona, would have to actually follow directives from – or at least, would face heavy costs if they began to do things against the wishes of this group.
This is sort of like the First versus Second Foundation. The First had all the manpower, finances, technology, and military strength, but the Second made sure everything happened according to the plan. And the First was destined to become corrupt and malign anyway, as would happen with any large and unwieldy organization that gains too much power.
The problem of course is that the Second Foundation used manipulation, mind control, and outright treachery to influence events. So how would we structure incentives so that our larger and more influential organizations actually have to follow certain directives, especially ones that could possibly change rapidly over time?
Politically this can sometimes be accomplished through democracy, or the threat of revolt, but this never gets us very close to an ideal system. Economically, it can sometimes be accomplished by consumer choice, but when an organization forms a legally-sanctioned monopoly or becomes too far separated from the consumer, there is no way to keep it aligned (see Equifax).
This is even a problem with Effective Altruist organizations, because even though most philanthropists still have options, the main thing most philanthropic organizations seek is donations from people with very high net worth, so they will mainly be influenced by the wants of those individuals, to the extent that public opinion does not matter.
And to the extent that public opinion does matter, these organizations will have to ensure that they never propose any actions too far outside of the window of social acceptability, and when they do choose to take small steps outside of this window, they may have to partially conceal or limit the transparency of these actions.
And all this has tangible effects on which projects actually get completed, which things get funded, and so on, because we absolutely do need lots of resources to accomplish good things in the world, and the people with the most control over these resources also tend to be the most visible and are already tied to lots of different incentive structures that we have almost no ability to override.
I know that LW has managed to seed some people into these organizations so that they are at least exposed to these ideas and so on, and I know that this has had some pretty positive effects, but I am also somewhat skeptical that this will be enough as EA orgs grow and become more mainstream than they are now. Every large organization must move towards greater bureaucracy and greater inertia as it grows, and if it becomes misaligned it is very difficult for it to change course. Correctly seeding these organizations seems to be the best strategy, but beyond that it is an unsolved problem.
Like some sort of philosophical query machine that the people who are actually in power, or have influence and a public persona, would have to actually follow directives from – or at least, would face heavy costs if they began to do things against the wishes of this group
Anything that is reliably influential seems like it would be attacked by individuals seeking influence. Maybe it needs to be surprisingly influential, like the surprising influence of the concents in Anathem (for those who haven't read it, there is a group of monks who are regularly shut off from the outside world and have little influence, but occasionally emerge into the real world and are super effective at getting stuff done).
I know that LW has managed to seed some people into these organizations so that they are at least exposed to these ideas and so on, and I know that this has had some pretty positive effects, but I am also somewhat skeptical that this will be enough as EA orgs grow and become more mainstream than they are now. Every large organization must move towards greater bureaucracy and greater inertia as it grows, and if it becomes misaligned it is very difficult for it to change course. Correctly seeding these organizations seems to be the best strategy, but beyond that it is an unsolved problem.
I think EA might be able to avoid stagnation if there is a healthy crop of new organisations springing up and it is not just dominated by behemoths. So perhaps expect organisations to be single-shot things, create lots of them, and then rely on the community to differentially fund them as we decide what is needed.
Interesting comment, but the way you've written it makes it sound like there is some kind of conspiracy, which does not exist and which would fail anyway if it were attempted.
To be clear, I do not believe that trying to create such a conspiracy is feasible, and I wanted to emphasize that even if it were possible, you'd still need to have a bunch of other problems already solved (like making an ideal truth-seeking community). Sometimes it seems that rationalists want to have an organization that accomplishes the maximum utilitarian good, and hypothetically, this implies that some kind of conspiracy (if you wish to call it that) would need to exist. For a massively influential and secretive conspiracy, I might assign a < 1% chance of one already existing (in which case it would be too powerful to overcome) and a greater than 99% chance of none existing (in which case it's probably impossible to succeed in creating one).
That said, to solve even just the highest-priority issues of interest to EAs, which probably won't require a massively influential and secretive conspiracy, I think you'd still need to solve the problem of aligning large organizations with these objectives, especially for things like AI, where development and deployment will mainly be carried out by the most enormous and wealthy firms. These are the kinds of organizations that can't be seeded with good intentions from the start. But it seems like you'd still want to have some influence over them in some way.