To be clear, I do not believe that trying to create such a conspiracy is feasible, and I wanted to emphasize that even if it were possible, you’d still need to have a bunch of other problems already solved (like making an ideal truth-seeking community). Sometimes it seems that rationalists want an organization that accomplishes the maximum utilitarian good, and hypothetically, this implies that some kind of conspiracy—if you wish to call it that—would need to exist. For a massively influential and secretive conspiracy, I might assign a less than 1% chance that one already exists (in which case it would be too powerful to overcome) and a greater than 99% chance that none exists (in which case it’s probably impossible to succeed in creating one).
That said, to solve even just the highest-priority issues of interest to EAs, which probably won’t require a massively influential and secretive conspiracy, I think you’d still need to solve the problem of aligning large organizations with these objectives, especially for things like AI, where development and deployment will mainly be carried out by the most enormous and wealthy firms. These are the kind of organizations that can’t be seeded with good intentions from the start, but it seems like you’d still want to have some influence over them in some way.