If existing intelligence works the way I think it does, “small and secret” could be a very poor approach to solving an unreasonably difficult problem. You’d want a large, relatively informal network of researchers working on the problem. The first challenge, then, would be working out how to align that network in a way that lets it learn about the problem.

There’s a curious self-reflective recursivity here. Intuitively, I suspect the task of aligning the research network would turn out to be isomorphic to the AI alignment problem it was trying to solve.