I expect whatever ends up taking over the lightcone to be philosophically competent.
I agree that conditional on that happening, this is plausible, but it's also likely that some of the answers from such a philosophically competent being would be unsatisfying to us.
One example: such a philosophically competent AI might tell you that CEV either doesn't exist or, if it does, is so path-dependent that it cannot resolve moral disagreements, which is actually pretty plausible under my model of moral philosophy.