No, I was referring to the AI reflection problem in the grandparent.
I don’t know if that would make AGI much easier. Even with a good reflective decision theory, you’d still need a more efficient framework for inference than an AIXI-style brute-force algorithm. On the other hand, if you could do inference well, you might still be able to build a working AI without solving reflection, but it would be harder to understand its goal system, making it less likely to be friendly. The lack of reflectivity could be an obstacle, but I think it more likely that, given a powerful inference algorithm and no concern for the dangers of AI, it wouldn’t be that hard to make something dangerous.