Yeah. This family of questions is the most important one we don’t know how to answer. Maybe Eliezer and Marcello have solved it, but they’re keeping mum.
I don’t think so. For one thing, Eliezer keeps talking about how important it is to solve this, and I don’t think his personal ethics would let him keep promoting it as an open problem if he had actually solved it. Also, he seems to think that if the public knew the answer, it would reduce the chance of uFAI being created, so unless the solution turned out very differently than he expected (say, by providing a huge clue toward AGI, which seems unlikely), he would have an incentive to make it public.
Does Eliezer keep talking about the thing I called “the drawback of AIXI”? It seems to me that he keeps talking about the “AI reflection problem”, which is different. And yeah, solving any of these problems would make AGI easier without making FAI much easier.
No, I was referring to the AI reflection problem in the grandparent.
I don’t know if that would make AGI much easier. Even with a good reflective decision theory, you’d still need a more efficient framework for inference than an AIXI-style brute-force algorithm. On the other hand, if you could do inference well, you might still be able to build a working AI without solving reflection, but it would be harder to understand its goal system, which makes it less likely to be friendly. The lack of reflectivity could be an obstacle, but I think it’s more likely that, given a powerful inference algorithm and no concern for the dangers of AI, it wouldn’t be that hard to make something dangerous.
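To make the "brute force" point concrete, here is a minimal, purely illustrative sketch. It is not AIXI itself and not anyone's proposed algorithm: the hypothesis class (order-k lookup-table predictors), the description-length weighting, and the function name `brute_force_predict` are all simplifications I'm assuming for illustration. Real AIXI/Solomonoff induction sums over all computable environments, which is uncomputable; the takeaway is just how fast even a toy enumeration blows up.

```python
from itertools import product

def brute_force_predict(history, max_order=3):
    """Toy 'brute force' induction: enumerate every order-k lookup-table
    predictor up to max_order, keep the ones consistent with the observed
    bit string, weight each survivor by 2**(-description_length), and
    return the weighted probability that the next bit is 1.

    This is only a cartoon of the Solomonoff/AIXI idea, assumed here for
    illustration; the real thing ranges over all computable environments.
    """
    total, ones = 0.0, 0.0
    for k in range(max_order + 1):
        # An order-k predictor is a table mapping each of the 2**k
        # possible length-k contexts to a predicted next bit.
        table_size = 2 ** k
        desc_len = table_size  # bits needed to write the table down
        for table in product((0, 1), repeat=table_size):
            # Keep the table only if it reproduces every observed bit
            # that has a full length-k context before it.
            consistent = True
            for i in range(k, len(history)):
                ctx = history[i - k:i]
                idx = int("".join(map(str, ctx)), 2) if k else 0
                if table[idx] != history[i]:
                    consistent = False
                    break
            if not consistent:
                continue
            weight = 2.0 ** (-desc_len)
            total += weight
            # This hypothesis's prediction for the next bit.
            ctx = history[len(history) - k:] if k else []
            idx = int("".join(map(str, ctx)), 2) if k else 0
            ones += weight * table[idx]
    return ones / total if total else 0.5

# Even this tiny model class costs 2**(2**k) hypotheses per order:
# 2 + 4 + 16 + 256 = 278 tables for max_order=3, and it explodes from there.
print(brute_force_predict([0, 1, 0, 1, 0, 1, 0]))  # 1.0: every surviving table continues the alternation
```

The enumeration here is already exponential in the context length for a laughably restricted model class, which is the sense in which "do inference well" is a separate, hard problem from having a good reflective decision theory.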