First off, great thought experiment! I like it, and it was a nice way to view the problem.
The most obvious answer is: “Wow, we sure don’t know how to help. Let’s design a smarter intelligence that’ll know how to help better.”
At that point I think we're running the risk of passing the buck forever. (Unless we can prove that the process terminates.) So we should probably do at least something. Instead of trying to optimize, I'd focus on the most obvious things. Like helping it not to die. And making sure it has food.
I am inclined to believe that the buck will indeed get passed forever. The idea you raise is remarkably similar to the Procrastination Paradox (which you can read about at https://intelligence.org/files/ProcrastinationParadox.pdf).
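To make the regress concrete, here's a toy sketch of my own (not the formalism in the paper, which is about proof-based agents and Löbian reflection; the function name `agent_acts` and the `horizon` parameter are just illustrative). Each agent trusts that a smarter successor will handle the task, so it defers, and within any finite horizon nothing ever gets done:

```python
# Toy illustration of buck-passing: a chain of agents, each of which
# trusts that its smarter successor will "actually help", and so defers
# instead of acting itself.

def agent_acts(horizon: int) -> bool:
    """Return True if any agent within `horizon` levels of deferral acts."""
    for level in range(horizon):
        # Each agent reasons: "a smarter successor exists, so I can defer to it."
        successor_will_handle_it = True
        if not successor_will_handle_it:
            return True  # the agent at this level would act itself
        # Otherwise: pass the buck to the next, smarter agent.
    return False  # ran out of successors; nobody ever acted


for horizon in (1, 10, 1000):
    print(f"horizon={horizon}: task done? {agent_acts(horizon)}")
# Prints False for every horizon: deferring looks reasonable at each step,
# but collectively the task is never completed.
```

Deferring is individually defensible at every level, which is exactly why, absent a termination proof, "build a smarter helper" can recurse forever.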