An Advisor is a system that takes a corpus of real-world data and somehow computes an answer to the informal question "what ought we (or I) to do?". Advisors are FAI-complete because:
formalizing the ought-question requires a complete formal statement of human values (or a formal method for finding them), and answering it requires a full theory of instrumental decision-making.
There may be ways around needing an FAI-complete Advisor if you ask somewhat less generic questions, perhaps questions of the form "Which of these 3 options would it be best for me to take if I want to satisfy this utility function?", where "this utility function" is the best one you have found so far. You probably can't ask questions like this indefinitely, but with a smart ordering of questions you could likely get a lot done quite safely. This all assumes you can build the initial Advisor AI safely in the first place: how do you create a powerful AI that neither recursively self-improves nor pursues goals of its own?
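To make the restricted-query idea above concrete, here is a minimal Python sketch of such an interface: the human supplies a fixed list of options and their current best-guess utility function, and the Advisor's only output is a choice from that list. All names here (`Option`, `best_of`, `predicted_outcome`) are hypothetical illustrations, not a reference to any existing system.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

# Hypothetical sketch of the restricted-query Advisor: the caller supplies
# both the candidate options and the utility function, and the Advisor may
# only pick from that list -- it never proposes actions of its own.

@dataclass
class Option:
    label: str
    predicted_outcome: dict  # whatever features the utility function scores

def best_of(
    options: Sequence[Option],
    utility: Callable[[dict], float],
) -> Option:
    """Return the option whose predicted outcome maximizes the given utility.

    Restricting the answer channel to a choice from the caller's own list is
    what keeps each individual query comparatively narrow.
    """
    if not options:
        raise ValueError("the caller must supply the candidate options")
    return max(options, key=lambda o: utility(o.predicted_outcome))

# Example query: three options, scored by the best utility proxy found so far.
if __name__ == "__main__":
    my_utility = lambda outcome: outcome.get("wellbeing", 0.0)  # placeholder proxy
    choice = best_of(
        [
            Option("A", {"wellbeing": 0.2}),
            Option("B", {"wellbeing": 0.9}),
            Option("C", {"wellbeing": 0.5}),
        ],
        my_utility,
    )
    print(choice.label)  # -> "B"
```

The design point is that the answer channel is as narrow as the question: the Advisor never generates new options or new utility functions, so each query leaks far less optimization pressure than the fully generic ought-question.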