We can’t use the universal prior in practice unless physics contains harnessable non-recursive processes. However, that is exactly the situation in which the universal prior doesn’t always work. Thus, one source of the ‘magic’ is that it grants us access to higher levels of computation than the phenomena we are predicting (and certainty that we have such access).
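For reference, the universal prior under discussion is Solomonoff’s, standardly defined over a universal prefix Turing machine U as:

```latex
% Solomonoff's universal prior: the weight of a string x is the total
% probability of all programs p that make U print something beginning with x,
% where \ell(p) is the length of p in bits.
M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}
```

M is lower-semicomputable but not computable, which is why a physical agent could evaluate it only with access to non-recursive resources, and why such resources would also put parts of the environment outside the class of computable hypotheses the prior covers.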
My position is that the uncomputability of the universal prior shouldn’t count against it. I think the fact that it works so well shows that our actual prior is likely also uncomputable, which means we have to handle uncomputable priors in our decision theory, for example by specifying that we choose the option that we can prove (or just heuristically believe) has the highest expected payoff, instead of actually computing the expected payoffs.
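A minimal sketch of what that kind of rule could look like (the interval-bound representation, option names, and numbers below are purely illustrative assumptions, not anything from the discussion): rather than computing expected payoffs exactly, each option carries a heuristic lower/upper bound, and the choice is made from a provable dominance relation when one exists.

```python
# Hypothetical sketch: act on provable or heuristic payoff *bounds*
# instead of exact expected payoffs, which may be uncomputable.

from typing import Dict, Tuple


def pick_option(bounds: Dict[str, Tuple[float, float]]) -> Tuple[str, bool]:
    """Given {option: (lower, upper)} payoff bounds, return (choice, certified).

    If some option's lower bound is at least every other option's upper bound,
    the bounds alone "prove" it is best, and certified is True. Otherwise we
    fall back to the heuristically best option (highest lower bound).
    """
    for name, (lo, _) in bounds.items():
        if all(lo >= hi for other, (_, hi) in bounds.items() if other != name):
            return name, True  # dominance follows from the bounds alone
    best = max(bounds, key=lambda option: bounds[option][0])
    return best, False  # no provable winner; act on the heuristic estimate


if __name__ == "__main__":
    # Bounds might come from partial proofs, simulations, or rough heuristics.
    options = {"A": (0.6, 0.9), "B": (0.1, 0.5), "C": (0.2, 0.55)}
    print(pick_option(options))  # -> ('A', True)
```

The only point of the sketch is that the comparison between options can be settled by proofs or bounds even when the underlying expectations are not themselves computable.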
A worse problem is that there seems to be reason to think that our actual prior is not just uncomputable, but unformalizable. See my earlier posts on this.
If you make ontological claims, you are bound to get in trouble. Decision theory should speak of what the agent’s algorithm should do, in terms of its behavior, not what it means for the agent to do that in terms of consequences in the real world. What the agent’s algorithm does is always formalizable (as that algorithm!).
(For people unfamiliar with the discussion—see “ontology problem” in this sequence.)
If you make ontological claims, you are bound to get in trouble.
It seems that we do tend to get into trouble when we make ontological claims, but why “bound to”? Your proposed FAI, after it has extracted human values, will still have to solve the ontological problem, right? If it can, then why can’t we?
You advocate “being lazy” as FAI programmers and handing off as many problems as we can to the FAI, but I’m still skeptical that any FAI approach will succeed in the near future, and in the meantime, I’d like to try to better understand what my own values are, and how I should make decisions.
Your proposed FAI, after it has extracted human values, will still have to solve the ontological problem, right? If it can, then why can’t we?
I don’t believe even superintelligence can solve the ontology problem completely.
You advocate “being lazy” as FAI programmers and handing off as many problems as we can to the FAI, but I’m still skeptical that any FAI approach will succeed in the near future, and in the meantime, I’d like to try to better understand what my own values are, and how I should make decisions.
A fine goal, but I doubt it can contribute to FAI design (which, even if it’ll take more than a century to finish, still has to be tackled to make that possible). Am I right in thinking that you agree with that?
I don’t believe even superintelligence can solve the ontology problem completely.
Why?
A fine goal, but I doubt it can contribute to FAI design (which, even if it’ll take more than a century to finish, still has to be tackled to make that possible).
I’m not sure what you’re referring to by “that” here. Do you mean “preserving our preferences”? Assuming you do...
Am I right in thinking that you agree with that?
No, I think we have at least two disagreements here:
1. If I can figure out what my own values are, there seem to be several ways that could contribute to FAI design. The simplest way is that I program those values into the FAI manually.
2. I don’t think FAI is necessarily the best way to preserve our values. I can, for example, upload myself, and then carefully increase my intelligence. As you’ve mentioned in previous comments, this is bound to cause value drift, but such drift may be acceptable, compared to the risks involved in implementing de novo FAI.
My guess is that the root cause of these disagreements is my distrust of human math and software engineering abilities, stemming from my experiences in the crypto field. I think there is a good chance that we (unenhanced biological humans) will never find the correct FAI theory, and that in the event we think we’ve found the right FAI theory, it will turn out that we’re mistaken. And even if we manage to get FAI theory right, it’s almost certain that the actual AI code will be riddled with bugs. You seem to be less concerned with these risks.