> If you make ontological claims, you are bound to get in trouble.
It seems that we do tend to get into trouble when we make ontological claims, but why “bound to”? Your proposed FAI, after it has extracted human values, will still have to solve the ontological problem, right? If it can, then why can’t we?
You advocate “being lazy” as FAI programmers and handing off as many problems as we can to the FAI, but I’m still skeptical that any FAI approach will succeed in the near future, and in the meantime, I’d like to try to better understand what my own values are and how I should make decisions.
> Your proposed FAI, after it has extracted human values, will still have to solve the ontological problem, right? If it can, then why can’t we?
I don’t believe even superintelligence can solve the ontology problem completely.
> You advocate “being lazy” as FAI programmers and handing off as many problems as we can to the FAI, but I’m still skeptical that any FAI approach will succeed in the near future, and in the meantime, I’d like to try to better understand what my own values are and how I should make decisions.
A fine goal, but I doubt it can contribute to FAI design (which, even if it’ll take more than a century to finish, still has to be tackled to make that possible). Am I right in thinking that you agree with that?
> I don’t believe even superintelligence can solve the ontology problem completely.
Why?
> A fine goal, but I doubt it can contribute to FAI design (which, even if it’ll take more than a century to finish, still has to be tackled to make that possible).
I’m not sure what you’re referring to by “that” here. Do you mean “preserving our preferences”? Assuming you do...
> Am I right in thinking that you agree with that?
No, I think we have at least two disagreements here:
1. If I can figure out what my own values are, there seem to be several ways that could contribute to FAI design. The simplest would be to program those values into the FAI manually.
2. I don’t think FAI is necessarily the best way to preserve our values. I could, for example, upload myself and then carefully increase my intelligence. As you’ve mentioned in previous comments, this is bound to cause value drift, but such drift may be acceptable compared to the risks involved in implementing a de novo FAI.
My guess is that the root cause of these disagreements is my distrust of human math and software engineering abilities, stemming from my experiences in the crypto field. I think there is a good chance that we (unenhanced biological humans) will never find the correct FAI theory, and that even when we think we’ve found it, it will turn out that we’re mistaken. And even if we do get the theory right, it’s almost certain that the actual AI code will be riddled with bugs. You seem to be less concerned with these risks.