I would be curious to see more thoughts on this from people who have thought more than I have about stable/reliable self-improvement/tiling. Broadly speaking, I am also somewhat skeptical that it’s the best problem to be working on now. However, here are some considerations in favor:
It seems plausible to me that an AI will be doing most of the design work before it is a “human-level reasoner” in your sense. The scenario I have in mind is a self-improvement cycle by a machine specialized in CS and math, which is either better than humans at these things, or is changing too rapidly for humans to effectively help it. This would create what Bostrom has called (in private correspondence) a “competence gap”, where the AI can and does self-improve, but may not solve the tiling problem or balance risk the way we would have liked it to. In this case, being able to solve this problem for it directly is helpful.
A 30% efficiency improvement seems quite large in machine learning, even for major software changes. I’m not sure how much this affects your overall point.
On the value of work now vs. later, I would probably try to determine this mostly by thinking about how much this work will help us grow interest in the area among people who will wield useful skills and influence later. So far, work on the Löbian obstacle has been pretty good on this metric (if you count it as partially responsible for attracting Benja and Nate, attention from mathematicians like Nik Weaver, its importance to past workshops, etc.).
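(Background, since the Löbian obstacle keeps coming up: what follows is just the standard statement of Löb’s theorem, not anything specific to this exchange. Suppose an agent only takes actions it can prove safe in some theory $T$. To license a successor that uses the same criterion, it seems to need the soundness schema

$$\Box_T \ulcorner \varphi \urcorner \rightarrow \varphi \qquad \text{for every sentence } \varphi,$$

but Löb’s theorem says that if $T \vdash \Box_T \ulcorner \varphi \urcorner \rightarrow \varphi$ then $T \vdash \varphi$, so $T$ can only endorse its own soundness on sentences it already proves. This is the obstacle the tiling work discussed above is trying to route around.)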
I’ll very quickly remark that I think the competence gap is indeed the main issue. If we imagine an AI built to a level where it was as smart as all the mathematicians who could work on the problem in advance, but able to do the same work faster, which didn’t use any self-improvement along the way, and which otherwise operated within a Friendliness framework that had properly decided its preferences over which decision framework would control whatever stability framework it invented, then clearly there’s no advantage to trying to do the work in advance. But I think the competence gap is much larger than that zero level.
Note that we care about the gap between {Ability to design powerful AI} and {Ability to design powerful AI that will do what the original AI wants}. I think the main difference is that you see the second one as a super-hard problem. I don’t see it as a super-hard problem, especially if we have already successfully built one AI that does what we want. I tried to flesh out this disagreement in the post.
I do see a gap as plausible, since I expect capabilities to be uneven and who knows what will come first.
But it would be surprising if an AI was good at figuring out which other AIs would be effective, but wasn’t able to understand that it itself was effective, since presumably these other AIs would be quite similar to itself, and would be leveraging the same insights. The concern seems to be the case where the AI understands why it is able to do so much cool stuff, but is not able to understand why it is motivated to do the right cool stuff (and can’t figure it out, despite the motivation to do so and the availability of human explainers who do understand).
To me this scenario seems unlikely. I assume you have a different picture than I do.
I think the main disagreement is about whether it’s possible to get an initial system which is powerful in the ways needed for your proposal and which is knowably aligned with our goals; some more about this in my reply to your post, which I’ve finally posted, though there I mostly discuss my own position rather than Eliezer’s.