A comment on your list of questions after reading the whole sequence: unlike John and Tekhne elsewhere in this comment thread, I am pretty comfortable with the hierarchical list of questions you are developing here.
This is a useful set of questions that could serve as starting points for all kinds of paradigmatic research.
I believe that part of John’s lack of comfort with the above list of questions comes from a certain speculative assumption he makes about AGI alignment, one also made by many at MIRI and popular on this forum. The assumption is that, in order to solve AGI alignment, we first need nothing less than a complete scientific and philosophical revolution, a revolution that will make all current paradigms entirely obsolete.
If you believe that speculative assumption, then asking specific questions about AGI at this stage, as you do above, would be premature: it distracts from having the scientific revolution first.
John’s speculative assumption is itself, of course, just another paradigm in the Kuhnian sense. It corresponds to a school of thought which says that AGI safety research must be about inventing entirely new paradigms, as opposed to, say, exploring how existing paradigms drawn from many existing disciplines might be applied to the problem.
Myself, I am of the school that sees more value in exploring and combining existing paradigms. I think that approach is more likely to end up with actionable solutions for managing AGI safety risks. That being said, I think all here would agree that both schools could potentially come up with something valuable.