I find your answer… to the question about Friendly AI subproblems to be unsatisfactory. Based on what I know at present, subproblems of the type that you mention are far too vague for even the best researchers to make progress on them.
No doubt, a one-paragraph list of sub-problems written in English is “unsatisfactory.” That’s why we would “really like to write up explanations of these problems in all their technical detail.”
But it’s not true that the problems are too vague to make progress on. For example, with regard to the sub-problem of designing an agent architecture capable of having preferences over the external world, recent papers by (SI research associate) Daniel Dewey, Orseau & Ring, and Hibbard each constitute progress.
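To make that sub-problem concrete, here is an illustrative sketch (the notation is my own, not the formalism of any of those papers): a standard reinforcement learner chooses actions to maximize the expected discounted sum of its own reward signal,

\[ V = \mathbb{E}\!\left[ \sum_{t=0}^{\infty} \gamma^t r_t \right], \]

a quantity defined entirely over the agent’s percept stream. An agent with preferences over the external world would instead maximize expected utility over world-states,

\[ V = \mathbb{E}_{w \sim P(w \mid \text{evidence})}\!\left[ U(w) \right], \]

and the open problem is to specify such an architecture precisely, since U ranges over states of the world rather than over the agent’s own inputs.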
My general impression is that the SingInst staff have insufficient exposure to technical research to understand how hard it is to answer questions posed at such a level of generality.
I doubt this is a problem. We are quite familiar with technical research, and we know how hard it is, to use my usual example of what needs to be done to solve many of the FAI sub-problems, for “Claude Shannon to just invent information theory almost out of nothing.”
In fact, here is a paragraph I wrote months ago for a (not yet released) document called Open Problems in Friendly Artificial Intelligence:
Richard Bellman may have been right that “the very construction of a precise mathematical statement of a verbal problem is itself a problem of major difficulty” (Bellman 1961). Some of the problems in this document have not yet been stated with mathematical precision, and the need for a precise statement of the problem is part of each open problem. But there is reason for optimism. Many times, particular heroes have managed to formalize a previously fuzzy and mysterious concept: see Kolmogorov on complexity and simplicity (Kolmogorov 1965; Li & Vitányi 2008), Solomonoff on induction (Solomonoff 1964a, 1964b; Rathmanner & Hutter 2011), Von Neumann and Morgenstern on rationality (Von Neumann & Morgenstern 1944; Anand 1995), and Shannon on information (Shannon 1948; Arndt 2004).
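To take the simplest of the examples cited in that paragraph: Shannon turned the fuzzy notion of “information” into a precise, measurable quantity, defining the entropy of a source X with outcome probabilities p(x) as

\[ H(X) = -\sum_x p(x) \log_2 p(x) \]

bits. The hope is that concepts like “preference over the external world” will eventually admit formalizations of comparable crispness.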
Also, I regularly say that “Friendly AI might be an incoherent idea, and impossible.” But as Nesov said, “Believing problem intractable isn’t a step towards solving the problem.” Many now-solved problems once looked impossible. In any case, this is one reason to pursue research both on Friendly AI and on “maxipok” solutions that maximize the chance of an “ok” outcome, such as Oracle AI.