(1) I read the paper carefully, and enjoyed it. Thanks for publishing it!
(2) I only noticed one typo—a missing period on page 3. There may also be an accidental CR at the same location that unintentionally splits a paragraph.
(3) I’m skeptical that a useful theory of machine intelligence safety can be developed prior to the development of advanced machine intelligence capability. Instead, I think that safety and capability must co-evolve. If so, then a technical research agenda that fails to include monitoring and/or participating in capability development may need to be revised.
(4) My own experience is that having a prototype machine intelligence capability available, even if primitive, is immensely valuable in thinking about safety.
Thanks! Typo has been fixed.

Re: (3), I think that computer chess is a fine analogy to use here. It’s much easier to make a practical chess program when you possess an actual computer capable of running one, but the theoretical work done by Shannon (to get an unbounded solution) still constituted great progress over, e.g., the state of knowledge held by Poe. FAI research is still at a place where unbounded solutions would likely provide significant insight, and those are much easier to develop without a practical machine intelligence on hand.
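To make this concrete, here is a minimal sketch (mine, not from the paper) of what an unbounded solution looks like: exhaustive minimax with backtracking. Chess itself is far too large to search this way, so the runnable toy below uses a simple Nim-like game instead (players alternate taking 1 to 3 stones; whoever takes the last stone wins), but the shape is the same: the tree-plus-backtracking skeleton fully specifies perfect play, independently of whether the search is feasible in practice.

```python
def minimax(stones, maximizing):
    """Game-theoretic value of the position from the maximizing
    player's perspective: +1 if they can force a win, -1 otherwise."""
    if stones == 0:
        # The previous player took the last stone and won, so the
        # player to move now has lost.
        return -1 if maximizing else 1
    moves = range(1, min(3, stones) + 1)
    values = [minimax(stones - take, not maximizing) for take in moves]
    return max(values) if maximizing else min(values)

# With 10 stones the first player can force a win (prints 1): take 2,
# leaving a multiple of 4 for the opponent.
print(minimax(10, True))
```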
Re: (4), I too expect that having a prototype generally intelligent system on hand would be immensely valuable when thinking about safety. However, (a) we don’t have prototype generally intelligent systems, and it may be some time before they are available; and (b) it seems imprudent to neglect safety research until such prototypes exist. (These points are discussed a bit more in section 5.)
The proof-reading and the comments are much appreciated!
You keep using this analogy of Shannon and chess, but I’m not sure what the problem domain of chess has to do with AGI.
EDIT: To be clear, I can think of other examples, e.g. bridge building or aviation, where a foundational understanding did not by itself lead to the ability to construct larger, longer bridges or flying machines; rather, practical experimentation was required hand-in-hand with theoretical analysis. This is because even though we knew the foundational laws, there were still higher-order complications in materials science and air flow which frustrated ab initio theoretical analysis.
Designing a practical chess program before understanding backtracking algorithms and search trees (or analogous conceptual tools) seems difficult. The same point applies to your other examples (bridge building, aviation), where it is important to have a theoretical understanding of the relevant physics before trying to build the Golden Gate Bridge or a 747.
“I’m not sure what the problem domain of chess has to do with AGI”
It’s an analogy: chess is a domain that smart people (e.g., Poe) were once deeply confused about, and then Shannon developed conceptual tools (game trees, backtracking) that resolved many of those confusions, which in turn enabled the creation of practical programs (culminating in Deep Blue). The claim is that AGI is similar: we are currently confused about it, and conceptual tools / theoretical understanding seem quite important.
This theoretical understanding alone is not enough, of course. As with flight, practical experimentation is necessary both before (e.g., to figure out which physics questions to ask) and after (e.g., to deal with higher-order complications, as you said). The point we’re trying to make with the technical agenda is that the current understanding of AGI is still lacking in conceptual tools. (The rest of the document attempts to back this claim up by providing examples of confusing questions that we lack the conceptual tools to answer.)
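For contrast, here is the practical refinement of the sketch above (again my own illustration, not anything from the paper): cut the search off at a fixed depth and fall back on a heuristic evaluation, which is roughly the step from Shannon’s unbounded solution to a playable program, and (much refined) to Deep Blue. The heuristic below is invented for the toy game; in chess, good evaluation functions took decades of practical experimentation to develop, while the tree-and-backtracking skeleton stayed fixed.

```python
def bounded_minimax(stones, maximizing, depth):
    """Depth-limited minimax: exact values at terminal positions,
    heuristic estimates at the depth cutoff."""
    if stones == 0:
        return -1 if maximizing else 1
    if depth == 0:
        # Invented stand-in for a real evaluation function: positions
        # that are multiples of 4 are losing for the player to move.
        losing_for_mover = stones % 4 == 0
        return (-1 if losing_for_mover else 1) * (1 if maximizing else -1)
    moves = range(1, min(3, stones) + 1)
    values = [bounded_minimax(stones - take, not maximizing, depth - 1)
              for take in moves]
    return max(values) if maximizing else min(values)

# Same answer as the unbounded search (prints 1), at a fraction of the cost.
print(bounded_minimax(10, True, 2))
```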
I definitely agree that significant practical work is necessary, but I don’t think that modern practical systems are close enough to AGI for practical work available today to have a high chance of being relevant in the future. I’ll expand on this when I reply to your other comments :-)