To stretch the chess analogy, even though Shannon didn’t use any 1950s knowledge of game-playing heuristics, he presumably did draw on basic knowledge of chess: that it is a two-player game in which the players take alternating turns moving different kinds of pieces on a board.
Thanks, Kaj.
I agree, and I think it is important to understand computation, logic, the foundations of computer science, and so on when doing FAI research. Trying to do FAI theory with no knowledge of computers is surely a foolish endeavor. My point was more along the lines of “modern AI textbooks mostly contain heuristics and strategies for getting good behavior out of narrow systems, and this doesn’t seem like the appropriate place to get the relevant low-level knowledge.”
To continue abusing the chess analogy, I completely agree that Shannon needed to know things about chess, but I don’t think he needed to understand 1950s-era programming techniques (such as the formal beginnings of assembly languages and the early attempts to construct compilers). It seems to me that, in this particular analogy, the field of modern AI is less like “understanding chess” and more like “understanding assembly languages.”
That said, I am not claiming that this is the only way to approach friendliness research. I currently think it is one of the most promising approaches, but I certainly won’t discourage anyone who wants to pursue friendliness research from a completely different direction.
The only points I’m trying to make here are that (a) I think MIRI’s approach is fairly promising, and (b) within this approach, an understanding of modern AI is not a prerequisite to understanding our active research.
Are there other approaches to FAI that would make significantly more use of modern narrow AI techniques? Yes, of course. (Nick Hay and Stuart Russell are poking at some of those topics today, and we occasionally get together and chat about them.) Would it be nice if MIRI could pursue a number of different approaches at the same time? Yes, of course! But there are currently only three of us. I agree that it would be nice to have the resources to try many different approaches at once, but the fact remains that you don’t currently need much narrow AI knowledge to understand our active research.
Thanks, that’s a good clarification. It may be worth explicitly mentioning something like that in the guide, too.