I’m confused as to why you think that knowledge of search algorithms is important for FAI research, though.
I don’t think he meant to say that “knowledge of search algorithms is important for FAI research”; I think he meant to say “by analogy from search algorithms, you’re going to make progress faster if you research the abstract formal theory and the concrete implementation at the same time, letting progress in one guide work in the other”.
I’m personally sympathetic to your argument that there’s no point in looking at the concrete implementations before we understand the formal specification in enough detail to know what to look for in them… but on the other hand, I’m also sympathetic to the argument that if you don’t also look at the concrete implementations, you may never hit upon the formal specifications that are actually correct.
To stretch the chess analogy: even though Shannon didn’t use any 1950s knowledge of game-playing heuristics, he presumably did use something like the knowledge that chess is a two-player game in which the players take alternating turns moving different kinds of pieces on a board. If he hadn’t had this information to ground his search, and had instead tried to come up with a general formal algorithm for winning any game (including football, tag, and 20 questions), it seems much less likely that he would have come up with anything useful.
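(For concreteness: the formal specification Shannon extracted from that game structure was essentially minimax search over the game tree. Below is a minimal runnable sketch of that idea in Python; the toy take-1-2-or-3-stones game is my own placeholder, chosen only to keep the example self-contained, and is not from Shannon’s paper. Chess would just swap in a different move generator and evaluation function.)

```python
# A minimal sketch of the minimax rule for two-player, alternating-turn
# games. The "game" here is a toy Nim variant (take 1-3 stones; whoever
# takes the last stone wins), used purely as a self-contained stand-in.

def minimax(pile, maximizing):
    """Value of the position for the maximizing player: +1 win, -1 loss."""
    if pile == 0:
        # The previous player took the last stone and won, so the
        # player whose turn it is now has lost.
        return -1 if maximizing else 1
    moves = [m for m in (1, 2, 3) if m <= pile]
    values = [minimax(pile - m, not maximizing) for m in moves]
    return max(values) if maximizing else min(values)

# With 4 stones the player to move loses under optimal play; with 5,
# they win by taking 1 stone and handing the opponent a 4-stone pile.
print(minimax(4, True))  # -1
print(minimax(5, True))  # +1
```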
As a more relevant example, consider the discussion about VNM rationality. Suppose that you carry out a long research program focused on understanding how to specify Friendliness in a framework built around VNM rationality, while in the meantime research in practical AI reveals that VNM rationality is a fundamentally flawed way of looking at decision-making, and discovers a superior framework that’s much better suited to both AI design and Friendliness research. (I don’t necessarily expect this to happen, but I can imagine something like it happening.) If your work on Friendliness research continues while you remain ignorant of this discovery, you’ll waste time pursuing a direction that can never produce a useful result, even on the level of an “infinite computer” understanding.
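(For readers who haven’t run into it: the von Neumann–Morgenstern theorem says that any agent whose preferences over lotteries satisfy completeness, transitivity, continuity, and independence acts as if it maximizes the expected value of some utility function u. Schematically, for a lottery L that yields outcome x_i with probability p_i:

```latex
L \succeq M \iff \mathbb{E}_L[u] \ge \mathbb{E}_M[u],
\qquad \text{where } \mathbb{E}_L[u] = \sum_i p_i \, u(x_i).
```

A “framework built around VNM rationality” in the sense above is one that takes this expected-utility representation as the basic model of the agent’s decision-making. This is just the standard statement of the theorem, not anything specific to the hypothetical.)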
To stretch the chess analogy: even though Shannon didn’t use any 1950s knowledge of game-playing heuristics, he presumably did use something like the knowledge that chess is a two-player game in which the players take alternating turns moving different kinds of pieces on a board.
Thanks, Kaj. I agree, and I think it is important to understand computation, logic, the foundations of computer science, and so on when doing FAI research. Trying to do FAI theory with no knowledge of computers would surely be a foolish endeavor. My point was more along the lines of “modern AI textbooks mostly contain heuristics and strategies for getting good behavior out of narrow systems, and this doesn’t seem like the appropriate place to get the relevant low-level knowledge.”
To continue abusing the chess analogy: I completely agree that Shannon needed to know things about chess, but I don’t think he needed to understand 1950s-era programming techniques (such as the formal beginnings of assembly languages and the early attempts to construct compilers). It seems to me that the field of modern AI is less like “understanding chess” and more like “understanding assembly languages” in this particular analogy.
That said, I am not trying to say that this is the only way to approach friendliness research. I currently think that it’s one of the most promising methods, but I certainly won’t discourage anyone who wants to try to do friendliness research from a completely different direction.
The only points I’m trying to make here are that (a) I think MIRI’s approach is fairly promising, and (b) within this approach, an understanding of modern AI is not a prerequisite to understanding our active research.
Are there other approaches to FAI that would make significantly more use of modern narrow-AI techniques? Yes, of course. (Nick Hay and Stuart Russell are poking at some of those topics today, and we occasionally get together and chat about them.) Would it be nice if MIRI could take a number of different approaches all at the same time? Yes, of course! But there are currently only three of us. I agree that it would be nice to have enough resources to try many different approaches at once, but as things stand, it is simply a fact that you don’t need much narrow-AI knowledge to understand our active research.
Thanks, that’s a good clarification. May be worth explicitly mentioning something like that in the guide, too.