Thanks for responding, Eliezer.
I’m not sure to what extent you mean that (i) your research programme was literally a training exercise for harder challenges ahead, versus (ii) it was born of despair: looking under a street light had a better chance of success even though the keys were not especially likely to be there.
If you mean (i), then what made you give up on this plan? From my perspective, the training exercise played its role and has perhaps outlived its usefulness; why not move beyond it?
If you mean (ii), then why such pessimism from the get-go? I imagine you reasoning along the following lines: developing the theory of rational agency is a difficult problem with little empirical feedback in the early stages, hence it requires nigh-impossible precision of reasoning. But humanity actually has a not-bad track record on this type of question over the last century. VNM utility theory, game theory, the Church-Turing thesis, information theory, complexity theory, Solomonoff induction: all of these are examples of similar problems (creating a mathematical theory starting from an imprecise concept, without much empirical data to help) in which we made enormous progress. They also look like steps towards the theory of rational agents itself. So, we “just” need to add more chapters to this novel, not do something entirely unprecedented[1]. Maybe your position is that the previous parts were done by geniuses who are unmatched in our generation because of lost cultural DNA?
I think that the “street light” was truly useful for better defining multiple relevant problems (Newcomb-like decision problems, Vingean reflection, superrationality...), but it is not where the solutions are.
Another thing: IMO, a certain type of blundering in the dark is helpful. In practice, science often doesn’t progress in a straight line from problem to solution. People try all sorts of things, guided partly by concrete problems and partly by sheer curiosity; some of those work out, some don’t, and some lead to something entirely unexpected. As results accumulate, paradigms crystallize, and it becomes clear which models were “True Names”[2] and which were blunders. And, yes, maybe we don’t have time for this. But I’m not so sure.
[1] That is, the theory of rational agency wouldn’t be unprecedented. The project of dodging AI risk as a whole certainly has some “unprecedentedness” about it.
[2] Borrowing the term from John Wentworth.