Thanks for the link. I don’t think I’ve seen that comment before. Steve cites Bayesian decision theory and Solomonoff induction in support of his position, but to me both are examples of philosophical ideas that looked really good at some point and then turned out to be incomplete or not quite right. If the FAI team comes up with new ideas in the same reference class as Bayesian decision theory and Solomonoff induction, I don’t see how they can gain enough confidence that those ideas are the last words in their respective subjects.
Wei Dai, I noticed on the MIRI website that you’re slotted to appear at some future MIRI workshop. I find this a little bit strange—given your reservations, aren’t you worried about throwing fuel on the fire?
Well, I’m human, which means I have multiple conflicting motivations. I’m going because I’m really curious what direction the participants will take decision theory in.