Ack! I’m not sure what to think. When I wrote that comment, I had the impression that we had some sort of philosophical conflict, and I felt like I should make the case for my side. However, now I worry the comment was too aggressive. Moreover, it seems like we agree on most of the questions we can state precisely. I’m not sure how to deal with this situation.
I suppose I could turn some assumptions into questions: To what extent is it your goal in this inquiry to figure out ‘naturalized induction’? Do you think ‘naturalized induction’ is something humans naturally do when thinking, perhaps imperfectly?
However, now I worry the comment was too aggressive.
No worries :-)
To what extent is it your goal in this inquiry to figure out ‘naturalized induction’?
Zero. To be honest, I don’t spend much time thinking about AIXI. My inclination with regards to AIXI is to shrug and say “it’s not ideal for all the obvious reasons, and I can’t use it to study self-modification”, and then move on.
However, it turns out that what I think are the “obvious reasons” aren’t so obvious to some. While I’m not personally confident that AIXI can be modified to be useful for studying self-modification, ignoring AIXI entirely isn’t the most cunning strategy for forming relationships with other AGI researchers (who are researching different parts of the problem, and for whom AIXI may indeed be quite interesting and relevant).
If anything, my “goal with this inquiry” is to clearly sketch specific problems with AIXI that make it less useful to me and point towards directions where I’d be happy to discuss collaboration with researchers who are interested in AIXI.
It is not the case that I’m working on these problems in my free time: left to my own devices, I just use (or develop) toy models that better capture the part of the problem space I care about.
Do you think ‘naturalized induction’ is something humans naturally do when thinking, perhaps imperfectly?
I really don’t want to get dragged into a strategy discussion here. I’ll state a few points that I expect we both agree upon, but forgive me if I don’t answer further questions in this vein during this discussion.
Solomonoff induction would have trouble (or, at least, be non-optimal) in an uncomputable universe. (The standard formal reason is sketched just after these points.)
We’ve been pretty wrong about the rules of the universe in the past. (I wouldn’t have wanted scientists in 1750 to gamble on the universe being deterministic/single-branch, and I similarly don’t want scientists today to gamble on the universe being computable.)
Intuitively, it seems like there should be a computable program that can discover it’s inside an exotic universe (where ‘exotic’ includes ‘uncomputable’, but is otherwise a vague placeholder word).
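For concreteness, the first point above is just the textbook observation (stated here for reference, not as anything new to this exchange) that the Solomonoff mixture ranges only over programs for a universal machine, and its guarantees are stated relative to computable environments:

```latex
% Solomonoff prior over finite binary strings x, with U a universal prefix machine:
\[
  M(x) \;=\; \sum_{p \,:\, U(p) \,=\, x\ast} 2^{-|p|}.
\]
% For any computable measure $\mu$, dominance $M(x) \ge 2^{-K(\mu)}\,\mu(x)$ gives
% the usual bound on total expected squared prediction error:
\[
  \sum_{t=1}^{\infty} \mathbb{E}_{\mu}\!\left[\bigl(M(1 \mid x_{<t}) - \mu(1 \mid x_{<t})\bigr)^{2}\right]
  \;\le\; \tfrac{\ln 2}{2}\, K(\mu).
\]
% If the true environment is not computable, there is no program p generating it,
% so neither the dominance constant nor the convergence bound above exists.
```

That is the precise sense in which "would have trouble" can be cashed out; whether the mixture still does something reasonable against such an environment is a separate, murkier question.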
I don’t think discussing how humans deal with this problem is relevant. Are there ways the universe could be that I can’t conceive of? Almost certainly. Can I figure out the laws of my universe as well as a perfect Solomonoff inductor? Probably not. Yet it does feel like I could be convinced that the universe is uncomputable, and so Solomonoff induction is probably not an idealization of whatever it is that I’m trying to do.
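To make that intuition a little more concrete, here is a toy sketch, entirely my own construction (the observation list, hypotheses, and function names are all hypothetical, not anything from this exchange): a purely computable agent watches a black box that claims to answer halting questions, verifies the answers only on programs it actually sees halt within its own time budget, and runs a crude likelihood-ratio update between "the box is always right" and "the box guesses on anything I can't settle myself."

```python
# Each observation: (description, seen_to_halt_within_agent_budget, box_answer).
# The middle field is what the agent observes by running the program itself;
# it is the only ground truth the agent ever gets.
OBSERVATIONS = [
    ("count to 10**3, then stop",                        True,  "halts"),
    ("search forever for a counterexample to Goldbach",  False, "runs forever"),
    ("count to 10**6, then stop",                        True,  "halts"),
    ("iterate the Collatz map from n = 27 until n = 1",  True,  "halts"),
    ("increment a counter forever",                      False, "runs forever"),
]

def posterior_odds(observations, prior_odds=1.0):
    """Compare two hypotheses about the box:
      H_oracle: its answers are always correct.
      H_guess:  it flips a fair coin on anything the agent can't settle itself.
    Only checkable answers (programs the agent saw halt) carry evidence;
    'runs forever' claims remain unverified and are ignored."""
    odds = prior_odds
    for _desc, seen_to_halt, answer in observations:
        if seen_to_halt:
            if answer == "halts":
                odds *= 2.0   # P(correct | H_oracle) = 1 vs P(correct | H_guess) = 0.5
            else:
                odds = 0.0    # one verified mistake falsifies H_oracle outright
    return odds

if __name__ == "__main__":
    odds = posterior_odds(OBSERVATIONS)
    print(f"odds, H_oracle : H_guess = {odds:.1f} : 1")
    print(f"P(H_oracle), starting from an even prior = {odds / (1 + odds):.2f}")
```

Nothing in this toy ever proves uncomputability, of course; it only shows the shape of how a bounded reasoner's credence could move toward "there is something here my hypothesis class doesn't contain," which is all the intuition above asks for.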
I don’t personally view this as an induction problem, but rather as a priors problem. And though I do indeed think it’s a problem, I’ll note that this does not imply that the problem captures any significant fraction of my research efforts.