Ah, I see. I think we were answering different questions. (I had this feeling earlier but couldn’t pin down why.) I read the original question as being something like “what kind of hypotheses should a hypothetical AI hypothetically entertain” whereas I think you read the original question as being more like “what kind of hypotheses can you currently program an AI to entertain.” Does this sound right?
I was reading a LessWrong post and found this paragraph, which lines up with what I was trying to say:
Some boxes you really can’t think outside. If our universe really is Turing computable, we will never be able to concretely envision anything that isn’t Turing-computable—no matter how many levels of halting oracle hierarchy our mathematicians can talk about, we won’t be able to predict what a halting oracle would actually say, in such fashion as to experimentally discriminate it from merely computable reasoning.
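To make the halting-oracle point concrete, here is a minimal sketch of the standard diagonalization argument. It is my own illustration, not from the post, and the `halts` and `diagonal` names are just placeholders I made up:

```python
# Sketch of why no computable procedure can play the role of a halting oracle.
# `halts` is a hypothetical predictor we pretend exists; the construction below
# shows any computable version of it must be wrong on some input.

def halts(program_source: str, argument: str) -> bool:
    """Hypothetical oracle: True iff the program halts when run on the argument."""
    raise NotImplementedError("no Turing machine can implement this for all inputs")

def diagonal(program_source: str) -> None:
    """Do the opposite of whatever `halts` predicts about a program fed to itself."""
    if halts(program_source, program_source):
        while True:   # predicted to halt, so loop forever instead
            pass
    # predicted to run forever, so halt immediately

# Feeding diagonal its own source breaks any computable `halts`: if
# halts(d, d) returns True, diagonal(d) loops forever; if it returns False,
# diagonal(d) halts. Either way the prediction is wrong, so no computable
# reasoner can tell us what a true halting oracle would actually say.
```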
Yes, I agree. I can imagine a reasoning being conceiving of things that are beyond Turing-computable, but I don’t see how I could make an AI do so.