Frequency is not importance. I think this quote has more humorous than practical merit.
But frequency can be strong evidence of importance.
I suspect many people would experience significant psychological trauma if they were unable to rationalize for a week.
Yes. But probably not above the importance of sex...
Interesting. This suggests a method or measure of the importance of compartmentalization. Maybe rationalization is even necessary for dealing rationally with real life (the word kind of gives it away). Could it be that it is needed (in one way or another) for AI to work in the face of uncertainty?
Only in the sense that lying can be called “truthization”.
I read that. I agree with the argument. But it doesn’t really address the intuition behind my argument.
The idea is that you have concurrent processes creating partial models of partial but overlapping aspects of reality. These models a) help make predictions within each aspect (descriptively), b) may help with acting in the context of that aspect (operationally/prescriptively), and c) may be mutually inconsistent on the symbolic layer.
Do you want to kick out all of those benefits to gain consistency? It could be that you can’t achieve consistency of overlapping models at all without some super, all-encompassing model. Or it could be that such a super-model is horribly big and slow.
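Very roughly, here is a toy sketch of what I mean (the Python, the model names, and the toy "aspects" are mine, purely illustrative): each process keeps its own partial model over a subset of variables, the subsets overlap, and nothing forces the models to agree on the overlap.

```python
# Toy sketch (illustrative only): two partial models over overlapping
# "aspects" of a tiny world. Each is useful inside its own aspect; on the
# overlap ("rain") nothing forces them to agree.

class PartialModel:
    def __init__(self, name, beliefs):
        self.name = name
        self.beliefs = beliefs            # variable -> P(variable is true)

    def covers(self, variable):
        return variable in self.beliefs

    def predict(self, variable):
        return self.beliefs[variable]     # descriptive use: a prediction

# Two concurrent models of partially overlapping aspects of reality.
commute_model = PartialModel("commute", {"rain": 0.7, "traffic_jam": 0.6})
picnic_model  = PartialModel("picnic",  {"rain": 0.2, "park_crowded": 0.5})
models = [commute_model, picnic_model]

# a) predictions within each aspect, b) usable for acting in that context...
print(commute_model.predict("traffic_jam"))   # 0.6
print(picnic_model.predict("park_crowded"))   # 0.5

# ...c) but on the overlap they are inconsistent at the symbolic level.
for m in models:
    if m.covers("rain"):
        print(m.name, "says P(rain) =", m.predict("rain"))
```

The question above is then whether you pay for some reconciliation step over the overlaps, or simply live with the disagreement because each model earns its keep locally.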
If we’re going to be building a Seed AI, I really don’t think a good design would involve the AI reasoning using multiple, partially overlapping, possibly inconsistent models, especially since I’m not sure how the AI would go about updating those models if it made contradictory observations. For example, upon receiving contradictory evidence, which of its models would it update? One? Two? All of them? If you decide to work with ad hoc hypotheses that contradict not only reality, but each other, just because it’s useful to do so, the price you pay is throwing the entire idea of updating out the window.
If it’s uncertainty you’re concerned about, you don’t need to go to the trouble of having multiple models; good old Bayesian reasoning is designed to deal with uncertainties in reasoning, with no overlapping models required. Moreover, I have a difficult time believing that a sufficiently intelligent AI would face much of an issue with regard to processing speed or memory capacity; if anything, working with multiple models might actually take longer in some situations, e.g. when dealing with a scenario in which several different models could apply. In short, the “super, all-encompassing model” would seem to work just fine.
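To make that concrete, here is a minimal sketch of the kind of updating I have in mind (the toy hypotheses and numbers are mine): a single posterior over hypotheses absorbs evidence pointing both ways, so the question of which of several models to edit never comes up.

```python
# Minimal Bayesian-update sketch (toy hypotheses and numbers are mine).
# A single distribution over hypotheses absorbs evidence pointing both
# ways; there is no separate model to choose between when observations
# seem to conflict.

prior = {"fair_coin": 0.5, "biased_heads": 0.5}
likelihood = {                               # P(observation | hypothesis)
    "heads": {"fair_coin": 0.5, "biased_heads": 0.8},
    "tails": {"fair_coin": 0.5, "biased_heads": 0.2},
}

def update(belief, observation):
    """Bayes' rule: posterior is proportional to likelihood * prior."""
    unnormalized = {h: likelihood[observation][h] * p for h, p in belief.items()}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

belief = prior
for obs in ["heads", "heads", "tails", "heads"]:   # evidence pointing both ways
    belief = update(belief, obs)
    print(obs, {h: round(p, 3) for h, p in belief.items()})
```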
Bayesianism works well with known unknowns. But it doesn’t work any better than any other system with unknown unknowns. I would say that while Bayesian reasoning can deal well with risk, it’s not so great with uncertainty. That’s not to say uncertainty invalidates Bayesianism, only that Bayesianism is not so spectacularly strong that it can overwhelm such fundamental difficulties of epistemology.
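To make the known-unknowns point concrete, here is a toy sketch (the setup and numbers are mine): the data come from a 0.9-heads coin, but that hypothesis isn’t in the agent’s hypothesis space. The updating machinery works perfectly well; the posterior simply settles on the least-wrong hypothesis it knows about, which is the sense in which Bayes handles risk inside the space but not what lies outside it.

```python
import random

# Toy sketch (setup is mine): the true bias, 0.9, is an "unknown unknown";
# it is not in the agent's hypothesis space, so no amount of updating can
# recover it. The posterior just concentrates on the closest hypothesis
# the agent already has.

random.seed(0)
TRUE_BIAS = 0.9                                  # not among the hypotheses
belief = {0.3: 1 / 3, 0.5: 1 / 3, 0.7: 1 / 3}    # prior over known biases

def update(belief, flip):
    unnormalized = {
        bias: (bias if flip == "H" else 1 - bias) * p
        for bias, p in belief.items()
    }
    total = sum(unnormalized.values())
    return {bias: p / total for bias, p in unnormalized.items()}

for _ in range(200):
    flip = "H" if random.random() < TRUE_BIAS else "T"
    belief = update(belief, flip)

print({bias: round(p, 3) for bias, p in belief.items()})
# Essentially all the mass ends up on 0.7, the least-wrong hypothesis
# available, while predictions stay systematically off from 0.9.
```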
To my mind, using multiple models of reality is more or less essential. My reasons for thinking this are difficult to articulate because they’re mired in deep intuitions of mine I don’t understand very well, but an analogy might help somewhat.
Think of the universe’s workings as a large and enormously complicated jigsaw puzzle. At least for human beings, when trying to solve a jigsaw puzzle, focusing exclusively on the overall picture and how each individual puzzle piece integrates into it is an inefficient process. You’re better off thinking of the puzzle as several separate puzzles instead, and working with clusters of pieces.
By doing this, you’ll make mistakes—one of your clusters might actually be upside down or sideways, in a way that won’t be consistent with the overall picture’s orientation. However, this drawback can be countered as long as you don’t look at the puzzle exclusively in terms of the individual clustered pieces. A mixed view is best.
Maybe a sufficiently advanced AI would be able to most efficiently sort through the puzzle of the universe in a more rigid manner. But IMO, what evidence we currently have about intelligence suggests the opposite. AI that’s worthy of the name will probably heuristically optimize on multiple levels at once, as that capability’s one of the greatest strengths machine-learning has so far offered us.
Your points are valid. But the question remains whether a pure approach is efficient enough to work at all. Once it does, it could scale as it sees fit.