I’m thinking of being unable to reach a better solution to a problem because something you already know conflicts with arriving at that solution.
Say your data leads you to an inaccurate initial conclusion. Everybody agrees on this conclusion. Wouldn’t that conclusion be data for more inaccurate conclusions?
So I thought there would need to be some bias built into your reasoning so that occasionally you didn’t go with the inaccurate claim. That way, if some of the data is wrong, you still have rationalists who arrive at a more accurate map.
I tried to unpack it, and noticed that I seem to expect this “exact art” of rationality to be a system that can stand on its own, when it doesn’t. What I mean is that I seem to have assumed you could build some sort of AI on top of this system which would always arrive at an accurate perception of reality. But if that were the case, wouldn’t Eliezer already have done it?
I feel like I’m making mistakes and being foolish right now, so I’m going to stop writing and eagerly await your corrections.
I think even a perfect implementation of Bayes would not in and of itself be an AI. By itself, the math doesn’t have anything to work on, or any direction in which to work. Agency is hard to build, I think. As always, of course, I could be wrong.
Would a “perfect implementation of Bayes”, in the sense you meant here, be a Solomonoff inductor (or something similar, perhaps modified to work better with anthropic problems), or something perfect at following Bayesian probability theory but with no prior specified (or a less universal one)? If the former, you are in fact most of the way to an agent, or at least to some types of agents, e.g. AIXI.
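For concreteness, and only as I understand the standard formulation (so treat this as a sketch rather than a definition): a Solomonoff inductor predicts with the universal prior, which weights every program that could have produced the data seen so far by its length,

\[ M(x) \;=\; \sum_{p\,:\,U(p)=x*} 2^{-\ell(p)}, \]

where U is a universal monotone machine, the sum runs over programs p whose output begins with x, and \ell(p) is the length of p in bits. AIXI then takes that mixture and, at each step, picks the action with the highest expected total reward under it; that expectimax-over-actions layer is the part that turns the inductor into an agent.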
Well, I’m not personally capable of building AIs, and I’m not as deeply versed as I’m sure many people here are, but I see an implementation of Bayes’ theorem as a tool for finding truth, in the mind of a human or an AI or whatever sort of person you care to conceive of. The mind behind it is an agent with a quality we might call directedness, or intentionality, or simply an interest in going out and poking the universe with a stick where it doesn’t make sense. Bayes is in itself already math, easy to put into code, but we don’t understand internally directed behavior well enough to model it yet.
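For what it’s worth, here is roughly what I mean by “easy to put into code”: a minimal sketch of a single Bayesian update for one hypothesis H given evidence E. The function name and the numbers are mine, purely for illustration.

```python
def bayes_update(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    """Return P(H|E) from P(H), P(E|H) and P(E|~H) via Bayes' theorem."""
    p_e = likelihood_e_given_h * prior_h + likelihood_e_given_not_h * (1 - prior_h)
    return likelihood_e_given_h * prior_h / p_e

# Illustrative numbers only: a 10% prior, with the evidence 8x likelier under H.
print(bayes_update(prior_h=0.10, likelihood_e_given_h=0.80, likelihood_e_given_not_h=0.10))
# -> roughly 0.47
```

The hard part, as I said, isn’t this arithmetic; it’s the agent that decides which H to care about and which E to go looking for.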
There’s nothing in being a rationalist that prevents you from considering multiple hypotheses. One thing I’ve not seen elaborated on much on this site (but maybe I’ve just missed it) is that you don’t need to commit to one theory or another; the only time you’re forced to commit is when you need to make a choice in your actions. And then you only need to commit for that choice, not for the rest of your life. So a bunch of perfect rationalists who have observed exactly the same events/facts (which of course doesn’t happen in real life) would assign exactly the same probabilities to a bunch of theories. If new evidence came in, they would all switch to the newly favored hypothesis, because they were all already contemplating it but considering it less likely than the old one.
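To make that concrete, here is a toy sketch (the hypothesis names and numbers are made up): everyone keeps several hypotheses on the table with probabilities attached, and one new observation can change which hypothesis comes out on top without anyone having “committed” to anything.

```python
def posterior(priors, likelihoods):
    """Normalize prior * likelihood over a dict of competing hypotheses."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Every rationalist who has seen the same evidence holds the same distribution.
priors = {"old_theory": 0.6, "new_theory": 0.3, "other": 0.1}

# Made-up likelihoods for one new observation that favours the new theory.
likelihoods = {"old_theory": 0.1, "new_theory": 0.7, "other": 0.2}

print(posterior(priors, likelihoods))
# -> old_theory ~0.21, new_theory ~0.72, other ~0.07: the front-runner switches.
```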
The only thing preventing you from considering all possible hypotheses is lack of brain power. This limited resource should probably be divided among the possible theories in the same ratio as your confidence in them, so if you think theory A has a 50% probability of being right, theory B 49% and theory C 1%, you should spend 99% of your effort on theories A and B. But if the probabilities are 35%, 33% and 32%, you should spend almost a third of your resources on theory C. (That assumes the goal is just to find truth; if the theories have other utilities, those should be weighed in as well.)
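Here is that allocation rule as a toy snippet, using the numbers above (the function name is just mine):

```python
def effort_allocation(probabilities):
    """Split a budget of attention in proportion to each theory's probability."""
    total = sum(probabilities.values())
    return {theory: p / total for theory, p in probabilities.items()}

print(effort_allocation({"A": 0.50, "B": 0.49, "C": 0.01}))
# -> A and B together get 99% of the effort, C gets 1%.

print(effort_allocation({"A": 0.35, "B": 0.33, "C": 0.32}))
# -> C gets almost a third of the effort.
```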
“The only thing preventing you from considering all possible hypotheses is lack of brain power. This limited resource should probably be divided among the possible theories in the same ratio as your confidence in them”
Likelihood is one consideration when determining how much to investigate a possible hypothesis, but it isn’t the only one. Quite often the ratio of attention should be different from the ratio of credibility.