But how can someone be trustworthy without intelligence? Even if they want to do what’s best, they can’t be relied on to do what’s best if they can’t figure out what the best thing to do is. Generally speaking, the more intelligent someone is, the more predictable they are (there are a few exceptions, such as where a mixed strategy is optimal). The fact is that idiots can often screw things up more than selfish people. With a selfish person, all you have to worry about is “Is it in their interests to do what I want?” And if you’re not worrying about that to begin with, then perhaps you are being selfish.
Intelligence is clearly correlated with expected value, and it’s definitely better than nothing at all. Furthermore, smart people are better than stupid people at convincing you that they’re smart. But honest people are often worse than dishonest people at convincing you that they’re honest.
A lot of this seems extremely contrary to my intuitions.
Poor performance (for instance on tests) isn’t the result of having a high rate of random errors, but of exhibiting repeatable bugs. This means that people with worse performance will be more predictable, not less — in order to predict the better performer, you have to actually look at the universe more, whereas to predict the worse performer you only have to look at the agent’s move history.
(For that matter, we can expect this from Bayes: if you’re learning poorly from your environment, you’re not updating, which means you’re generating behaviors based more on your priors alone.)
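(A minimal numerical sketch of that Bayes point, in Python, with made-up prior and likelihood values: when the evidence is uninformative, the posterior just reproduces the prior, so whatever behavior follows from it is prior-driven.)

```python
# Minimal sketch of the Bayes point above (hypothetical numbers, not from the thread):
# with an uninformative likelihood the posterior just reproduces the prior,
# so behavior driven by the posterior is really driven by the prior.

def posterior(prior, likelihood):
    """Normalize prior[i] * likelihood[i] over the hypotheses."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

prior = [0.7, 0.3]                   # initial belief in hypotheses H1, H2
flat = posterior(prior, [0.5, 0.5])  # "not really learning": evidence favors neither
sharp = posterior(prior, [0.9, 0.1]) # informative evidence favoring H1

print(flat)   # [0.7, 0.3] -- unchanged, behavior comes from the prior
print(sharp)  # [0.954..., 0.045...] -- belief actually moves
```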
The fact is that idiots can often screw things up more than selfish people.
This seems to be a political tenet or tribal banner, not a self-evident fact.
(Worse, it borders on the “intelligence leads necessarily to goodness” meme, which is a serious threat to AI safety. A more intelligent agent is better equipped to achieve its goals, but is not necessarily better to have around to achieve your goals if those are not the same.)
By more predictable, I meant greater accuracy in predicting, not that less computing power is required to predict. Someone who performs well on tests is perfectly predictable: they always get the right answer. Someone with poor performance can’t be any more predictable than that, and is often less.
Just because the bug model has some value doesn’t mean that the error model has none. I would be surprised if a poorly performing student, given a test twice, were to give exactly the same wrong answers both times.

I don’t understand your claim that people with worse performance would be more predictable. Given that someone is a good performer, all you need to do is solve the problem yourself, and assuming you did it correctly, you now know how that person would answer. To predict the worse performer, the move history is woefully inadequate. Poor performance is deterministic like a dice throw is deterministic: you need to know what their bugs are, what the exact conditions are, and how they’re approaching the problem. Someone who is using math will correctly evaluate 5(2+8) regardless of whether they find 2+8 first and then multiply by 5, or find 5×2 and 5×8 and add them together. But someone who doesn’t understand math will likely not only get the wrong answer, but get a different wrong answer depending on how they do the problem. Or just give up and give a random number. Just knowing how they did the problem before doesn’t tell you how they will do that exact problem in the future, and it certainly doesn’t allow you to extrapolate how they will do on other problems.

If someone is doing math correctly, it doesn’t matter how they are implementing the math. But if they are doing it incorrectly, there are lots of different ways they can be doing it incorrectly, and given any particular problem, there are different wrong ways that get the same answer on that problem but different answers on different problems. So just knowing what they got on one problem doesn’t distinguish between different wrong implementations.
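(To put that last point in code: a quick, purely illustrative Python sketch with two hypothetical bug rules of my own invention. Two different wrong implementations can agree on one problem and still come apart on the next, which is why one observed answer doesn’t identify the bug.)

```python
# Hypothetical illustration of the point above: two different "bugs" for
# evaluating a*(b+c) can agree on one problem and still diverge on another,
# so one observed answer doesn't pin down which wrong rule someone is using.

def correct(a, b, c):
    return a * (b + c)          # 5*(2+8) = 50, however you group it

def bug_multiply_first_only(a, b, c):
    return a * b + c            # forgets to distribute over the second term

def bug_add_then_multiply_last(a, b, c):
    return a + b * c            # applies the operations in the wrong order

for args in [(2, 3, 2), (5, 2, 8)]:
    print(args, correct(*args),
          bug_multiply_first_only(*args),
          bug_add_then_multiply_last(*args))
# (2, 3, 2): correct 10, both bugs give 8   -> indistinguishable here
# (5, 2, 8): correct 50, bugs give 18 and 21 -> they come apart
```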
Learning poorly from your environment does not mean not updating; it means that you are updating poorly. Given the problem “d = rt, d = 20, r = 5”, if you tell a poor learner that the correct procedure is to divide 20 by 5 and get t = 4, then given the problem “d = rt, r = 6, t = 2”, they will likely divide 6 by 2 and get d = 3, rather than multiplying to get d = 12. They have observed that “divide the first number by the second one” is the correct procedure in one case, and incorrectly updated the prior on “always divide the first number by the second one”. To know what rule they’ve “learned”, you have to know what cases they’ve previously seen.
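(Here is a small Python sketch of that pattern, assuming the “divide the first number by the second” rule from the example: it matches the correct procedure on the first problem and silently fails on the second.)

```python
# Sketch of the overgeneralized rule described above (the specific rule is
# the one named in the comment, not a claim about any real student).

def correct_solve(d=None, r=None, t=None):
    """Solve d = r*t for whichever variable is missing."""
    if t is None:
        return d / r            # t = d / r
    if d is None:
        return r * t            # d = r * t
    return d / t                # r = d / t

def overgeneralized(first, second):
    """The 'learned' rule: always divide the first given number by the second."""
    return first / second

print(correct_solve(d=20, r=5), overgeneralized(20, 5))  # 4.0 4.0 -- rule looks right
print(correct_solve(r=6, t=2), overgeneralized(6, 2))    # 12 3.0  -- rule fails
```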
Good learners don’t learn rules by Bayesian updating. They don’t learn “if you’re given d and r, you get t by dividing” by mindlessly observing instances and updating every time it gives the right answer. They learn it by understanding it. To know what rule a good learner has learned, you just need to know the correct rule; you don’t need to know what cases they’ve seen.
That there are some cases where idiots can screw things up more than selfish people is rather self-evident. “Can” does not border on “necessarily will”. Intelligence doesn’t lead to goodness in the sense of more desire to do good, but it does generally lead to goodness in the sense of more good being done.
The whole point of an alliance is that you’re supposed to work together towards a common goal. If you’re trying to find stupid people so that you can have the upper hand in your dealings with them, that suggests that this isn’t really an “alliance”.