Suppose that humans had the ability to correctly intuit things in the presence of inadequate or misleading evidence. That ability would require that humans not follow first-order logic in drawing all of their conclusions. Therefore, if humans did not follow perfect logic, it would be (very weak) evidence that they have a superior ability to draw correct conclusions from inadequate or misleading evidence.
Humans do not always follow perfect logic.
I don’t have good evidence, but I haven’t searched the available space yet.
This is negligibly weak evidence, not even strong enough to raise the hypothesis to the level of conscious consideration. (Good evidence would be, e.g., humans actually being observed to deduce things better than the evidence available to them would seem to allow.)
Consider that there are much, much better reasons for humans not to follow logic perfectly. The stronger these are, the less evidence your approach generates, because the fact that humans are not logical does not require any additional explanation.
Logic is hard (and unlikely to be implemented perfectly when evolving from an existing complex brain). Logic is expensive (in time taken to think, calories used, maybe brain size, etc.). Existing human adaptations interfere with logic (e.g. the use of opinions as signalling, the difficulty of lying without coming to believe the lie, various biases). And some existing human adaptations, while less good than perfect logic would be, are good enough to make developing perfect logic a bad value proposition. There are others.
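To make the evidence-strength point concrete, here is a minimal sketch with entirely made-up numbers (nothing below is asserted in the discussion itself): because mundane explanations already make "humans are not perfectly logical" nearly certain, the likelihood ratio is close to 1 and observing it barely moves the posterior.

```python
# Hypothetical numbers only. H = "humans have a special ability to intuit
# correct answers from inadequate evidence"; E = "humans do not follow
# perfect logic". Cost, evolution, and biases already make E almost certain
# even without H, so E is negligible evidence for H.

prior_H = 0.01          # made-up prior for the intuition hypothesis
p_E_given_H = 1.00      # H requires departures from perfect logic
p_E_given_not_H = 0.99  # mundane reasons make E nearly certain anyway

likelihood_ratio = p_E_given_H / p_E_given_not_H
prior_odds = prior_H / (1 - prior_H)
posterior_odds = prior_odds * likelihood_ratio
posterior_H = posterior_odds / (1 + posterior_odds)

print(f"likelihood ratio: {likelihood_ratio:.3f}")  # ~1.01
print(f"posterior P(H):   {posterior_H:.4f}")       # barely above the 0.01 prior
```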
Ever known someone to jump to the correct conclusion? Ever tried to determine how likely it is, given that someone is jumping to a conclusion from the available evidence, that the conclusion they reach is correct?
Consider that several people have asserted, basically, that they have done the math, and that more people than expected do better than expected at reaching correct conclusions from inadequate information. I haven't gathered empirical data, so I neither support nor refute their empirical claim about the world; do your empirical data agree, or disagree?
In my personal experience I can’t think offhand of people who guessed a correct answer when a random guess, given available evidence, would have been very unlikely to be correct.
Sometimes people do guess correctly; far more often they guess wrong, and I expect the two to be balanced appropriately, but I haven’t done studies to check this.
Can you please point me to these people who have done the math?
I played the Calibration Game for a while, and I got more than 60% of the questions to which I had assigned "50%" right. It kind of freaked me out. (I put that down to having heard about one of the things in the question without consciously remembering it, and subconsciously picking it; and since the bigger something is, the more likely I am to have heard about it… or something like that.)
I'm at something like 53–55% in the 50% category. 60% seems high. Since I have some knowledge of the questions, I would expect to answer somewhat above 50% correctly.
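Whether 60% on the "50%" questions is actually surprising depends on how many such questions were answered, which neither comment states. A rough sketch, using hypothetical sample sizes, of how likely a true 50% guesser is to score at least 60% by luck:

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of k or more successes out of n fair-coin trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical sample sizes; the actual number of questions isn't given above.
for n in (20, 50, 200):
    k = int(round(0.6 * n))  # 60% correct
    print(f"n={n:3d}: P(>= 60% correct by luck) = {p_at_least(k, n):.3f}")
# n= 20: ~0.25, n= 50: ~0.10, n=200: ~0.003
# 60% is only notable if it was sustained over many questions.
```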
The human-superiorists claim that they have done the math; I haven't checked their work, because I find the question of whether humans can be better than machines can be irrelevant. The relevant question is whether a given human is better than a given machine, and the answer there is relatively easy to find and very hard to generalize.
So, can you point me to some specific claims made by these human-superiorists? I know of several, but not any that claim to back up their claims with data or evidence.
The best human Go players remain better than the best computer Go players. In a finite task which is solvable exclusively by logic, humans are superior. Until recently, that was true of Chess as well.
There seems to be a misunderstanding here. We all know of lots of tasks where machines currently perform much worse than humans do (and vice versa). What I thought we were discussing was humans who could arrive at correct answers without apparently having sufficient information to do so, and research which failed to turn up explanations based on the data available to these people.
What would the methodology of such research look like? One could easily claim that poker players vary in skill and luck, or one could claim that poker players vary in their ability to make correct guesses about the state of the table based on the finite information available to them. How well do you think a perfect machine would do in a large poker tournament?
What would the methodology of such research look like?
I don’t know, you’re the one who said you’ve seen people claiming they’ve done this research.
How well do you think a perfect machine would do in a large poker tournament?
Machines are not nearly good enough yet at recognizing facial and verbal cues to do as well as humans in poker. And poker requires relatively little memorization and calculation, so there humans do no worse than machines. So a machine (with a camera and microphone) would lose to the best human players right now.
OTOH, if a poker game is conducted over the network, and no information about the other players (like speech / video) is available, just the moves they make, then I would expect a well-written poker-playing machine to be better than almost all human players (who are vulnerable to biases) and no worse than the best human players.
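One concrete example of the arithmetic a bot applies consistently while tired or biased human players often do not; the chip counts and win probability below are hypothetical, not taken from the discussion:

```python
def call_ev(pot, to_call, win_probability):
    """Expected chips gained by calling, assuming no further betting.

    pot: chips already in the middle (including the bet you face)
    to_call: chips you must put in to see a showdown
    """
    return win_probability * pot - (1 - win_probability) * to_call

# Hypothetical spot: 80 chips in the pot, 20 to call, ~25% chance to win.
pot, to_call, p_win = 80, 20, 0.25
ev = call_ev(pot, to_call, p_win)
breakeven = to_call / (pot + to_call)  # equity needed for a call to break even
print(f"EV of calling: {ev:+.1f} chips (breakeven equity {breakeven:.0%})")
# -> +5.0 chips, breakeven 20%; with 25% equity the call is profitable.
# A machine applies this every hand; humans tend to chase losses or
# misjudge their equity, which is the bias advantage mentioned above.
```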
A brief search indicates that the issue was unresolved five years ago.
The best computer player on KGS is currently ranked 6 dan, having risen from 1 dan since 2009. I’d expect the best programs to beat the best amateur players within the next five years, and professionals in 10–15.
That long? How long until you believe Go is solved?
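For scale, a back-of-the-envelope sketch (not a claim made in the thread): even a crude upper bound on the number of Go board configurations shows why solving Go outright is a far stronger condition than beating the best humans.

```python
from math import log10

points = 19 * 19           # 361 intersections on a full-size board
upper_bound = 3 ** points  # each point empty, black, or white (ignores legality)

print(f"3^361 ~ 10^{log10(upper_bound):.0f}")  # ~10^172
# The observable universe is usually estimated to contain roughly 10^80 atoms,
# so exhaustively solving Go is far beyond brute force; beating humans only
# requires being better in practice, not exhausting the game tree.
```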