When there is no possibility of evidence against a proposition, a possibility of evidence in favour of the proposition would violate Bayes’ theorem.
I’m not sure how you could have such a situation, given that absence of expected evidence is evidence of absence. Do you have an example?
Well, the probabilities wouldn’t be literally zero. What I mean is that if there is no possibility of strong evidence against something, only the possibility of very weak evidence against it (via absence of evidence), then strong evidence in favour of it must be highly unlikely. Worse, any such evidence just gets lost among the more probable ‘evidence that looks strong but is not’.
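To put rough numbers on that (a minimal sketch; the observation model and the figures in it are mine, invented purely for illustration): if the only outcome that can count against the proposition is weak, the strongly confirming outcome has to be rare, because the posterior has to average back to the prior.

```python
# Conservation of expected evidence with made-up likelihoods: one strong
# confirming outcome, one weak disconfirming outcome ("absence of evidence").
# The posterior must average back to the prior, so the strong outcome is rare.

prior = 0.5

p_strong_h, p_strong_not_h = 0.10, 0.001   # observation strongly favouring H
p_weak_h,   p_weak_not_h   = 0.90, 0.999   # everything else: weak evidence against H

def posterior(p_e_h, p_e_not_h):
    return p_e_h * prior / (p_e_h * prior + p_e_not_h * (1 - prior))

p_strong = p_strong_h * prior + p_strong_not_h * (1 - prior)   # ~0.05
p_weak = 1 - p_strong                                          # ~0.95

print(posterior(p_strong_h, p_strong_not_h))   # ~0.99: strong boost, but rare
print(posterior(p_weak_h, p_weak_not_h))       # ~0.47: weak nudge down, common
print(p_strong * posterior(p_strong_h, p_strong_not_h)
      + p_weak * posterior(p_weak_h, p_weak_not_h))            # ~0.50 = prior
```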
Ah, I think I follow you.

Absence of evidence isn’t necessarily a weak kind of evidence.
If I tell you there’s a dragon sitting on my head, and you don’t see a dragon sitting on my head, then you can be fairly sure there’s not a dragon on my head.
On the other hand, if I tell you I’ve buried a coin somewhere in my magical 1cm-deep garden, and you dig a random hole and don’t find it, not finding the coin isn’t strong evidence that I’ve not buried one. However, there’s so much potential weak evidence against it. If you’ve dug up all but a 1cm square of my garden, the coin’s either in that square or I’m telling porkies, and what are the odds that, digging randomly, you wouldn’t have come across it by then? You can be fairly sure, even before digging up that last square, that I’m fibbing.
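To make the arithmetic concrete (the square count and the 0.5 prior below are made up for the sake of the example): each individual miss barely moves the posterior, but the misses compound.

```python
# A quick sketch of the garden (made-up parameters): N equally likely 1cm
# squares, a 0.5 prior that a coin was buried at all, and random digs that
# never find it.

N = 10_000      # hypothetical number of 1cm squares in the garden
prior = 0.5     # prior probability that a coin was buried at all

def p_coin_after_misses(k):
    """P(coin buried | k distinct squares dug up, none contained it)."""
    p_miss_if_coin = 1 - k / N    # the coin would have to be in a remaining square
    p_miss_if_no_coin = 1.0
    return (p_miss_if_coin * prior
            / (p_miss_if_coin * prior + p_miss_if_no_coin * (1 - prior)))

for k in (1, 100, 5_000, 9_999):
    print(k, round(p_coin_after_misses(k), 4))
# 1     0.5     one dig: essentially no update
# 100   0.4975
# 5000  0.3333
# 9999  0.0001  one square left: almost certainly no coin
```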
Was what you meant analogous to one of those scenarios?
Yes, like the latter scenario. Note that the expected utility of digging is low when any single dig can only provide weak evidence against.
edit: Also, in the former case, not seeing a dragon sitting on your head is very strong evidence against there being a dragon, unless you invoke an untestable invisible dragon which is transparent to x-rays, lets dust pass through it unaffected, and so on. In that case, I should have a very low likelihood of being convinced that there is a dragon on your head, if I know that the evidence against it could only be very weak.
edit2: Russell’s teapot in the Kuiper belt is a better example still. When there can be only very weak evidence against it, the probability of encountering or discovering strong evidence in favour of it must be low also, making it not worthwhile to try to come up with evidence that there is a teapot in the Kuiper belt (due to the low probability of success), even when the prior probability for the teapot is not very low.
Then, to extend the analogy: Imagine that digging has potentially negative utility as well as positive. I claim to have buried both a large number of nukes and a magical wand in the garden.
In order to motivate you to dig, you probably want some evidence of magical wands. In this context that would probably be recursively improving systems where, occasionally, local variations rapidly acquire super-dominance over their contemporaries when they reach some critical value. Evolution probably qualifies there—other bipedal frames with fingers aren’t particularly dominant over other creatures in the same way that we are, but at some point we got smart enough to make weapons (note that I’m not saying that was what intelligence was for though) and from then on, by comparison to all other macroscopic land-dwelling forms of life, we may as well have been god.
And since then that initial edge in dominance has only ever allowed us to become more dominant. Creatures afraid of wild animals are not able to create societies with guns and nuclear weapons—you’d never have the stability for long enough.
In order to motivate you not to dig, you probably want some evidence of nukes. In this context, that would be recursive systems (I’m not sure ‘improving’ is the right word here) with a feedback state that create large amounts of negative value. Well, to a certain extent that’s a matter of perspective: from the perspective of extinct species the ascendancy of humanity would probably not be anything to cheer about, if they were in a position to appreciate it. But I suspect it can at least stand on its own that failure cascades are easier to make than success cascades. One little thing goes wrong on your rocket and the problem multiplies; a small error in alignment rapidly becomes a bigger one; or the timer on your Patriot battery loses a fraction of a second and, over time, your estimate of where the missiles are is off significantly. It’s only with significant effort that we create systems where errors don’t multiply.
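The Patriot timer is a concrete case of exactly that multiplication. Here is the arithmetic with the commonly cited, approximate figures from the 1991 Dhahran failure (none of these numbers come from this discussion, so treat them as ballpark):

```python
# Commonly cited (approximate) figures: the clock counted tenths of a second,
# and converting the count to seconds in 24-bit fixed point chopped 1/10,
# losing roughly 9.5e-8 s on every tick.

error_per_tick_s = 0.000000095      # truncation error per 0.1 s tick
ticks_per_hour = 3600 * 10
hours_up = 100                      # the battery had reportedly run ~100 hours

drift_s = error_per_tick_s * ticks_per_hour * hours_up
scud_speed_m_s = 1676               # approximate closing speed of the missile

print(round(drift_s, 2), "seconds of accumulated drift")            # ~0.34 s
print(round(drift_s * scud_speed_m_s), "metres of tracking error")  # ~570 m
```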
(This is analogous to altering your expected value of information—like if earlier you’d said you didn’t want to dig and I’d said, ‘well there’s a million bucks there’ instead—you’d probably want some evidence that I had a million bucks, but given such evidence the information you’d gain from digging would be worth more.)
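(A rough sketch of that point, with invented payoffs and probabilities: the same dig flips from not worth making to worth making once evidence raises the probability of the payoff.)

```python
# Invented numbers, just to show the shape of the argument: a dig either finds
# the promised million, finds nothing, or sets off a buried nuke.

def expected_value_of_dig(p_million, p_nuke,
                          v_million=1_000_000, v_nuke=-50_000_000, dig_cost=100):
    return p_million * v_million + p_nuke * v_nuke - dig_cost

# Before any evidence, the million-bucks claim is a long shot:
print(expected_value_of_dig(p_million=0.001, p_nuke=0.0001))   # -4100.0

# After you show me a bank statement, the same dig is worth making:
print(expected_value_of_dig(p_million=0.2, p_nuke=0.0001))     # 194900.0
```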
This seems to be fairly closely analogous to Eliezer’s claims about AI, at least if I’ve understood them correctly: that we have to hit an extremely small target, and that it’s more likely we’re going to blow ourselves to itty-bitty pieces/cover the universe in paperclips if we’re just fooling around hoping to hit it by chance.
If you believe that such is the case, then the only people you’re going to want looking for that magic wand—if you let anyone do it at all—are specialists with particle detectors—indeed if your garden is in the middle of a city you’ll probably make it illegal for kids to play around anywhere near the potential bomb site.
Now, we may argue over quite how strongly we have to believe in the possible existence of magitech nukes to justify the cost of fencing off the garden—personally I think the statement:
if you take a thorough look at actually existing creatures, it’s not clear that smarter creatures have any tendency to increase their intelligence.
is to constrain what you’ll accept as potential evidence pretty dramatically; we’re talking about systems in general, not just individual people, and recursively improving systems with high asymptotes relative to their contemporaries have happened before.
It’s not clear to me that the second claim he makes is even particularly meaningful:
In the real-world, self-reinforcing processes eventually asymptote. So even if smarter creatures were able to repeatedly increase their own intelligence, we should expect the incremental increases to get smaller and smaller over time, not skyrocket to infinity.
Sure, I think that they probably won’t go to infinity—but I don’t see any reason to suspect that they won’t converge on a much higher value than our own native ability. Pretty much all of our systems do, from calculators to cars.
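As a toy model of why ‘it asymptotes’ doesn’t settle much (the growth curve and the ceiling below are mine, not anyone’s actual forecast): a self-reinforcing process with diminishing returns still plateaus, but the plateau can sit far above where it started.

```python
# Logistic-style self-improvement: each step's gain shrinks as capability
# approaches a ceiling, so increments get smaller and smaller; the limit is
# the ceiling, not the starting level.

start = 1.0       # baseline ("our own native ability", normalised to 1)
ceiling = 50.0    # hypothetical asymptote, well above the baseline
rate = 0.5

capability = start
for step in range(40):
    capability += rate * capability * (1 - capability / ceiling)

print(round(capability, 1))   # ~50.0: it converges, just not to ~1
```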
We can even argue over how you separate claims that something’s going to foom from false claims of such. (I’d suggest, initially, just seeing how many claims that something was going to foom have actually been made within the domain of technological artefacts; it may be that the baseline credibility is higher than we think.) But that’s a body of research that Caplan, as far as I’m aware, hasn’t put forward. It’s not clear to me that it’s a body of research with the same order of difficulty as creating an actual AI, either. And, in its absence, it’s not clear to me that answering, in effect, “I’ll believe it when I see the mushroom cloud” is a particularly rational response.
I was mostly referring to the general lack of interest within the scientific community in discussing unfalsifiable propositions. The issue is that unfalsifiable propositions are also the ones for which it is unlikely that, in the discussion, you will be presented with evidence in their favour.
The space of propositions is the garden I am speaking of. And digging up false propositions is not harmless.
With regard to that argument of yours, I think you vastly underestimate the size of the high-dimensional space of possible software, and how distant in this space the little islands of software that actually does something interesting are from each other: as distant as Boltzmann minds are within our universe (albeit, of course, depending on the basis, possible software is better clustered).
Those spatial analogies are a great fallacy generator, a machine for getting quantities off by mind-bogglingly huge factors. In your mental image, someone creates those nukes and puts them in the sand for the hapless individuals to find. In reality, that’s not how you find a nuke. You venture into this enormous space of possible designs, as vast as the distance from here to the closest exact replica of The Gadget that spontaneously formed out of a supernova by the random movement of uranium atoms. When you have to look in a space this big, you don’t find that replica of The Gadget without knowing quite well what you’re looking for.
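To gesture at how quickly those islands recede (the dimension counts and the ‘right 10% of its range’ band are invented for illustration): if a working design has to get each of d independent choices roughly right, blind search hits it with probability that shrinks exponentially in d.

```python
# Invented toy model: a design "works" only if each of d independent design
# choices lands in the right 10% of its range, so blind search succeeds with
# probability 0.1**d per try.

p_per_choice = 0.1

for d in (3, 10, 50, 200):
    p_hit = p_per_choice ** d
    print(f"d={d:>3}: P(random design works) = {p_hit:.0e}")

# d=  3: 1e-03    plausible to stumble on
# d= 10: 1e-10
# d= 50: 1e-50
# d=200: 1e-200   hopeless without knowing what you're looking for
```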
With regard to listing biases to help arguments: given that one could always handwave up a fairly plausible bias working in the direction of any specific argument, the direct evidential value of listing biases in this manner is zero (or an epsilon). You could just as well have argued that individuals who are not afraid of cave bears get killed by cave bears; there’s too much “give” in your argument for it to have any evidential value. I can freely ignore it without having to bother to come up with a balancing bias (as people like Caplan rightfully do, without really bothering to outline why).