It is still flawed logic if the new data requires you to go substantially outside your estimated range rather than narrow down your uncertainty. (edit: and especially so if you don’t even have a range of some kind).
E.g. we had an unproductive argument about whether a random-ish AGI ‘almost certainly’ just eats everyone. It’s not that I have some data showing it almost certainly won’t eat everyone; it’s that you shouldn’t have that sort of certainty about such a topic. It’s fine if your estimated probability distribution centres there; it’s not fine if it is ultra-narrow.
It is still flawed logic if the new data requires you to go substantially outside your estimated range rather than narrow down your uncertainty.
I gave nothing to indicate that this was the case. While the grandparent is more self-deprecating, on behalf of both myself and the species, than it is boastful, your additional criticism doesn’t build upon it. The flaw you perceive in me (along the lines of disagreeing with you) is a different issue.
(edit: and especially so if you don’t even have a range of some kind).
I have a probability distribution, not a range. Usually not a terribly well-specified probability distribution, but that is the ideal to be approximated.
E.g. we had an unproductive argument about whether a random-ish AGI ‘almost certainly’ just eats everyone. It’s not that I have some data showing it almost certainly won’t eat everyone; it’s that you shouldn’t have that sort of certainty about such a topic. It’s fine if your estimated probability distribution centres there; it’s not fine if it is ultra-narrow.
No. Our disagreement was not one of me assigning too much certainty. The ‘almost certainly’ was introduced by you, applied to something that I state has well under an even chance of happening. (Specifically, the probability of humans developing a worse-than-just-killing-us uFAI in the near vicinity of an FAI.)
You should also note that there is a world of difference between near-certainty about what kind of AI will be selected and an unspecified level of certainty that the overwhelming majority of AGI goal systems would result in them killing us. The difference is akin to having 80% confidence that 99.9999% of the balls in the jar are red. Don’t equate that with 99.9999% confidence. They represent entirely different assigned probability distributions.
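The distinction can be made concrete with a toy calculation. The 99.9999% figure is from the analogy above; the mixture used to represent the “uncertain” state is purely illustrative:

```python
# Two epistemic states about a jar of balls.
# State A: 80% confidence that 99.9999% of balls are red, with the
#          remaining 20% of belief placed (illustratively) on a jar
#          that is only half red.
# State B: flat 99.9999% confidence that a drawn ball is red.

# State A: a mixture over possible red fractions -- (weight, red fraction)
state_a = [(0.8, 0.999999), (0.2, 0.5)]
p_nonred_a = sum(w * (1 - f) for w, f in state_a)

# State B: near-certainty in the fraction itself
p_nonred_b = 1 - 0.999999

print(p_nonred_a)  # ~0.1 -- dominated by the uncertain 20% of belief
print(p_nonred_b)  # ~0.000001
```

Under State A the marginal probability of drawing a non-red ball is about 0.1, five orders of magnitude larger than under State B, even though both states “centre” on the same red fraction.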
In general, meta-criticisms of my reasoning that are founded on a specific personal disagreement with me should not be expected to be persuasive. Given that you already know I reject the premise (that you were right and I was wrong in some past dispute), why would you expect me to be persuaded by conclusions that rely on that premise?
Nah, my argument was “Well, the crux of the issue is that the random AIs may be more likely to leave us alone than near-misses at FAI.” By the ‘may’, I meant that there is a notable probability that far less than 99.9999% of the balls in the jar are red, and consequently a far greater than 0.0001% probability of drawing a non-red ball.
edit: Furthermore, suppose we have a jar with 100 balls in which we know there is at least one blue ball (near-FAI space), and a huge jar with 100000 balls, about which quite a lot is unknown, and which has a substantial probability of having a larger fraction of non-red balls than the former jar.
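The two-jar setup can be sketched the same way. The jar sizes and the lower bound come from the analogy above; the weights and fractions used to model ignorance about the huge jar are illustrative assumptions, not figures from the discussion:

```python
# Small jar (near-FAI space): 100 balls, known to contain at least one
# blue ball, so its non-red fraction is at least 1/100.
small_nonred_lower_bound = 1 / 100

# Huge jar (random AGI space): 100000 balls, composition mostly unknown.
# Model that ignorance as a mixture over possible non-red fractions,
# with substantial weight on fractions exceeding the small jar's bound
# -- (weight, non-red fraction), both hypothetical.
large_jar = [(0.6, 0.001), (0.4, 0.1)]

# Probability the huge jar's non-red fraction exceeds the small jar's
# known lower bound, and its expected non-red fraction overall.
p_large_exceeds_small = sum(w for w, f in large_jar
                            if f > small_nonred_lower_bound)
expected_nonred_large = sum(w * f for w, f in large_jar)

print(p_large_exceeds_small)  # 0.4 under these assumed weights
print(expected_nonred_large)  # ~0.04 under these assumed weights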
edit: also, re probability distributions, that’s why i said a “range of some sort”. Humans don’t seem to quite do the convolutions and the like on probability distributions when thinking.
It is still flawed logic if the new data requires you to go substantially outside your estimated range rather than narrow down your uncertainty. (edit: and especially so if you don’t even have a range of some kind).
E.g. we had unproductive argument about whenever random-ish AGI ‘almost certainly’ just eats everyone, it’s not that I have some data that it is almost certain it is not eating everyone, it’s that you shouldn’t have this sort of certainty about such a topic. It’s fine if your estimated probability distribution centres there, it’s not fine if it is ultra narrow.
I gave nothing to indicate that this was the case. While the grandparent is more self deprecating on the behalf of both myself and the species than it is boastful your additional criticism doesn’t build upon it. The flaw you perceive in me (along the lines of disagreeing with you) is a different issue.
I have a probability distribution, not a range. Usually not a terribly well specified probability distribution but that is the ideal to be approximated.
No. Our disagreement was not one of me assigning too much certainty. The ‘almost certainly’ was introduced by you, applied to something that I state has well under even chance of happening. (Specifically, regarding the probability of humans developing a worse-than-just-killing-us uFAI in the near vicinity to an FAI.)
You should also note that there is a world of difference between near certainty about what kind of AI will be selected and an unspecified level of certainty that the overwhelming majority of AGI goal systems would result in them killing us. The difference is akin to having 80% confidence that 99.9999% of balls in the jar are red. Don’t equivocate that with 99.9999% confidence. They represent entirely different indicators of assigned probability distributions.
In general meta-criticisms of my own reasoning that are founded on a specific personal disagreement with me should not be expected to be persuasive. Given that you already know I reject the premise (that you were right and I was wrong in some past dispute) why would you expect me to be persuaded by conclusions that rely on that premise?
Nah, my argument was “Well, the crux of the issue is that the random AIs may be more likely to leave us alone than near-misses at FAI.” , by the ‘may’, I meant, there is a notable probability that far less than 99.9999% of the balls in the jar are red, and consequently, far greater than 0.0001% probability of drawing a non-red ball.
edit: Furthermore, suppose we have a jar with 100 balls in which we know there is at least one blue ball (near-FAI space), and a huge jar with 100000 balls, about which we don’t know quite a lot, and which has substantial probability of having a larger fraction of non-red balls than the former jar.
edit: also, re probability distributions, that’s why i said a “range of some sort”. Humans don’t seem to quite do the convolutions and the like on probability distributions when thinking.