I agree with some of what you wrote, although I’m not sure why you wrote it. Anyway, I was giving an argumentative inverse of what Yudkowsky asserted, thereby echoing his own rhetoric. Someone claimed A, and in return Yudkowsky claimed that A is a bare assertion, therefore ¬A; whereupon I claimed that ¬A is a bare assertion, therefore the truth-value of A is again ~unknown. This could of course have been inferred from Yudkowsky’s statement alone, if interpreted as a predictive inverse (antiprediction), were it not for the last sentence, which states, “[...] and perhaps there are yet more things like that, such as, oh, say, self-modification of code.” [1] Perhaps yes, perhaps not. Given that his comment had already scored 16 when I replied, I believed that a single sentence highlighting that it offered no convincing evidence for or against A would be justified. Here we may disagree, but note that my comment included more information than that particular sentence alone.
Self-modification of code does not necessarily amount to a level of abstract reasoning as far above ours as ours is above a chimp’s, and might very well be infeasible, as it demands self-knowledge requiring resources that exceed those of any given intelligence. This would agree with the line of argumentation in the original post, namely that the next step (e.g. an improved AGI created by the existing AGI) will require a doubling of resources. Thus we are on par again, with two different predictions canceling each other out.
You should keep track of whose beliefs you are talking about, as it’s not always useful or possible to work with the actual truth of informal statements when you analyze the correctness of a debate. A person holding a wrong belief for wrong reasons can still be correct in rejecting an incorrect argument against that belief.
If A believes X, then (NOT X) is a “bare assertion”, not enough to justify A changing their belief. For B, who believes (NOT X), the statement “X” is likewise a bare assertion, not enough to justify changing that belief. There is no inferential link between refuted assertions and beliefs that were held all along. A believes X not because “(NOT X) is a bare assertion”, even though A believes both that “(NOT X) is a bare assertion” (correctly) and X (of unknown truth).
There is no inferential link between refuted assertions and beliefs that were held all along.
That is true. Yet for a third party, one who is unaware of any additional substantiation not featured in the debate itself, a prediction and its antiprediction cancel each other out. As a result, no conclusion can be drawn by an uninformed bystander. This is what I tried to highlight, without having to side with either party.
Yet for a third party, one who is unaware of any additional substantiation not featured in the debate itself, a prediction and its antiprediction cancel each other out. As a result, no conclusion can be drawn by an uninformed bystander.
They don’t cancel each other out; they both lack convincing power and are equally irrelevant. It’s an error to state as arguments what you know your audience won’t agree with (won’t change their mind in response to). At the same time, explicitly rejecting an argument that failed to convince is entirely correct.
They don’t cancel each other out; they both lack convincing power and are equally irrelevant.
Let’s assume that you are contemplating the possibility of an outcome Z. Now you come across a discussion between agent A and agent B about the prediction that Z is true. If agent B puts forward argument X in favor of Z being true, and you find X unconvincing, this still gives you new information about agent B and about the likelihood of Z being true. You might now conclude that Z is slightly more likely to be true, because of the additional information in favor of Z and the confidence agent B needed in order to proclaim that Z is true. Agent A, however, puts forward argument Y in favor of Z being false, and you find Y just as unconvincing as argument X in favor of Z being true. You might now conclude again that the truth-value of Z is ~unknown, as each argument, together with the confidence of its proponent, ~balances the other out.
Therefore no information is irrelevant if it is the only information available about the outcome in question. Your own judgement might weigh less than the confidence of an agent that may have unknown additional substantiation in favor of its argument. If you are unable to judge the truth-value of an exclusive disjunction, then the fact that a given argument about it does not compel you tells more about you than about the agent who proclaims it.
Any argument has to be taken into account on its own, if only because of its logical consequences. Every argument should be incorporated into your probability estimates, because the very fact that it is proclaimed at all signals a certain confidence on the part of the agent uttering it. Yet if there exists a counterargument that is the inverse of the original argument, you’ll have to take that into account as well, and this counterargument might very well outweigh the original argument. Therefore no argument entirely lacks the power to convince, however small that power may be; arguments can, however, outweigh and trump each other.
ETA: Fixed the logic, thanks Vladimir_Nesov.
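As a minimal sketch of this weighing of arguments, assuming each argument can be modeled as a likelihood ratio applied to the odds of Z (the helper name and the specific ratios below are illustrative placeholders, not anything taken from the discussion):

```python
import math

def update_log_odds(log_odds, likelihood_ratio):
    """Shift the log-odds of Z by the log of an argument's likelihood ratio."""
    return log_odds + math.log(likelihood_ratio)

# Start with the truth-value of Z ~unknown: P(Z) = 0.5, i.e. log-odds 0.
log_odds_z = 0.0

# Argument X (agent B, for Z): weakly convincing, likelihood ratio just above 1.
log_odds_z = update_log_odds(log_odds_z, 1.2)

# Argument Y (agent A, against Z): judged equally unconvincing,
# so it gets the inverse likelihood ratio.
log_odds_z = update_log_odds(log_odds_z, 1 / 1.2)

p_z = 1 / (1 + math.exp(-log_odds_z))
print(p_z)  # ~0.5 again: two equally weak, opposed arguments leave Z ~unknown
```

Under this toy model, a single argument always moves the estimate a little, but an opposed argument of equal (judged) strength moves it straight back, which is the sense in which the two balance, or outweigh, each other.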
Z XOR ¬Z is always TRUE.
(I know what you mean, but it looks funny.)
Fixed it now (I hope), thanks.
I think it has become more confused now. With C and D unrelated, why do you care about (C XOR D)? For the same reason, you can’t now expect evidence for C to always be counter-evidence for D.
Thanks for your patience and feedback; I updated it again. I hope it is now somewhat clearer what I’m trying to state.
Whoops, I’m just learning the basics (getting some practice here). I took NOT Z to be an independent proposition. I guess there is no simple way to express this unless you assign the negation of Z its own variable, in case you want it to be an independent proposition?
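To restate the point about negation versus an independent proposition in standard notation (a brief summary of the exchange above, nothing beyond it):

\[
Z \oplus \neg Z \equiv \top, \qquad
P(\neg Z) = 1 - P(Z), \qquad
P(D \mid C) = P(D) \ \text{for independent } C, D.
\]

The first identity is a tautology and so carries no information about Z; the second is what makes evidence for Z automatically count against its negation; the third is why, for two unrelated propositions C and D, evidence for C need not count against D.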
If agent B puts forward argument X in favor of Z being true, and you find X unconvincing, this still gives you new information about agent B and about the likelihood of Z being true. You might now conclude that Z is slightly more likely to be true, because of the additional information in favor of Z [...]

B believes that X argues for Z, but you might well believe that X argues against Z. (You are considering a model of a public debate, while this comment was more about principles for an argument between two people.)
Also, it’s strange that you are contemplating levels of belief in Z, while A and B assert it to be purely true or false. How overconfident of them.
(Haven’t yet got around to a complete reply rectifying the model, but will do eventually.)