What I’m saying is that it doesn’t matter, and therefore doesn’t obviously make sense to ask, what language the world is written in. Even if the world actually runs on paraconsistent logic, we can still do fine with classical logic. It will alter our prior somewhat, but not a whole lot, because the encoding of one into the other isn’t so hard.
Eliezer is attempting to make some comparison between the structure of the actual world and the structure of the logic, to determine which logic most seems to be what the world is written in. But the existence of encodings of one logic in another makes this exercise somewhat unnecessary (and also means that the structure of the world is only weak evidence for “what logic it was written in”).
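To make “encoding” concrete, here is a minimal sketch of one standard example, the Gödel–Gentzen double-negation translation, which embeds classical propositional logic into intuitionistic logic. I’m only using it as an illustration of what an encoding of one logic in another can look like; the tuple representation and function name are my own.

```python
# Godel-Gentzen double-negation translation: classical logic proves A
# exactly when intuitionistic logic proves translate(A).  Formulas are
# nested tuples: ('atom','p'), ('not',A), ('and',A,B), ('or',A,B),
# ('implies',A,B).

def translate(f):
    tag = f[0]
    if tag == 'atom':
        return ('not', ('not', f))                         # p  ~>  ~~p
    if tag == 'not':
        return ('not', translate(f[1]))
    if tag == 'and':
        return ('and', translate(f[1]), translate(f[2]))
    if tag == 'implies':
        return ('implies', translate(f[1]), translate(f[2]))
    if tag == 'or':                                        # the interesting case:
        return ('not', ('and', ('not', translate(f[1])),   # A v B ~> ~(~A* & ~B*)
                               ('not', translate(f[2]))))
    raise ValueError('unknown connective: %s' % tag)

# Excluded middle is not an intuitionistic theorem, but its translation is
# intuitionistically provable, so nothing classically sayable is lost --
# it just gets said more indirectly.
print(translate(('or', ('atom', 'p'), ('not', ('atom', 'p')))))
```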
To answer your question: in addition, because having strong negation alongside weak negation only increases the expressive power of the system. It does not increase the risk you mention, because the system is still choosing what to believe according to its learning algorithm, and so will not choose to believe strong negatives that cause problems.
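As a toy illustration of the expressive-power point (a three-valued rendering, which may not be exactly the pair of negations at issue here, but shows why a second negation need not be redundant):

```python
# Three truth values: True, False, and None for 'unknown'.
# Strong negation flips True/False and leaves 'unknown' alone;
# weak negation only says "not established as true".

def strong_neg(v):
    return None if v is None else (not v)

def weak_neg(v):
    return v is not True          # true when v is False *or* unknown

for v in (True, None, False):
    print(v, '-> strong:', strong_neg(v), ' weak:', weak_neg(v))

# They differ exactly on 'unknown', so a system with both can state
# "definitely false" and "merely not known to be true" as distinct claims.
```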
EDIT:
In response to srn347, I would like to clarify that I do not intend to belittle the importance of comparing the effectiveness of different logics. The existence of an encoding will only get you specific things (depending on the nature of the encoding), and furthermore, encodings do not always exist. So, specifically, I’m only saying that I don’t see a need for paraconsistent logic. Other possibilities need to be judged on their merits. However, looking for encodings is an important tool in that judgement.
it doesn’t matter, and therefore doesn’t obviously make sense to ask [...]
‘Doesn’t matter’ in what sense? ‘Doesn’t make sense’ in what sense?
what language the world is written in.
Be careful not to confuse the literal question ‘Is our world actually a simulation programmed in part using second-order logic?’ with the more generic ‘Is our world structured in a second-order-logic-ish way?’. The latter, I believe, is what Eliezer was concerned with—he wasn’t assuming we were in a simulation; he was asking whether our universe’s laws or regularities are property-generalizing in a second-order-style way, or schematic in a first-order-style way, the importance being that schematic laws are more likely to vary across spacetime. Similarly, the important question of whether there are dialetheias should not be confused with the more occult question of whether our universe is a simulation written using paraconsistent logics (which, of course, need not involve any true contradictions).
Even if the world actually runs on paraconsistent logic, we can still do fine with classical logic. It will alter our prior somewhat, but not a whole lot, because the encoding of one into the other isn’t so hard.
This is confusing a number of different issues. Paraconsistent logic need not be dialetheist, and not all dialetheists favor paraconsistent logic (though, for obvious reasons, nearly all of them do). You’re reconfusing the issues I disentangled in my earlier posts, while (if I’m not misunderstanding you) trying to make a scrambled version of my earlier point that our choice of logic to have the AGI reason with is partly independent of our choice of logic to have the AGI think the universe structurally resembles.
If you think it doesn’t make sense to ask what logics our universe structurally resembles, remember that the structural similarity between arithmetic operations and physical processes is what makes mathematics useful in the first place, and that we should not be surprised to see similarities between the simplest systems we can come up with, and a universe with remarkably simple and uniform behavior, at the fundamental level.
It does not increase the risk you mention, because the system is still choosing what to believe according to its learning algorithm, and so will not choose to believe strong negatives that cause problems.
That’s assuming it never makes a mistake early on. If it does, and it’s not a paraconsistent reasoner, its epistemology will explode. You underestimate the virtues of paraconsistent logics for complex reasoning systems; they’re very valuable for epistemic self-debugging, as opposed to shutting the whole thing down if even the smallest mistake occurs. And, again, none of this requires the slightest commitment to dialetheias being possible.
That’s assuming it never makes a mistake early on. If it does, and it’s not a paraconsistent reasoner, its epistemology will explode.
I’m imagining a better-designed system than that. Incoming facts together with internal reasoning would constantly be invalidating various theories; the system should not have a bunch of commitments which explode on it, but rather should always be seeking out contradictions in order to invalidate theories.
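Something like the following toy sketch is what I have in mind (the theory names and predictions are made up): a contradiction between observation and prediction eliminates the offending theory, rather than licensing arbitrary conclusions.

```python
# Each candidate theory predicts some observables; an observation that
# contradicts a prediction simply removes that theory from the pool.

theories = {
    'T1': {'sky_is_blue': True,  'grass_is_red': True},
    'T2': {'sky_is_blue': True,  'grass_is_red': False},
    'T3': {'sky_is_blue': False, 'grass_is_red': False},
}

def observe(fact, value):
    for name, predictions in list(theories.items()):
        if fact in predictions and predictions[fact] != value:
            del theories[name]    # contradiction invalidates the theory;
                                  # it does not spread through the system

observe('grass_is_red', False)
observe('sky_is_blue', True)
print(sorted(theories))           # ['T2']
```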
You underestimate the virtues of paraconsistent logics for complex reasoning systems; [...] none of this requires the slightest commitment to dialetheias being possible.
Perhaps so. I don’t actually understand what it would mean to reason with a paraconsistent logic while still believing a classical logic. Is something like that possible? Or (as it seems) would reasoning via a paraconsistent logic have to reduce the number of tautologies I recognize? I’ve sort of ignored paraconsistent thought merely because I’m not interested in dialetheism...
You’re reconfusing the issues I disentangled in my earlier posts, while (if I’m not misunderstanding you) trying to make a scrambled version of my earlier point that our choice of logic to have the AGI reason with is partly independent of our choice of logic to have the AGI think the universe structurally resembles.
Yes, re-reading your original post, I can see that I missed that point (and was partially re-stating it). The actual point I’m trying to make is basically to respond to this part:
but try as much as possible to keep the AGI from assuming that the axioms and inference rules with which it (initially?) thinks must be the same as the ones that best characterize ultimate reality. Instead, whichever language feels more ‘natural’ to the AGI, we want it to be able to do the same sort of inner-dialogue reasoning that Eliezer himself is doing in this series of vignettes, [...]
by saying that we get much of that automatically. Really, what I should have said is that we can achieve this by choosing as expressive a logic as possible, to ensure that as many other logics as possible have good embeddings in that logic.
This could alternatively be stated as an expression of confusion about why you would want to address this concern separately from the choice of logic, since if we attempted to implement the aforementioned inner-dialog reasoning, we would surely have to provide a logic for the reasoning to take place in.
I don’t actually understand what it would mean to reason with a paraconsistent logic while still believing a classical logic. Is something like that possible?
Not only is it possible, but probably over 99% of people who employ paraconsistent logics believe in classical-logic metaphysics, or at least something a lot closer to classical-logic metaphysics than to dialetheism. Paraconsistent logic is just reasoning in such a way that when you arrive at a contradiction in your belief system, you try to diagnose and discharge (or otherwise quarantine) the contradiction, rather than just concluding that anything follows. Dialetheism is one way (or family of ways) to quarantine the contradiction so that it doesn’t yield explosion, but it isn’t the standard one. Paraconsistent reasoning is a proof methodology, not a metaphysical stance in its own right.
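In toy form, the methodology looks something like this (the belief-base framing and names are mine; it is not a formal paraconsistent calculus, just the ‘quarantine, don’t explode’ shape):

```python
class BeliefBase:
    """Contradictory pairs are set aside for diagnosis instead of being
    used to derive arbitrary conclusions."""

    def __init__(self):
        self.active = set()        # beliefs currently usable in inference
        self.quarantined = set()   # contradictory pairs awaiting diagnosis

    def add(self, literal):
        negation = literal[1:] if literal.startswith('~') else '~' + literal
        if negation in self.active:
            self.active.discard(negation)
            self.quarantined.update({literal, negation})
        else:
            self.active.add(literal)

beliefs = BeliefBase()
beliefs.add('raining')
beliefs.add('~raining')            # the clash is isolated, not explosive
beliefs.add('streets_wet')
print(beliefs.active)              # {'streets_wet'}
print(beliefs.quarantined)         # {'raining', '~raining'}
```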
what I should have said is that we can achieve this by choosing as expressive a logic as possible, to ensure that as many other logics as possible have good embeddings in that logic.
Within reason. We may not want the AGI to be so expressive that it can express things we think are categorically meaningless and/or useless. And the power to express some meaningful circumstances might come with costs that outweigh the expected utility. I suppose one way to go about this is to pick the optimal level of expressivity given the circumstances we expect the AGI to run into, but try to make the AGI able to self-modify (or generate nonclassical subsystems with which it can carry on a reasonable, open-minded dialogue) to increase expressivity if it runs into situations that seem especially anomalous given its conception of what a circumstance or fact is.
The basic problem is: How does one assign a probability to there being true propositions that are intrinsically ineffable (for non-complexity-related reasons)? A good starting place is to imagine a being that had far less logical expressivity than we do (e.g., someone who could say ‘p’ / ‘true’ or (as a primitive concept) ‘unknown’ / ‘unproven’ but could not say ‘not-p’ / ‘false’), and reason by analogy from this base case.
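One way to make the base case concrete (this framing is my own) is to count states of information: over two atoms, an agent that can only assert or stay silent distinguishes fewer states of the world than one that can also deny.

```python
from itertools import combinations, product

atoms = ['p', 'q']
worlds = list(product([True, False], repeat=2))        # 4 possible worlds

def state(claims):
    """Worlds compatible with a set of literal claims like {'p', '~q'}."""
    return frozenset(
        w for w in worlds
        if all(dict(zip(atoms, w))[c.lstrip('~')] == (not c.startswith('~'))
               for c in claims))

def expressible(literals):
    """Every information state reachable by asserting a subset of literals."""
    return {state(set(c)) for n in range(len(literals) + 1)
            for c in combinations(literals, n)}

print(len(expressible(['p', 'q'])))                    # 4: assertions only
print(len(expressible(['p', 'q', '~p', '~q'])))        # 10: with denial too
```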
if we attempted to implement the aforementioned inner-dialog reasoning, we would surely have to provide a logic for the reasoning to take place in.
Ideally, if two subsystems of an AGI are designed to reason with different logics, and we want the two to argue over the best interpretation of a problem case, we would settle the dispute by some rule like ‘Try to prove to the satisfaction of your opponent that their view leads to too many circumstances we both agree are problems. Avoid question-begging arguments, i.e., ones that assume that your logic is the right one, when that is precisely what is under dispute; seek arguments that both of you can agree are valid, or even arguments that you think are invalid but that you think problematize your opponent’s position.’ Of course, if we need a decision procedure in cases where the AGI arrives at a stalemate, we may need to assume that a certain side ‘wins’ if there’s a draw.
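A rough toy rendering of that rule (the Side class and its interface are invented for illustration): each side may only score with inference steps the opponent grants, question-begging steps simply do not count, and a designated side wins on a stalemate.

```python
class Side:
    def __init__(self, name, granted_steps, case_against_other):
        self.name = name
        self.granted = set(granted_steps)   # inference steps this side grants
        self.case = case_against_other      # steps it offers against the other

    def accepts(self, step):
        return step in self.granted

    def argument_against(self, other):
        return self.case

def settle_dispute(side_a, side_b, max_rounds=10, default_winner='A'):
    for _ in range(max_rounds):
        for attacker, defender in ((side_a, side_b), (side_b, side_a)):
            argument = attacker.argument_against(defender)
            # Steps the defender does not grant are question-begging here
            # and are not counted.
            if argument and all(defender.accepts(s) for s in argument):
                return attacker.name        # defender concedes a problem case
    return default_winner                   # stalemate: designated side wins

a = Side('A', ['modus_ponens', 'and_elim'], ['modus_ponens'])
b = Side('B', ['modus_ponens'], ['double_negation_elim'])   # A won't grant this
print(settle_dispute(a, b))                 # 'A'
```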
I think you are overestimating the difficulty of the mathematical problem here! To quote JoshuaFox:
(Two impossible things before breakfast … and maybe a few more? Eliezer seems to be rebuilding logic, set theory, ontology, epistemology, axiology, decision theory, and more, mostly from scratch. That’s a lot of impossibles.)
But once those problems are solved, we do not need to additionally solve the problem you are highlighting, I think...
‘Try to prove to the satisfaction of your opponent that their view leads to too many circumstances we both agree are problems. Avoid question-begging arguments, i.e., ones that assume that your logic is the right one, when that is precisely what is under dispute; seek arguments that both of you can agree are valid, or even arguments that you think are invalid but that you think problematize your opponent’s position.’
When it comes down to it, wouldn’t this be just like some logic that is the common subset of the two, or perhaps some kind of average (between the probability distributions on observations induced by each logic)? Again, I think this will be handled well enough (handled better, to be precise) by a more powerful logic which can express each of the two narrower logics as a hypothesis about the structure in which the environment is best defined. Then the honest argument you describe will be a result of the honest attempt of the agent to estimate probabilities and find plans of action which yield utility regardless of the status of the unknowns.
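In toy form (the hypothesis names, probabilities, and utilities below are made up), ‘each narrower logic as a hypothesis’ just means: each hypothesis induces a distribution over observations, the agent keeps a posterior over the hypotheses, and plans are scored by expected utility under that mixture.

```python
# P(observation = 'anomaly') under each structure-hypothesis.
likelihood_of_anomaly = {'classical_structure': 0.05,
                         'nonclassical_structure': 0.40}
posterior = {'classical_structure': 0.9, 'nonclassical_structure': 0.1}

def update(observed_anomaly):
    """Bayes update of the posterior over structure-hypotheses."""
    for h, p in likelihood_of_anomaly.items():
        posterior[h] *= p if observed_anomaly else 1 - p
    total = sum(posterior.values())
    for h in posterior:
        posterior[h] /= total

def expected_utility(plan_utilities):
    """plan_utilities: utility of one plan under each hypothesis."""
    return sum(posterior[h] * plan_utilities[h] for h in posterior)

update(observed_anomaly=True)       # an anomalous observation shifts weight
print(posterior)
# Prefer plans that do acceptably under *both* hypotheses:
print(expected_utility({'classical_structure': 1.0,
                        'nonclassical_structure': 0.8}))
```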
If I may interject (assuming it isn’t too early to start proposing solutions), it does turn out to be the case that computability logic is a superset of linear logic (which encodes resource-boundedness and avoids the paradoxes of material entailment), intuitionistic logic (which encodes proof/justification), and classical logic (which encodes truth). To accept less is to sacrifice one or more of the above attributes in terms of expressiveness.
Thanks; I certainly didn’t intend to say that all logics are equivalent so it doesn’t matter… edited to clarify...