The question to my mind is why is mathematical logic the right domain? Why not game theory, or solid state physics, or neural networks? I don’t see any reason to privilege mathematical logic – a priori it seems like a non sequitur to me. The only reason that I give some weight to the possibility that it’s relevant is that other people believe that it is.
AIs do Reasoning. If you can’t see the relevance of logic to reasoning, I can’t help.
Further, do you have some other domain of inquiry that has higher expected return? I’ve seen a lot of stated meta-level skepticism, but no strong arguments either on the meta level (why should MIRI be as uncertain as you) or the object level (are there arguments against studying logic, or arguments for doing something else).
Now I imagine it seems to you that MIRI is privileging the mathematical logic hypothesis, but as above, it looks to me rather obviously relevant such that it would take some evidence against it to put me in your epistemic position.
(Though strictly speaking, given strong enough evidence against MIRI’s strategy, I would go more towards “I don’t know what’s going on here, everything is confusing” rather than your (I assume) “There’s no good reason one way or the other.”)
You seem to be taking a position of normative ignorance (I don’t know and neither can you), in the face of what looks like plenty of information. I would expect rational updating on such information to yield a strong position one way or the other, or epistemic panic, not calm (normative!) ignorance.
Note that to take a position of normative uncertainty you have to believe not only that you have seen no evidence, but that there is no evidence. I’m seeing normative uncertainty and no strong reason to occupy a position of normative uncertainty, so I’m confused.
AIs do Reasoning. If you can’t see the relevance of logic to reasoning, I can’t help.
Humans do reasoning without mathematical logic. I don’t know why anyone would think that you need mathematical logic to do reasoning.
Further, do you have some other domain of inquiry that has higher expected return? I’ve seen a lot of stated meta-level skepticism, but no strong arguments either on the meta level (why should MIRI be as uncertain as you) or the object level (are there arguments against studying logic, or arguments for doing something else).
See each part of my comment here as well as my response to Kawoobma here.
You seem to be taking a position of normative ignorance (I don’t know and neither can you), in the face of what looks like plenty of information. I would expect rational updating on such information to yield a strong position one way or the other, or epistemic panic, not calm (normative!) ignorance.
I want to hedge because I find some of the people involved in MIRI’s Friendly AI research to be impressive, but putting that aside, I think that the likelihood of the research being useful for AI safety is vanishingly small, at the level of the probability of a random conjunctive statement of similar length being true.
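(A rough way to make that comparison concrete, assuming for illustration that the conjuncts are treated as roughly independent: the probability of a conjunction falls off multiplicatively with its length,
\[ P(A_1 \wedge A_2 \wedge \cdots \wedge A_n) \;=\; \prod_{i=1}^{n} P(A_i) \;\le\; \min_i P(A_i), \]
so ten independent conjuncts each held at probability 0.5 already give a joint probability of roughly \(0.5^{10} \approx 10^{-3}\).)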
Humans do reasoning without mathematical logic. I don’t know why anyone would think that you need mathematical logic to do reasoning.
Right. Humans do reasoning, but don’t really understand reasoning. Since ancient times, when people have tried to understand something, they have tried to formalize it; hence the study of logic.
If we want to build something that can reason we have to understand reasoning or we basically won’t know what we are getting. We can’t just say “humans reason based on some ad-hoc kludgy nonformal system” and then magically extract an AI design from that. We need to build something we can understand or it won’t work, and right now, understanding reasoning in the abstract means logic and its extensions.
It’s a double need, though, because not only do we need to understand reasoning, self-improvement means the created thing needs to understand reasoning. Right now we don’t have a formal theory of reasoning that can handle understanding its own reasoning without losing power. So that’s what we need to solve. There is no viable alternate path.
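(For concreteness, a minimal sketch of the formal obstacle being alluded to here, in standard provability-logic notation rather than anything specific to MIRI’s work: Löb’s theorem says that for a sufficiently strong, consistent theory \(T\) with provability predicate \(\Box\),
\[ \text{if } T \vdash \Box P \rightarrow P \text{ then } T \vdash P, \qquad \text{equivalently} \qquad T \vdash \Box(\Box P \rightarrow P) \rightarrow \Box P. \]
So \(T\) cannot endorse the blanket self-trust schema \(\Box P \rightarrow P\) for arbitrary \(P\) without proving everything; a reasoner that tries to fully trust its own proofs must retreat to a strictly weaker standard of proof, which is one precise sense in which a theory cannot understand its own reasoning without losing power.)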
If we want to build something that can reason we have to understand reasoning or we basically won’t know what we are getting. We can’t just say “humans reason based on some ad-hoc kludgy nonformal system” and then magically extract an AI design from that. We need to build something we can understand or it won’t work, and right now, understanding reasoning in the abstract means logic and its extensions.
Note that this is different from what you were saying before, and that commenting along the lines of “AIs do Reasoning. If you can’t see the relevance of logic to reasoning, I can’t help” without further explanation doesn’t adhere to the principle of charity.
I’m very familiar with the argument that you’re making, and have discussed it with dozens of people. The reason why I didn’t respond to the argument before you made it is because I wanted to isolate our core point(s) of disagreement, rather than making presumptions. The same holds for my points below.
If we want to build something that can reason we have to understand reasoning or we basically won’t know what we are getting.
This argument has the form “If we want to build something that does X, we have to understand X, or we won’t know what we’re getting.” But this isn’t true in full generality. For example, we can build a window shade without knowing how the window shade blocks light, and still know that we’ll be getting something that blocks light. Why do you think that AI will be different?
We can’t just say “humans reason based on some ad-hoc kludgy nonformal system” and then magically extract an AI design from that.
Why do you think that it’s at all viable to create an AI based on a formal system? (For the moment putting aside safety considerations.)
As to the rest of your comment, returning to my “Chinese economy” remarks: the Chinese economy is a recursively self-improving system with the “goal” of maximizing GDP. It could be that there’s goal drift, and that the Chinese economy starts optimizing for something random. But I think that the Chinese economy does a pretty good job of keeping this “goal” intact, and that it’s been doing a better and better job over time. Why do you think that it’s harder to ensure that an AI keeps its goal intact than it is to ensure that the Chinese economy keeps its “goal” intact?
AIs have to come to conclusions about the state of the world, where “world” also includes their own being. Model theory is the field that deals with such things formally.
Why not game theory, or solid state physics, or neural networks?
These could be relevant, but it seems to me that the “mind of an AI” is an emergent phenomenon of the underlying solid state physics, where “emergent” here means “technically explained by, but intractable to study as such.” Game theory and model theory are intrinsically linked at the hip, and no comment on neural networks.
AIs have to come to conclusions about the state of the world, where “world” also includes their own being. Model theory is the field that deals with such things formally.
But the most intelligent beings that we know of are humans, and they don’t use mathematical logic.
But the most intelligent beings that we know of are humans, and they don’t use mathematical logic.
Did humans have another choice in inventing the integers? (No. The theory of integers has only one model, up to isomorphism and cardinality.) In general, the ontology a mind creates is still under the aegis of mathematical logic, even if that mind didn’t use mathematical logic to invent it.
Sure, but that’s only one perspective. You can say that it’s under the aegis of particle physics, or chemistry, or neurobiology, or evolutionary psychology, or other things that I’m not thinking of. Why single out mathematical logic?
You can say that it’s under the aegis of particle physics, or chemistry, or neurobiology,
Going back to humans, getting an explanation of minds out of any of these areas requires computational resources that don’t currently exist. (In the case of particle physics, one might rather say “cannot exist.”)
Why single out mathematical logic?
Because we can prove theorems that will apply to whatever ontology AIs end up dreaming up. Unreasonable effectiveness of mathematics, and all that. But now I’m just repeating myself.
Because we can prove theorems that will apply to whatever ontology AIs end up dreaming up. Unreasonable effectiveness of mathematics, and all that. But now I’m just repeating myself.
I’m puzzled by your remark. It sounds like a fully general argument. One could equally well say that one should use mathematical logic to build a successful marriage, or fly an airplane, or create a political speech. Would you say this? If not, why do you think that studying mathematical logic is the best way to approach AI safety in particular?
I’m puzzled by your remark. It sounds like a fully general argument.
No, a fully general argument is something like “well, that’s just one perspective.” Mathematical logic will not tell you anything about marriage, other than the fact that it is a relation of variable arity (being kind to the polyamorists for the moment).
One could equally well say that one should use mathematical logic to build a successful marriage, or fly an airplane, or create a political speech. Would you say this?
I have no idea why a reasonable person would say any of these things.
If not, why do you think that studying mathematical logic is the best way to approach AI safety in particular?
I’d call it the best currently believed approach with a chance of developing something actionable, and one that probably won’t require more computational power than a matryoshka brain. That’s because it’s the formal study of models and theories in general. Unless you’re willing to argue that AIs will have neither cognitive feature? That’s kind of rhetorical, though; I’m growing tired.
Given that the current Löb paper is non-constructive (invoking the axiom of choice) and hence is about as uncomputable as possible, I don’t understand why you think mathematical logic will help with computational concerns.
The paper on probabilistic reflection in logic is non-constructive, but that’s only sec. 4.3 of the Löb paper. Nothing non-constructive about T-n or TK.
I believe one of the goals of this particular avenue of research is to make this result constructive. Also, he was talking about the study of mathematical logic in general, not just this paper.
I have little patience for people who believe invoking the axiom of choice in a proof makes the resulting theorem useless.
That was rather rude. I certainly don’t claim that proofs involving choice are useless, merely that they don’t address the particular criterion of computational feasibility.
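(For what it’s worth, the textbook illustration of the gap between a non-constructive existence proof and anything computationally actionable, one that uses excluded middle rather than choice: there exist irrational \(a, b\) with \(a^b\) rational, because
\[ \text{either } \sqrt{2}^{\sqrt{2}} \in \mathbb{Q} \text{ (take } a = b = \sqrt{2}\text{), or } \sqrt{2}^{\sqrt{2}} \notin \mathbb{Q} \text{ and then } \bigl(\sqrt{2}^{\sqrt{2}}\bigr)^{\sqrt{2}} = \sqrt{2}^{\,2} = 2 \in \mathbb{Q}, \]
and the proof by itself does not tell you which case actually holds, hence yields no procedure.)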
I say something about this here.
Okay; why specifically isn’t mathematical logic the right domain?
EDIT: Or, to put it another way, there’s nothing in the linked comment about mathematical logic.
What do you mean by “something actionable”?