Because we can prove theorems that will apply to whatever ontology AIs end up dreaming up. Unreasonable effectiveness of mathematics, and all that. But now I’m just repeating myself.
I’m puzzled by your remark. It sounds like a fully general argument. One could equally well say that one should use mathematical logic to build a successful marriage, or fly an airplane, or create a political speech. Would you say this? If not, why do you think that studying mathematical logic is the best way to approach AI safety in particular?
I’m puzzled by your remark. It sounds like a fully general argument.
No, a fully general argument is something like “well, that’s just one perspective.” Mathematical logic will not tell you anything about marriage, other than the fact that it is a relation of variable arity (being kind to the polyamorists for the moment).
One could equally well say that one should use mathematical logic to build a successful marriage, or fly an airplane, or create a political speech. Would you say this?
I have no idea why a reasonable person would say any of these things.
If not, why do you think that studying mathematical logic is the best way to approach AI safety in particular?
I’d call it the best currently known approach with a chance of producing something actionable that doesn’t, in all likelihood, require more computational power than a matryoshka brain. That’s because it’s the formal study of models and theories in general. Unless you’re willing to argue that AIs will have neither cognitive feature? That’s kind of rhetorical, though—I’m growing tired.
Given that the current Löb paper is non-constructive (invoking the axiom of choice) and hence is about as uncomputable as possible, I don’t understand why you think mathematical logic will help with computational concerns.
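(For reference, the result under discussion: Löb's theorem says that for any sentence P of a theory T extending Peano Arithmetic, with □ denoting T's provability predicate,)

```latex
% Löb's theorem: if T proves "provability of P implies P",
% then T already proves P outright.
T \vdash \Box P \rightarrow P \quad\Longrightarrow\quad T \vdash P
```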
The paper on probabilistic reflection in logic is non-constructive, but that’s only sec. 4.3 of the Löb paper. Nothing non-constructive about T-n or TK.
I believe one of the goals of this particular avenue of research is to make this result constructive. Also, he was talking about the study of mathematical logic in general, not just this paper.
I have little patience for people who believe invoking the axiom of choice in a proof makes the resulting theorem useless.
That was rather rude. I certainly don’t claim that proofs involving choice are useless, merely that they don’t address the particular criterion of computational feasibility.
What do you mean by “something actionable”?