I think perhaps you underestimate the extent to which I really meant it when I said “just for fun”. This isn’t how I do reasoning when I’m thinking about Friendliness-like problems. When I’m doing that, I have the Gödel machine paper sitting in front of me and 15 Wikipedia articles on program semantics open, and I’m trying my hardest to be as precise as possible. That is one skillset. The skill I (poorly) attempted to demonstrate here is a different one: steel-manning another’s epistemic position in order to engage with it in meaningful ways, as opposed to assuming that the other’s thinking is fuzzy simply because their language is one you’re not used to. But I never use that style of syncretic “reasoning” when I’m actually, ya know, thinking. No worries!
But… I said I was used to it, and that I remember it being fuzzy!
Compartmentalization is wonderful.
I don’t mean to be insulting. I know you’re smart. I know you’re a good reasoner. I only worry that you might not be using your 1337 reasoning skillz everywhere, which can be extremely bad: wrong beliefs can take root in areas labeled “separate magisterium”, and I’ve been bitten by that myself.
(I’m not sure we understand each other, but...) Okay. But I mean, when Leibniz talks about theistic concepts, his reasoning is not very fuzzy. Insofar as smart theists use memes descended from Leibniz—which they do, along with memes descended from other very smart people—I need to be able to translate their concepts into concepts I can understand and apply my normal rationality skillz to.
I don’t think this is compartmentalization. Compartmentalization, as I understand it, is when you have two contradictory pieces of information about the world and keep them separate for whatever reason. I’m talking about two different skills. My actual beliefs stay roughly constant no matter what ontology/language I use to express them. Think of it like Solomonoff induction: the universal machine you choose changes things by at most a constant (the invariance theorem; sketched below). (Admittedly, for humans that constant can be the difference between seeing or not seeing a one-step implication, but such matters are tricky and would need their own post. Imagine if I were to try to learn category theory in Russian using a Russian-English dictionary.) And anyway, I don’t actually think in terms of theism except when I want to troll people, understand philosophers, or play around in others’ ontologies for kicks.
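For concreteness, here is a sketch of the invariance theorem I’m leaning on, in its standard form (the notation $K_U$, $K_V$, $c_{U,V}$ is the usual Kolmogorov-complexity notation, nothing specific to this thread): for any two universal prefix machines $U$ and $V$ there is a constant $c_{U,V}$, depending only on the two machines and not on the input, such that

$$K_U(x) \le K_V(x) + c_{U,V} \quad \text{for all strings } x.$$

Switching the reference machine therefore shifts description lengths by at most $c_{U,V}$ bits, so the induced Solomonoff priors agree up to a multiplicative factor of $2^{c_{U,V}}$. That constant is what I’m claiming a change of ontology/language costs.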
I am not yet convinced that it isn’t misplaced, but I do thank you for your concern.