When Will talks about hell, or anything that sounds like a religious concept, you should suppose that in his mind it also has a computational-transhumanist meaning. I hear that in Catholicism, Hell is separation from God, and for Will, God might be something like the universal moral attractor for all post-singularity intelligences in the multiverse, so he may be saying (in the great-grandparent comment) that if you are insufficiently attentive to the question of right and wrong, your personal algorithm may never be re-instantiated in a world remade by friendly AI. To round out this guide for the perplexed, one should not think that Will is just employing a traditional language in order to express a very new concept; you need to entertain the idea that there really is significant referential overlap between what he’s talking about and what people like Aquinas were talking about—that all that medieval talk about essences, and essences of essences, and all this contemporary talk about programs, and equivalence classes of programs, might actually be referring to the same thing. One could also say something about how Will feels when he writes like this—I’d say it sometimes comes from an advanced state of whimsical despair at ever being understood—but the idea that his religiosity is a double reverse metaphor for computational eschatology is the important one. IMHO.
Thank you for the clarification, and my apologies to Will. I do have some questions, but writing a full post from the smartphone I am currently using would be tedious. I’ll wait until I get to a proper computer.
No need to apologize! If I were to get upset about being misunderstood after being purposefully rather cryptic, then I’d clearly be in the wrong. Maybe it would make some sense to apologize if you got angry at me for purposefully being cryptic, because perhaps it would be hasty to make such judgments without first trying harder to understand what sort of constraints I may be under; but I have no idea what sort of constraints you’re under, so I have no idea whether it would be egregiously bad or, alternatively, supererogatorily good for you to get angry at me for not writing so as to be understood or not trying harder to be understood. But my intuition says there’s no need to apologize.
I do apologize for not being able to escape the constraints that have led me to fail to reliably communicate and thus generate a lot of noise/friction.
in his mind it also has a computational-transhumanist meaning
And a cybernetic/economic/ecological/signal-processing meaning, an ethical meaning, sometimes a quantum information-theoretic meaning, et cetera. I would not be justified in drawing a conclusion about the validity of a concept based merely on a perceived correspondence between two models. That’d be barely any better than taking acausal simulation seriously simply because computational metaphysics and modal-realist-like ideas are somewhat intuitively attractive and superintelligences seem theoretically possible. One’s inferences should be based on significantly more solid foundations. I just don’t have a way to talk about equivalence classes of things while still being at all understood—if I did talk that way, then not even people like muflax could reliably understand me, and much of why I write here is to communicate with people like muflax, or angels.
Your talk of God involves a concept of “God” that applies to things that are in some major sense infinite, like an abstract telos for all superintelligences, and to things that are in some sense finite, like all particular superintelligences. Any such perceived crossing of that gap would have to be very carefully justified—e.g., I can’t think of any kind of argument that could prove that a human was the incarnation of the Word. Unlike, say, my model of the Catholics, who explicitly make such inferences on the basis of the theological virtues of faith and hope and not unaided reason as such, I don’t think you’d be so careless in your own thinking, but I want to signal that I am not nearly so careless in my own, and that you shouldn’t think I am so careless. I think there are decent metaphysical arguments that such interactions are possible in principle, but of course such arguments would have to be made explicit, and any particular mechanism (e.g. “simulation” of a human in a (finite? finite-but-perceived-to-be-infinite? infinite?) “heaven” by a finite god approximating an infinitely good telos) should not be assumed a priori to be possible. Only a moron would be so sloppy in his metaphysics and epistemology.
Not to say that you’re implying that I’m trying to convert atheists, but I’m not. I am not to be a shepherd, I am not to be a gravedigger; no longer will I speak unto the people, for the last time have I spoken to the dead.
Comments like this are better for creating atheists, as opposed to converting them.