When I told people about the plan in #1, though, it was because I wanted them to listen to me. I was back off the brink for some reason, and I wanted to talk about where I’d been. Somebody who tells you they’re suicidal isn’t asking you to talk them out of it; they’re asking you to listen.
Just wanted to say that I relate very strongly to this. When I was severely mentally ill and suicidal, I was afraid of reaching out to other people, precisely because doing so might mean I only wanted emotional support rather than being serious about killing myself. People who really wanted to end their lives, I reasoned, would avoid deliberately setting off alarm bells in others that might lead to interference. That I eventually chose to open up about my psychological condition at all (and thereby deviate from the “paradigmatic” rational suicidal person) gave me evidence that I didn’t want to kill myself, and it helped me come to terms with recovering. Sorry if this is rambling.
Of course it depends on the specific papers and the nature of the publications. “Publish more papers” seems like shorthand for “Demonstrate that you can defend your novel or controversial ideas rigorously enough that a great many experts outside the transhumanist movement will take them seriously.” It seems to me that doing this would change a lot of people’s behavior.
I don’t imagine it would have nearly as much of an effect on people who aren’t familiar with anime. But I would read that study in a heartbeat if it existed.
Genie AI?
One is the asymmetry, which is the better one, but it has weird assumptions about personhood—reasonable views either seem to suggest immediate suicide (if there is no continuity of self and future person-moments are thus brought into existence, you are harming future-you by living)
I’m not sure I remember his asymmetry argument relying on those assumptions. Maybe he needs them to justify not committing suicide, but I thought the badness of suicide wasn’t central to his thesis.
I’m reading Benatar’s Better Never To Have Been and I noticed that the actual arguments for categorical antinatalism aren’t as strong as I thought and seem to hinge on either a pessimistic view of technological progress (which might well be justified)
I don’t think this is true. Benatar’s position is that any being that ever suffers is harmed by being created. This is not something that technological progress is very likely to relieve. Or are you thinking of some sort of wireheading?
or confusions about identity and personhood.
That sounds like an interesting criticism.
I suppose one could draw from this a similar response to any Dutch book argument. Sure, if my “degree of belief” in a possible statement A is 2, I can be Dutch booked. But now that I’m licensed to disbelieve entailments (so long as I take myself to be ignorant that they’re entailments), perhaps I justifiably believe that I can’t be Dutch booked. So what rational constraints are there on any of my beliefs? Whatever argument you give me for a constraint C from premises P1, …, Pn, I can always potentially justifiably believe the conditional “If the premises P1, …, Pn are true, then C is correct” has low probability—even if the argument is purely deductive.
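To make the Dutch book in that first step concrete, here is a minimal worked example (my own illustration, using the standard betting interpretation on which a degree of belief p in A is the price you treat as fair for a $1 ticket on A):

```latex
% With degree of belief 2 in A, you regard $2 as a fair price for a ticket
% that pays $1 if A is true and $0 otherwise.  A bookie sells you that ticket:
\[
  \text{net gain} =
  \begin{cases}
    1 - 2 = -1 & \text{if } A \text{ is true}\\
    0 - 2 = -2 & \text{if } A \text{ is false}
  \end{cases}
\]
% You lose at least $1 in every state of the world: a Dutch book.
```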
Logical omniscience comes from probability “statics,” not conditionalization. When A is any propositional tautology, P(A) (note the lack of conditional) can be algebraically manipulated via the three Kolmogorov axioms to yield 1. Rejecting one of the axioms to avoid this result leaves you vulnerable to Dutch books. (Perhaps this is not so surprising, since reasoning about Dutch books assumes classical logic. I have no idea how one would handle Dutch book arguments if we relax this assumption.)
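Spelled out, the derivation is roughly this (a sketch, assuming the usual formulation on propositions plus the closure principle that logically equivalent propositions receive equal probability):

```latex
% Kolmogorov axioms, stated for propositions:
%   (K1)  P(A) >= 0
%   (K2)  P(\top) = 1                       (normalization)
%   (K3)  P(A \lor B) = P(A) + P(B)         when A and B are jointly inconsistent
% For any proposition B, since B and \neg B are inconsistent and
% B \lor \neg B is equivalent to \top:
\[
  P(B) + P(\neg B) \;=\; P(B \lor \neg B) \;=\; P(\top) \;=\; 1 .
\]
% Any propositional tautology A is logically equivalent to B \lor \neg B and
% so gets probability 1 -- and recognizing such equivalences is precisely
% what a logically non-omniscient agent can fail to do.
```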
Could you explain in more detail why Bayesian epistemology can’t be built without such an assumption?
Well, could you explain how to build it that way? Bayesian epistemology begins by interpreting (correct) degrees of belief as probabilities satisfying the Kolmogorov axioms, which implies logical omniscience. If we don’t assume our degrees of belief ought to satisfy the Kolmogorov axioms (or assume they satisfy some other axioms that entail Kolmogorov’s), then we are no longer doing Bayesian epistemology.
One of the reasons given against peer review is that it takes a long time for articles to be published after acceptance. Is it not possible to make them available on your own website before they appear in the journal? (I really have barely any idea how these things work; but I know that in some fields you can do this.)
You mentioned recently that SIAI is pushing toward publishing an “Open Problems in FAI” document. How much impact do you expect this document to have? Do you intend to keep track? If so, and if it’s less impactful than expected, what lesson(s) might you draw from this?
I’m interested in what you have to say, and I’m sympathetic (I think), but I was hoping you could restate this in somewhat clearer terms. Several of your sentences are rather difficult to parse, like “And to be committed to false statements as being not-false would be absurd, such that it would alone be proper to aver that one has been defeated in having previously been committed to the truth of T despite that that committment was fundamentally invalid.”
Sorry, I’m not sure I understand what you mean. Could you elaborate?
I think a lot of the replies here suggesting that Bayesian epistemology easily dissolves the puzzles are mistaken. In particular, the Bayesian-equivalent of (1) is the problem of logical omniscience. Traditional Bayesian epistemology assumes that reasoners are logically omniscient at least with respect to propositional logic. But (1), suitably understood, provides a plausible scenario where logical omniscience fails.
I do agree that the correct understanding of the puzzles is going to come from formal epistemology, but at present there are no agreed-upon solutions that handle all instances of the puzzles.
Scroll to 4:40. I like this one argument of his: if we have finitely many neurons and thus cannot construct an infinite set in our “map,” what makes you think you can make it correspond to a (hypothetical) infinity in the territory?
I don’t really see what this argument comes to. The map-territory metaphor is a metaphor; neural structures do not have to literally resemble the structures they have beliefs about. In fact, if they did, then the objection would work for any finite structure that had more members than there are synapses (or whatever) in the brain.
In that case, I’d say that your response involves special pleading. SI priors are uncomputable. If the fine structure constant is uncomputable, then any uncomputable prior that assigns probability 1 to the constant having its actual value will beat SI in the long run. What is illicit about the latter sort of uncomputable prior that doesn’t apply to SI priors? Or am I simply confused somehow? (I’m certainly no expert on this subject.)
You will find that even if you’re endowed with the privileged knowledge that the fine structure constant is a halting oracle, that knowledge provably can’t help you win a prediction game against SI
We can frequently compute the first several terms in a non-computable sequence, so this statement seems false.
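For instance (my own example, not one raised in the thread): the Busy Beaver step function S(n) is uncomputable, yet its first few values have been proven by exhaustive analysis of the small machines, so a predictor can simply memorize that prefix:

```python
# Sketch: the Busy Beaver "maximum steps" function S(n) for 2-symbol Turing
# machines is uncomputable, but several of its initial values are known exactly.
KNOWN_S_PREFIX = {1: 1, 2: 6, 3: 21, 4: 107}  # a few proven values of S(n)


def predict_busy_beaver_steps(n: int):
    """Return S(n) if it lies in the memorized prefix, else None."""
    return KNOWN_S_PREFIX.get(n)


if __name__ == "__main__":
    for n in range(1, 6):
        print(n, predict_busy_beaver_steps(n))
```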
I’m having trouble seeing your point in the context of the rest of the discussion. Tyrrell claimed that the pre-theoretic notion of an infinite set—more charitably, perhaps, the notion of an infinite cardinality—is captured by Dedekind’s formal definition. Here, “capture” presumably means something like “behaves sufficiently similarly so as to preserve the most basic intuitive properties of.” Your response appears to be that there is a good metaphorical analysis of infinitude that accounts for this pre-theoretic usage as well as some others simultaneously. And by “accounts for X,” I take it you mean something like “acts as a cognitive equivalent, i.e., is the actual subject of mental computation when we think about X.” What is this supposed to show? Does anyone really maintain that human brains are actually processing terms like “bijection” when they think intuitively about infinity?
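For reference, the formal definition in question (my paraphrase of the standard statement):

```latex
% Dedekind: a set X is infinite iff there is a bijection between X and a
% proper subset of X (equivalently, an injection X -> X that is not onto).
% The natural numbers are the standard example:
\[
  f : \mathbb{N} \to \mathbb{N}, \qquad f(n) = n + 1,
\]
% a bijection from \mathbb{N} onto \mathbb{N} \setminus \{0\}, which is a
% proper subset of \mathbb{N}.
```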
Something has gone horribly wrong here.