Sorry, my mistake!
No, it was supposed to be a response to its actual parent. I assumed that you were (somewhat but not entirely) humorously suggesting that the problem can somehow be solved by some appeal to natural selection or the like.
If I’m reading correctly, the argument you appear to present in your paper is:
1. We (Thomas Pogge) want to end poverty.
2. An AI could end poverty.
3. Therefore, we should build an AI.
This isn’t a strong argument. Probably Pogge thinks that ending poverty is perfectly feasible without building AI, so if you want to change his mind, you need to show that an AI solution can likely be implemented faster than a non-AI one in addition to being sufficiently safe.
It seems like your paper just sets out to establish that there might be some strong arguments for Singularity activism as a response to global poverty somewhere in the vicinity without trying very hard to spell them out.
I don’t see the relevance. Goodman’s problem is about the general validity of inductive inference. Do you have a solution that doesn’t depend upon inductive inferences?
To be honest, I don’t have a clear sense of what he’s saying. However, from a snippet like this:
As long as the line doesn’t move, and we know what symbols we are using for what results of what random variables, observing many green emeralds is evidence that the next one will be grue, as long as it is before time T. After time T, every record of an observation of a green emerald is evidence against the next one being grue, in the same way it is evidence against the next emerald not being green.
it sounds like he’s trying to draw some conclusion from an assumption (“the line doesn’t/won’t move”) that ultimately rests on inductive support. Is that not the case? If so, how does that supposed support not fall victim to the new problem of induction?
It may be that the rule the emerald construction sites use to produce either a green or non-green emerald changes at time T, but there is no reason to believe that the rule will change if no change in the position of the line has ever been demonstrated before.
There’s your error! You think that the line is in the middle of the table through the entire experiment, but actually it’s in the riddle of the table, where “riddle” means “in the middle of the table before time T and on the right side of the table afterward.” All of our experience before time T has confirmed this.
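To make the structure of grue-like, time-indexed predicates concrete, here is a minimal illustrative sketch (the predicate definitions and the cutoff time T are arbitrary choices for illustration, not anything from the original discussion):

```python
from datetime import datetime

# Hypothetical future cutoff time T (any not-yet-reached time works).
T = datetime(2100, 1, 1)

def is_green(color: str) -> bool:
    """The ordinary predicate: green regardless of observation time."""
    return color == "green"

def is_grue(color: str, observed_at: datetime) -> bool:
    """'Grue': green if observed before T, blue if observed at or after T."""
    if observed_at < T:
        return color == "green"
    return color == "blue"

# Every green emerald observed before T satisfies BOTH predicates,
# so the same body of evidence "confirms" both hypotheses equally.
now = datetime(2024, 1, 1)
print(is_green("green"))      # True
print(is_grue("green", now))  # True
```

The point of the sketch is just that, before T, no observation can distinguish “all emeralds are green” from “all emeralds are grue”; which predicate the evidence supports depends on which vocabulary you take as projectible.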
There are also many things someone might have in mind when they refer to a ‘technological Singularity’ (Sandberg 2010). Below, we’ll explain just three of them (Yudkowsky 2007):
Are there really more than those? Significantly more? I personally don’t think I’ve come across any others, and your wording makes it sound like you have several readily at hand.
I didn’t mean to suggest there’s anything wrong with identifying as part of an online community. I just don’t think we should identify commitment to rationality with outward displays of membership in any given community. It seems to me like committing to rationality is the sort of thing you do while reading a book, not the sort of thing you do while walking around in a mall.
Even so, though, in my opinion people should in general be careful about trying to turn this place into too much of a subculture. There are a lot of people it would alienate to varying degrees, myself included.
You probably don’t want to get rationality mixed up in tribal identification, which is the ostensible purpose of such a symbol.
Hmm, on second thought, I’m not sure this is a big deal. Even if the vast majority of civilization-enabling utility functions are xenophobic, we can still play PD with those that aren’t. And if Everett is correct, there are presumably still lots of altruistic, isolated civilizations.
I’m not sure if this qualifies as a mistake per se, but it seems very implausible to me that the only advanced civilization-enabling utility functions are altruistic towards aliens. Is there evidence in favor of that hypothesis?
Where are you hoping to have this published?
I anticipate that preference for the current gender system will be approximately the same across the sexes (and also fairly widespread).
I’d imagine it’s virtually universal. Transhumanists are a tiny population, and I can’t think of anyone outside that population who would even consider revising such a basic facet of human life. Those few who’ve been posed the question of “Should we add or remove a gender?” in earnest would assuredly respond with an incredulous stare. Maybe some feminist academics have discussed it, though.
Philosophy courses did, seminar-style analytic philosophy classes in particular. (I wouldn’t say that history of philosophy classes altered the way I thought, though I can totally see how Hume might be shocking to someone very new to the subject.) Aside from the actual content I learned, I got the following out of them:
The mental habit of condensing complicated lines of reasoning into minimal, fairly linear syllogisms, so that all of the logical dependencies and likely points of failure among the premises/inferences become much more obvious.
Relatedly, an eagerness to search for ambiguities in arguments and to enumerate all their possible disambiguations, with an eye for the most charitable/defensible contenders.
An appreciation for fine distinctions underlying seemingly straightforward concepts. (E.g., there are several related but distinct concepts that map onto the notion of a word or sentence’s meaning.) These often have unexpected implications and/or vitiate seemingly plausible inferences.
Not being allowed, on pain of embarrassment or a bad grade, to get away with BSing or relying on unacknowledged, controversial assumptions. You have to be up-front about precisely what you mean and what’s at stake.
Realization of the extreme rarity of knock-down arguments for any view, and the subsequent adjustment to the fact that assessing pretty much every philosophical question involves a robust trade-off of good and bad consequences. Sometimes every view on the table seems to imply something crazy, and you have to learn to accept that. And to accept that sometimes reality is crazy. (Yes, I know that the map is not the territory, etc.)
If arguments are soldiers, then at least learning to let some of your soldiers die—and sometimes even putting them out of their misery yourself! It’s very common in philosophical writing to go through all of the failed arguments for your view before moving on to the ones you find more promising. Even then, it’s expected that you highlight their most vulnerable spots.
Epistemic humility. I’m much slower to draw hasty conclusions with high certainty on a given topic before I find out the best of what all sides have to offer. I definitely still form fast and intuitive judgments before investigating disputed subjects deeply, but I don’t pretend that they’re likely to be the last word, or even novel contributions that haven’t already received high-level criticism.
A much richer sense of the space of philosophical views. But then again, probably something analogous holds for most other disciplines (biologists are presumably better-tuned to the space of biological hypotheses). Still, though, philosophical-view-space intersects an unusually large number of things.
I don’t know if you’re in need of any of these things, or if you’re likely to acquire them through a small handful of philosophy classes. Even if you are, whether or not you’d succeed greatly depends on the quality of your teachers and classmates.
A key sentence in your conclusion is this:
Anthropic decision theory is a new way of dealing with anthropic problems, focused exclusively on finding the correct decision to make, rather than the correct probabilities to assign.
You then describe ADT as solving the Sleeping Beauty problem. This may be the case if we re-formulate the latter as a decision problem, as you of course do in your paper. But Sleeping Beauty isn’t a decision problem, so I’m not sure if you take yourself to be actually solving it, or if you just think it’s unimportant once we solve the decision problem.
Because I don’t think that most deaf people love their deafness
I honestly can’t cite any statistics, but there are many, many, many congenitally deaf people who view their condition as a fundamental part of who they are and don’t want it to change. Maybe that attitude is pathological or something, but there it is.
I don’t believe that the existence of deafness or of blindness or of leprosy or of AIDS is a net good for humanity.
I think the existence of deaf people who want to be deaf is arguably a net good for humanity. Deaf culture is as real as black culture. Few people with AIDS or leprosy are glad they have it, however.
I guess that depends on their motivations for this choice: whether it’s for the perceived benefit of the child, for their own perceived benefit, or for that of their culture. If they perceived blackness as an inherent disability on the level of deafness, that’d be wrong, yes.
They believe that a black child will face certain social difficulties that a white child wouldn’t, but that he’d nevertheless lead a happy, flourishing life and love his culture and skin color, and moreover that the added diversity would be a net good for humanity.
Can you explain why you believe that makes a moral difference?
“Us”? I’ve not created any black children, and most black people don’t have the capacity to create white children. And child-creation hasn’t been collectivized yet, it’s still an individual process.
I think you’re missing the point. Please substitute the word “you” with whoever would be faced with such a situation (a black couple deciding whether or not to conceive a black baby, a deaf couple deciding whether or not to conceive a deaf baby, etc.).
What does worthiness have to do with anything? This is about allowing children to hear, not about who is “worth” what. About quality of life, not about justice.
I am using “worthiness” to refer to an informal measure of how much we should actualize certain lives relative to others, which includes considerations like quality of life. Maybe “choiceworthiness” would’ve been a better word.
The problem is to justify any inductively-obtained statement vulnerable to a grue-like variant. “X will remain in the same place” is one such statement. (Namely: any evidence that X will remain in a given place is prima facie evidence that it will remain in the same place′, where place′ refers to its current location before T and some other location afterward.) Grue is just an example.