I liked “Diaspora” more.
lukstafi
Let me get this straight. You want to promote the short-circuiting of the mental circuit of promotion?
If God created the universe, then that’s some evidence that He knows a lot. Not overwhelming evidence, since some models of creation might not require the creator to know much.
Set up automatic filters.
As a function of how long the universe will exist? ETA: a short period of time might be significantly located.
The absurd claim is “there is nothing you ought to do or ought to not do”. The claim “life is tough” is not absurd. ETA: existentialism in the absurdist flavor (as opposed to for example the Christian flavor) is a form of value anti-realism which is not nihilism. It denies that there are values that could guide choices, but puts intrinsic value into making choices.
I would still be curious how much I can get out of life in billions of years.
I do not strongly believe the claim, just lay it out for discussion. I do not claim that experiences do not supervene on computations: they have observable, long-term behavioral effects which follow from the computable laws of physics. I just claim that in practice, not all processes in a brain will ever be reproduced in WBEs, due to computational resource constraints and lack of relevance to rationality and to the range of reported experiences of the subjects. Experiences can be different yet have roughly the same heterophenomenology (with behavior diverging only statistically or over the long term).
Isn’t it sufficient for computationalism that WBEs are conscious and that experience would be identical in the limit of behavioral identity? My intent with the claim is to weaken computationalism—accommodate some aspects of identity theory—but not to directly deny it.
The truth of the claim, or the degree of difference? The claim is that identity obtains in the limit, i.e. in any practical scenario there wouldn’t be identity between experiences of a biological brain and WBE, only similarity. OTOH identity between WBEs can obviously be obtained.
The relevant notion of consciousness we are concerned with is technically called phenomenal experience. Whole Brain Emulations will necessarily leave out some of the physical details, which means the brain processes will not unfold in exactly the same manner as in biological brains. Therefore a WBE will have different consciousness (i.e. qualitatively different experiences), although very similar to the corresponding human consciousness. I expect we will learn more about consciousness to address the broader and more interesting issue of what kinds and degrees of consciousness are possible.
I’d like to add that if the curriculum has a distinction between “probability” and “statistics”, it is taught in the “probability” class. Much later, the statistics class has “frequentist” part and “bayesian” part.
The inflationary multiverse is essentially infinite. But if you take a slice through (a part of) the multiverse, there are far more young universes than old ones. The proportion of universes of a given age falls off exponentially with age (as in a memoryless distribution). This resolves the doomsday paradox (because our universe is very young relative to its lifespan). http://youtu.be/qbwcrEfQDHU?t=32m10s
Another argument to similar effect would be to consider a measure over possible indices. Indices pointing to old times would be less probable than indices pointing to young times, since they need more bits to encode.
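The bits-to-encode argument can be illustrated with a toy weighting (my own sketch, not from the comment): give each time index a weight of 2 to the minus its binary length, so indices that take more bits to write down get exponentially less measure. (This is not a normalized prefix-free code, just an illustration of the direction of the effect.)

```python
def index_weight(t: int) -> float:
    """Toy measure: weight an index by 2^-(bits needed to write t in binary)."""
    return 2.0 ** (-t.bit_length())

# Total weight of a block of "young" indices vs an equally sized block of
# much "older" indices: the young block dominates by orders of magnitude.
young = sum(index_weight(t) for t in range(1, 1000))
old = sum(index_weight(t) for t in range(10**6, 10**6 + 999))
```

Here `young` comes out around 5 while `old` is under a thousandth of that, matching the claim that a measure over indices concentrates on early times.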
Our universe might be very old on this picture (relative to the measure), so the conclusion regarding the Fermi paradox is to update towards the “great filter in the past” hypothesis. (It is more probable that one is the first observer-philosopher to entertain these considerations in one’s corner of the universe.)
I’m glad to see Mark Waser cited and discussed, I think he was omitted in a former draft but I might misremember. ETA: I misremembered, I’ve confused it with http://friendly-ai.com/faq.html which has an explicitly narrower focus.
We should continue growing so that we join the superintelligentsia.
Although I wouldn’t say this, I don’t see how my comment contradicts this.
Let’s take “the sexual objectification of women in some advertisement” as an example. Do you mean that sexual objectification takes place when the actress feels bad about playing in an erotic context, and agreed to it only because of the commercial incentive, or something similar? ETA: I guess objectification generally means not treating someone as a person. On this explication, objectification in (working on) a film (an advertisement is a short film) would be when the director does not collaborate with the actors, but rather is authoritarian in demanding that the actors fit his vision. ETA2: and objectification in the content of a film would be depicting an act of someone not treating another as a person; in the case of “sexual objectification”, depicting sexual violence.
I see it this way. It is “objectification” when it’s used to attract attention. It’s “for the purpose of appreciation” when it’s used to enrich emotional reaction (usually of the aesthetic evaluation, but sometimes of the moral evaluation). So it is hard to say just by the content, but if the content is both erotic and boring it’s objectification.
You might be interested in reading TDT chapter 5 “Is Decision-Dependency Fair” if you haven’t already.
For the Nazis’ hatred of Jews to be rational, it is not necessary that there be reasons for hating Jews, only that the reasons against hating Jews do not outweigh the reasons for. But their reasons for hating Jews are either self-contradictory or, when properly worked out, in fact support not hating Jews.