Habit. It helps to get enough sleep.
Don’t worry, it will have been available in 2017 one of these days.
So, on one level, my response to this is similar to the one I gave (a few years ago) [http://lesswrong.com/lw/qx/timeless_identity/9trc]… I agree that there’s a personal relationship with BtVS, just like there’s a personal relationship with my husband, that we’d want to preserve if we wanted to perfectly preserve me.
I was merely arguing that the bitlength of that personal information is much less than the actual information content of my brain, and that there’s a great deal of compression leverage to be gained by taking the shared memories of BtVS out of both of our heads (and the heads of the other thousands of viewers), replacing them with pointers to a common library representation of the show, and having your personal relationship refer to the common library representation rather than your private copy.
The personal relationship remains local and private, but it takes up way less space than your mind currently does.
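By way of loose illustration of the compression move I mean, here’s a minimal Python sketch (all the names here are made up for the example): many viewers each hold what behaves like a private copy of the show, and deduplication replaces those copies with references to a single shared library object, while each viewer’s relationship to the show stays local.

    # Illustrative only: N viewers pointing at one shared representation
    # of the show instead of each storing a private copy.
    shared_library = {"BtVS": "canonical episode data"}  # stored exactly once

    class Viewer:
        def __init__(self, name, feelings):
            self.name = name
            self.feelings = feelings            # stays local and private
            self.show = shared_library["BtVS"]  # a pointer, not a copy

    viewers = [Viewer(f"viewer{i}", f"private relationship #{i}") for i in range(1000)]
    # Every .show attribute references the same object: no per-viewer duplication.
    assert all(v.show is viewers[0].show for v in viewers)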
That said… coming back to this conversation after three years, I’m finding I just care less and less about preserving whatever sense of self depends on these sorts of idiosyncratic judgments.
I mean, when you try to recall a BtVS episode, your memory is imperfect… if you watch it again, you’ll uncover all sorts of information you either forgot or remembered wrong. If I offered to give you perfect eidetic recall of BtVS—no distortion of your current facts about the goodness of it, except insofar as those facts turn out to be incompatible with an actual perception (e.g., you’d have changed your mind if you watched it again on TV, too)—would you take it?
I would. I mean, ultimately, what does it matter if I replace my current vague memory of the soap opera Spike was obsessively watching with a more specific memory of its name and whatever else we learned about it? Yes, that vague memory is part of my unique identity, I guess, in that nobody else has quite exactly that vague memory… but so what? That’s not enough to make it worth preserving.
And for all I know, maybe you agree with me… maybe you don’t want to preserve your private “facts” about what kind of tie Giles was wearing when Angel tortured him, etc., but you draw the line at losing your private “facts” about how good the show was. Which is fine, you care about what you care about.
But if you told me right now that I’m actually an upload with reconstructed memories, and that there was a glitch such that my current “facts” about BtVS being a good show for its time are mis-reconstructed, and Dave before he died thought it was mediocre… well, so what?
I mean, before my stroke, I really disliked peppers. After my stroke, peppers tasted pretty good. This was startling, but it posed no sort of challenge to my sense of self.
Apparently (Me + likes peppers) ~= (Me + dislikes peppers) as far as I’m concerned.
I suspect there’s a million other things like that.
“So long as your preferences are coherent, stable, and self-consistent then you should be fine.”
Yes, absolutely.
And yes, the fact that my preferences are not coherent, stable, and self-consistent is probably the sort of thing I was concerned about… though it was years ago.
You mean that it didn’t happen here or in the global society?
I mean that it’s unlikely that “the site [would] end up with a similar ‘rational’ political consensus if political discussion went through”.
Discussions about religion seem to me to be equally unproductive in general.
In the global society? I agree.
I can imagine that if the site endorsed a political ideology, its readers might become biased toward it (even if just by selection of readers).
Sure, that’s possible.
But there is a possibility that that happened with the religion issue.
Sure, that’s possible.
Also, let me cut to the chase a little bit, here.
The subtext I’m picking up from our exchange is that you object to the site’s endorsement of atheism, but are reluctant to challenge it overtly for fear of social sanction (downvotes, critical comments, etc.). So instead of challenging it, you are raising the overt topic of the site’s unwillingness to endorse a specific political ideology, and taking opportunities as they arise to implicitly establish equivalences between religion and politics, with the intention of implicitly arguing that the site’s willingness to endorse a specific religious ideology (atheism) is inconsistent with that unwillingness.
Have I correctly understood your subtext?
Yup, agreed with all of this. (Well, I do think we have had discussions about which political ideology is correct, but I agree that we shy away from them and instead endorse discussions of specific political issues.)
Aren’t people on LessWrong quite good at solving their own problems?
Nah, not necessarily. Merely interested in better ways of doing so. (Among other things.)
Yeah, there’s a communally endorsed position on which religion(s) is/are correct (“none of them are correct”), but there is no similar communally endorsed position on which political ideology(ies) is/are correct.
There’s also no similar communally endorsed position on which brand of car is best, but there’s no ban on discussion of cars, because in our experience discussions of car brands, unlike discussions of political ideologies, tend to stay relatively civil and productive.
What do you think? Would the site end up with a similar “rational” political consensus if political discussion went through?
I find it extremely unlikely. It certainly hasn’t in the past.
This comment taken out of context kind of delighted me.
When you see the word “morals” used without further clarification, do you take it to mean something different from “values” or “terminal goals”?
Depends on context.
When I use it, it means something kind of like “what we want to happen.” More precisely, I treat moral principles as sort keys for determining the preference order of possible worlds. When I say that X is morally superior to Y, I mean that I prefer worlds with more X in them (all else being equal) to worlds with more Y in them.
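For what it’s worth, the “sort key” framing can be made quite literal. A toy Python sketch, with made-up worlds and a made-up measure of how much X each contains:

    # Toy illustration: a moral principle as a sort key over possible worlds.
    # 'amount_of_x' is a stand-in for however much X a world contains.
    worlds = [
        {"name": "w1", "amount_of_x": 2},
        {"name": "w2", "amount_of_x": 7},
        {"name": "w3", "amount_of_x": 5},
    ]

    # "X is morally good" = prefer worlds with more X, all else being equal.
    by_preference = sorted(worlds, key=lambda w: w["amount_of_x"], reverse=True)
    print([w["name"] for w in by_preference])  # ['w2', 'w3', 'w1']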
I know other people who, when they use it, mean something kind of like that, if not quite so crisply, and I understand them that way.
I know people who, when they use it, mean something more like “complying with the rules tagged ‘moral’ in the social structure I’m embedded in.” I know people who, when they use it, mean something more like “complying with the rules implicit in the nonsocial structure of the world.” In both cases, I try to understand by it what I expect them to mean.
For my part, it’s difficult for me to imagine a set of observations I could make that would provide sufficient evidence to justify belief in many of the kinds of statements that get tossed around in these sorts of discussions. I generally just assume Omega adjusts my priors directly.
The current open thread is here:
http://lesswrong.com/r/discussion/lw/nns/open_thread_may_30_june_5_2016/
A new one will be started soon.
Suppose Mary has enough information to predict her own behavior. Suppose she predicts she will do X. Could she not, upon deducing that fact, decide not to do X?
There are three possibilities worth disambiguating here.
1) Mary predicts that she will do X given some assumed set S1 of knowledge, memories, experiences, etc., AND S1 includes Mary’s knowledge of this prediction.
2) Mary predicts that she will do X given some assumed set S2 of knowledge, memories, experiences, etc., AND S2 does not include Mary’s knowledge of this prediction.
3) Mary predicts that she will do X independent of her knowledge, memories, experiences, etc.
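A toy sketch of why the distinction matters, assuming a deliberately contrarian Mary (the decision rule here is invented for illustration): in case 2 a self-consistent prediction exists, whereas in case 1 the prediction is an input to the very behavior it’s trying to predict, and a contrarian rule has no fixed point.

    def mary(knowledge):
        """Contrarian Mary: if she knows a prediction about herself,
        she does the opposite; otherwise she defaults to doing X."""
        predicted = knowledge.get("prediction")
        if predicted is None:
            return "X"
        return "not-X" if predicted == "X" else "X"

    # Case 2: the prediction is derived from S2, which excludes it.
    s2 = {}
    prediction = mary(s2)  # predict what Mary does given S2: 'X'
    actual = mary(s2)      # Mary acts on S2 alone, never seeing the prediction
    print(prediction == actual)  # True: a self-consistent prediction exists

    # Case 1: S1 includes the prediction itself. No prediction is a
    # fixed point of a contrarian decision rule.
    for p in ("X", "not-X"):
        print(p, "->", mary({"prediction": p}))  # each prediction yields the other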
Along some dimensions I consider salient, at least. PM me for spoilers if you want them. (It’s not a bad book, but not worth reading just for this if you wouldn’t otherwise.)
Have you ever read John Brunner’s “Stand on Zanzibar”? A conversation not unlike this is a key plot point.
I’m not exactly sure what you mean by “as random.”
It may well be that there are discernible patterns in a sequence of manually simulated coin flips that would allow us to distinguish such sequences from actual coin flips. The most plausible hypothetical examples I can come up with would result in a non-1:1 ratio… e.g., humans having a bias in favor of heads or tails.
Or, if each person is laying a coin down next to the previous coin, such that they are able to see the pattern thus far, we might find any number of pattern-level biases… e.g., if told to simulate randomness, humans might be less than 50% likely to select heads if they see a series of heads-up coins, whereas if not told to do so, they might be more than 50% likely.
It’s kind of an interesting question, actually. I know there’s been some work on detecting faked test scores by looking for artificial-pattern markers in the distribution of numbers, but I don’t know if anyone’s done equivalent things for coin flips.
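To make the “pattern-level biases” idea concrete, here’s a minimal sketch of one possible detector, assuming (as hypothesized above) that people simulating randomness alternate too often and avoid long runs; the sequence and the comparison are invented for the example:

    import random

    def longest_run(flips):
        """Length of the longest run of identical outcomes in a flip sequence."""
        if not flips:
            return 0
        best = run = 1
        for prev, cur in zip(flips, flips[1:]):
            run = run + 1 if cur == prev else 1
            best = max(best, run)
        return best

    def mean_longest_run(n, trials=10000):
        """Monte Carlo estimate of the mean longest run in n fair coin flips."""
        total = sum(longest_run([random.choice("HT") for _ in range(n)])
                    for _ in range(trials))
        return total / trials

    # For ~50 fair flips the mean longest run is roughly 6; a sequence whose
    # longest run sits far below that is a candidate for "human-simulated."
    suspect = "HTHTHHTHTTHTHHTTHTHTHHTHTTHHTHTHTTHHTHTHHTTHTHTHTH"
    print(longest_run(suspect), mean_longest_run(len(suspect)))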
“simply the university” ⇒ “simplify the universe”?
Hm. Let me try to restate that to make sure I follow you.
Consider three categories of environments: (Er) real environments, (Esa) simulated environments that closely resemble Er, aka “ancestral simulations”, and (Esw) simulated environments that don’t closely resemble Er, aka “weird simulations.”
The question is, is my current environment E in Er or not?
Bostrom’s argument as I understand it is that if post-human civilizations exist and create many Esa-type environments, then for most E, (E in Esa) and not (E in Er). Therefore, given that premise I should assume (E in Esa).
Your counterargument as I understand it is that if (E in Esw) then I can draw no sensible conclusions about Er or Esa, because the logic I use might not apply to those domains, so given that premise I should assume nothing.
Have I understood you?
I don’t think it is a sidetrack, actually… at least, not if we charitably assume your initial comment is on-point.
Let me break this down in order to be a little clearer here.
Lumifer asserted that omniscience and free will are incompatible, and you replied that as the author of a story you have the ability to state that a character will in the future make a free choice. “The same thing would apply,” you wrote, “to a situation where you are created free by an omnipotent being.”
I understand you to mean that just like the author of a story can state that (fictional) Peter has free will and simultaneously know Peter’s future actions, an omniscient being can know that (actual) Peter has free will and simultaneously know Peter’s future actions.
Now, consider the proposition A: the author of a story can state that incompatible things occur simultaneously.
If A is true, then the fact that the author can state these things has nothing to do with whether free will and omniscience are incompatible… the author can make those statements whether free will and omniscience are incompatible or not. Consequently, that the author can make those statements does not provide any evidence, one way or the other, as to whether free will and omniscience are incompatible.
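That’s just the standard evidential point, and it can be put in Bayesian terms. A minimal sketch (the numbers are arbitrary): if the observation is equally likely whether or not the hypothesis holds, the posterior equals the prior.

    def posterior(prior, p_obs_if_h, p_obs_if_not_h):
        """Bayes' rule for a binary hypothesis H, updated on an observation."""
        num = prior * p_obs_if_h
        return num / (num + (1 - prior) * p_obs_if_not_h)

    # H = "free will and omniscience are incompatible"; the observation is
    # "the author stated both in a story." If A is true, the author can say
    # it either way, so the likelihoods match and the prior doesn't move.
    print(posterior(0.3, 0.9, 0.9))  # ~0.3: no update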
In other words, if A is true, then your response to Lumifer has nothing whatsoever to do with Lumifer’s claim, and your response to Lumifer is entirely beside Lumifer’s point. Whereas if A is false, your response may be on-point, and we should charitably assume that it is.
So I’m asking you: is A true? That is: can an author simultaneously assert incompatible things in a story? I asked it in the form of concrete examples because I thought that would be clearer, but the abstract question works just as well.
Your response was to dismiss the question as a sidetrack, but I hope I have now clarified sufficiently what it is I’m trying to clarify.
Back when this was a big part of my professional life, my reply was “everything takes a month.”