The question is not whether you should care, but whether you do care. Rationality does not tell you what to value, or how much utility you are supposed to assign to which goals. Rationality just helps you recognize the facts and achieve your goals.
Your values are in the territory. You can’t just make shit up based on naive introspection, in any domain. Even simple things like verbal overshadowing are counter-intuitive. Treating your map of your values as your actual values leads to extreme overconfidence, often supported by self-righteousness. It is a very common failure mode everywhere.
What is the alternative?
I don’t understand.
One way to get around this is to claim that shouldness flows from God (around here the equivalent is CEV) and then argue about the nature of God’s will. I think this is a step up, even if it is often the result of various confusions. At least people aren’t as likely to treat their naive introspection or simplistic empiricism as gospel. Thinking of justification as flowing from timeless attractors rather than causal coincidences is a similar but less popular perspective. Obviously ‘right view’ would see the causal/teleological equivalence, but conceptual flavors matter for correct connotations.
I don’t understand half of what you are saying; I only know that at the end of the day I will do what I want, whatever that might be. Either I want what God or CEV wants, or I don’t.
Could you elaborate on this?
“Timeful/timeless” was what I meant; I confused my terminology. I’m confused because it seems to me that there are an infinite number of logical nodes you could put in your causal graphs that constrain the structure of those graphs (in a way that is of course causally explicable, but whose simplest explanation sans logical nodes may have suspiciously high Kolmogorov complexity in some cases), and that ‘cause’ ferns to grow fractally, say, and certainly ‘cause’ more interesting events. Arising or passing-away events particularly ‘caused’ by these timeless properties of the universe aren’t exactly teleological, in that they’re not necessarily determined by the future (or the past); but because those timeless properties create patterns that constrain possible futures, it’s as if the structures in the imaginable futures that are timelessly privileged as possible/probable are themselves causing their fruition by their nature. So in my fuzzy thinking there’s this conceptual soup of attractors, causality, teleology, timeless properties, and the like, and mish-mashing them is all too easy, since they’re different perspectives that highlight different aspects rather than compartmentalized belief structures. If I just stick to switching between timeful and timeless perspectives (without confusing myself with attractors), then I have my feet firmly on the ground.
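(As a minimal, purely illustrative sketch of the ‘timeless properties cause ferns to grow fractally’ image, not anything from the comment itself: the Barnsley fern is an iterated function system in which four fixed affine maps, chosen with fixed probabilities, pin down an intricate fern shape that any random sampling run converges on. The four maps are the standard published coefficients; the function names and the crude ASCII renderer are hypothetical choices for this sketch.)

```python
import random

# Barnsley fern: four affine maps (a, b, c, d, e, f), each picked with a
# fixed probability. Iterating x' = a*x + b*y + e, y' = c*x + d*y + f from
# any starting point traces out the fern. The few constants below are the
# whole "timeless" description of the structure.
MAPS = [
    # ((a, b, c, d, e, f), probability)
    ((0.00, 0.00, 0.00, 0.16, 0.00, 0.00), 0.01),
    ((0.85, 0.04, -0.04, 0.85, 0.00, 1.60), 0.85),
    ((0.20, -0.26, 0.23, 0.22, 0.00, 1.60), 0.07),
    ((-0.15, 0.28, 0.26, 0.24, 0.00, 0.44), 0.07),
]

def barnsley_fern(n_points=20000, seed=0):
    """Return n_points (x, y) samples of the fern attractor."""
    rng = random.Random(seed)
    coeffs = [m for m, _ in MAPS]
    weights = [p for _, p in MAPS]
    x, y = 0.0, 0.0
    points = []
    for _ in range(n_points):
        a, b, c, d, e, f = rng.choices(coeffs, weights=weights)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points

def ascii_render(points, width=60, height=30):
    """Crude text plot, just to show that structure emerges from four rules."""
    grid = [[" "] * width for _ in range(height)]
    for x, y in points:
        # The fern lives roughly in x from -2.2 to 2.7 and y from 0 to 10.
        col = int((x + 2.2) / 4.9 * (width - 1))
        row = int((1 - y / 10.0) * (height - 1))
        if 0 <= row < height and 0 <= col < width:
            grid[row][col] = "*"
    return "\n".join("".join(row) for row in grid)

if __name__ == "__main__":
    print(ascii_render(barnsley_fern()))
```

The point of the toy is only the Kolmogorov-complexity contrast: a few lines of constants versus the richness of what they ‘cause’.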
Anyway, to (poorly) elaborate on what I was originally talking about: if might makes right (or to put it approximately, morality flows forward from the past, like naive causal validity semantics in all their arbitrariness), but right enables might (very roughly, morality flows backward from the future, logical truths constrain CDT-like optimizers and at some level of organization the TDT-like optimizers win out because cooperation just wins; an “I” that is big like a human has fewer competitors and greater rewards than an “I” that is small like a paramecium; (insert something about ant colonies?); (insert something about thinking of morality as a Pareto frontier itself moving over time (moving along a currently-hidden dimension in a way that can’t be induced from seeing trends in past movements along then-hidden dimensions, say, though that’s probably a terrible abstraction), and discount rates of cooperative versus non-cooperative deal-making agents up through the levels of organization and over time, with hyperbolic discounters willingly yielding to exponential discounters and so on over time)), then seeing only one and not the other is a kind of blindness.
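(Again purely as an illustrative aside on the discounting clause, with toy numbers of my own rather than anything from the comment: an exponential discounter values a reward r at delay D as r·δ^D, a hyperbolic discounter as r/(1 + k·D). With the parameters below, the hyperbolic agent flips from preferring the larger, later reward to grabbing the smaller, sooner one as it comes due, while the exponential agent stays consistent; that preference reversal is one standard reading of why a hyperbolic discounter might willingly bind itself to, i.e. ‘yield to’, an exponential rule.)

```python
def exponential_value(reward, delay, delta=0.9):
    """Exponential discounting: value = reward * delta ** delay."""
    return reward * delta ** delay

def hyperbolic_value(reward, delay, k=1.0):
    """Hyperbolic discounting: value = reward / (1 + k * delay)."""
    return reward / (1 + k * delay)

def preferred(value_fn, now, small=(10.0, 5), large=(15.0, 7)):
    """Which of two dated rewards the agent prefers, as seen from time `now`."""
    (r_small, t_small), (r_large, t_large) = small, large
    v_small = value_fn(r_small, t_small - now)
    v_large = value_fn(r_large, t_large - now)
    return "small-soon" if v_small > v_large else "large-late"

if __name__ == "__main__":
    # Choice: 10 units at t=5 versus 15 units at t=7, evaluated early (t=0)
    # and again just as the smaller reward comes due (t=5).
    for name, fn in [("exponential", exponential_value), ("hyperbolic", hyperbolic_value)]:
        early, late = preferred(fn, now=0), preferred(fn, now=5)
        verdict = "consistent" if early == late else "preference reversal"
        print(f"{name:11s}: t=0 -> {early}, t=5 -> {late}  ({verdict})")
```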
Emphasizing “might makes right” may cause one to see the universe as full of (possibly accidental) adversaries, where diverging preferences burn up most of the cosmic commons and the winners (whom one won’t identify with and whose preferences one won’t transitively value) take whatever computronium is left. This sort of thinking is associated with reductionism, utilitarianism/economics, and a passive or active lack of interest in morality qua morality or shouldness qua shouldness. I won’t hastily list any counterarguments here for fear of giving them a bad name, but will instead opine that the counterarguments seem to me vastly underappreciated (when noticed) by what I perceive to be the prototypical median mind of Less Wrong.
Emphasizing “right enables might” may cause one to see the future as timelessly determined to be perfect, full of infinitely intricate patterns of interaction and with every sacrifice seen in hindsight as a discordant note necessary for the enabling of greatest beauty, the unseen object of sehnsucht revealed as the timeless ensemble itself. This sort of thinking is associated with “objective morality” in all its vagueness, “God” in all its/His vagueness, and in a milder form a certain conception of the acausal economy in all its vagueness. Typical objections include: What if this timeless perfection is determined to be the result of your timeful striving? Moral progress isn’t inevitable. How does knowing that it will work out help you help it work out? What if, as is likely, this perfection is something that may be approached but never reached? Won’t there always be an edge, a Pareto frontier, a front behind which remaining resources should be placed, a causal bottleneck, somewhere in time? Then why not renormalize, and see that war, that battle, that moment where things are still up in the air, as the only world? Are you yourself not almost entirely a conglomeration of imperfections that will get burned away in Pentecostal fire, as will most everything you now love? Et cetera. There are many similar and undoubtedly stronger arguments to gnaw at the various parts of an optimist’s mind; the arguments I gave are interior to those the Less Wrong community has cached.
Right view cuts through the drama of either tinted lens to find Tao if doing so is De. It’s a hypothesis.
That post does not mean what you think it means.
Yeah, it does. I especially thought about you when I linked to it.
What do you think I think it means? That post is one of many that are implicitly used to reject any criticism of AI going FOOM. I especially thought about you and your awkward responses about predictions and falsification. In and of itself I agree with the post, but used selectively it becomes a soldier against unwanted criticism. Reading the last few posts of the sequences rerun has again made me more confident that the whole purpose of this project is to brainwash people into buying the AI FOOM idea.
I doubt that’s the main point of the project; I hope I would know, as I have lurked here in great detail since it was first envisioned. That being said, I agree that wedrifid’s answer is surprisingly terse.
Well, Eliezer Yudkowsky is making a living telling people that they ought to donate money to his “charity”.
Almost a year ago I posted my first submission here. I have been aware of OB/LW for longer than that but didn’t care until I read about the Roko incident. That made me really curious.
I am too tired to go into any detail right now, but what I have learnt since then didn’t make me particularly confident in the epistemic standards of LW, despite the solemn assertions of its members.
The short version: the assurance that you are an aspiring rationalist might mislead some people into assigning some credence to your extraordinary claims, but it won’t make them less wrong.
There are many reasons why I am skeptical. As I said in another comment above, reading some of the posts linked to by the sequences rerun made Eliezer Yudkowsky appear much less trustworthy in my opinion. Before, I thought that some of his statements, e.g. “If you don’t sign up your kids for cryonics then you are a lousy parent,” were negligible lapses of sanity, but it now appears to me that such judgmental statements are the rule.
I think I understand your point of view and I agree with your sentiments, but do you honestly believe that Eliezer does this all for the money? I think that he likes being able to spend all his time working on this, and the Singularity Institute definitely treats him well, but from what I’ve seen the majority of people on Less Wrong, including him, really do want to save the world.

As for his statement about cryonics: if he’s passive about it, I don’t think many of the lurkers would consider signing up. Cryonics seems like a long shot to me, but I think it’s reasonable to assume that he writes so emotionally about it because he honestly just wants more people to be vitrified in case we do manage to create an FAI.

I would love to hear more about your reasons for skepticism, because I share many of the same concerns, but so far I’ve heard little that runs contrary to the wisdom on LW/OB.
“i agree that wedrifid’s answer is surprisingly terse”

Not surprisingly, to those who have experience with wedrifid. Merely annoyingly. Though in this case he is making an allusion to a well-known trope. Google the phrase “does not mean what you think it means”. If XiXiDu has referenced that “You’re Entitled …” posting before, for roughly the same debunking purpose, then wedrifid’s terse putdown strikes me as rather clever.
Some context.