Psy and John, I think the idea is this: if you want to buy a hundred shares of OB at ten dollars each, because you think it’s going to go way up, you have to buy them from someone else who’s willing to sell at that price. But clearly that person does not likewise think that the price of OB is going to go way up, because if she did, why would she sell it to you now, at the current price? So in an efficient market, situations where everyone agrees on the future movement of prices simply don’t occur. If everyone thought the price of OB was going to go to thirty dollars a share, then said shares would already be trading at thirty (modulo expectations about interest rates, inflation, &c.).
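A toy sketch of that equilibrium point (my own illustration, not anything proposed in the thread): the `market_clearing_price` helper below is hypothetical, and a single flat discount rate stands in for all the interest-rate and inflation caveats.

```python
# Minimal sketch of the no-trade intuition: if every trader shares the same
# expectation about OB's future price, bids get pushed up until the quoted
# price already reflects that expectation, so no one is left selling at $10.

def market_clearing_price(expected_future_price, discount_rate=0.0):
    """Price shares settle at when everyone holds one common expectation.

    With no disagreement, the only price at which anyone will both buy and
    sell is the expected future price discounted back to the present.
    """
    return expected_future_price / (1.0 + discount_rate)

# Everyone thinks OB is going to $30: the current price is already ~$30.
print(market_clearing_price(30.0))        # 30.0
print(market_clearing_price(30.0, 0.05))  # ~28.57
```

The point is just that a universally shared expectation gets priced in immediately; disagreement is what makes the trade possible at all.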
Z. M. Davis
Kellen: “I am looking for some introductory books on rationality. [...] If any of you [...] have any suggestions [...]”
“To care about the public image of any virtue [...] is to limit your performance of that virtue to what all others already understand of it. Therefore those who would walk the Way of any virtue must first relinquish all desire to appear virtuous before others, for this will limit and constrain their understanding of the virtue.”
Is it possible to quote this without being guilty of trying to foster the public image of not caring about public image? That’s a serious question; I had briefly updated the “Favorite Quotes” section of my Facebook before deciding the potential irony was too great. And does my feeling compelled to ask this have anything to do with the fact that I still don’t understand Löb’s theorem?
“I assume that underlying this is that you love your own minds and despise your own bodies, or are at best indifferent to them.”
Well, duh.
Isn’t the byline usually given as “Stephen Jay Gould”?
Tom: “Hmmm.. Maybe we should put together a play version of 3WC [...]”
That reminds me! Did anyone ever get a copy of the script to Yudkowski Returns? We could put on a benefit performance for SIAI!
ducks
Nick: “Where was it suggested otherwise?”
Oh, no one’s explicitly proposed a “wipe culturally-defined values” step; I’m just saying that we shouldn’t assume that extrapolated human values converge. Cf. the thread following “Moral Error and Moral Disagreement.”
Nick Hay: “[N]either group is changing human values as it is referred to here: everyone is still human, no one is suggesting neurosurgery to change how brains compute value.”
Once again I fail to see how culturally-derived values can be brushed away as irrelevant under CEV. When you convince someone with a political argument, you are changing how their brain computes value. Just because the effect is many orders of magnitude subtler than major neurosurgery doesn’t mean it’s trivial.
I don’t think I see how moral-philosophy fiction is problematic at all. When you have a beautiful moral sentiment that you need to offer to the world, of course you bind it up in a glorious work of high art, and let the work stand as your offering. That makes sense. When you have some info you want to share with the world about some dull ordinary thing that actually exists, that’s when you write a journal article. When you’ve got something to protect, something you need to say, some set of notions that you really are entitled to, then you write a novel.
Just as it is dishonest to fail to be objective in matters of fact, so it is dishonest to feign objectivity where there simply is no fact. Why pretend to make arguments when what you really want to write is a hymn?
“I’m curious if anyone knows of any of EY’s other writings that address the phenomenon of rationality as not requiring consciousness.”
“All right. Open a channel, transmitting my voice only.” [...] Out of sight of the visual frame, Akon gestured [...] [emphasis added]
Erratum?
I suspect it gets worse. Eliezer seems to lean heavily on the psychological unity of humankind, but there’s a lot of room for variance within that human dot. My morality is a human morality, but that doesn’t mean I’d agree with a weighted sum across all possible extrapolated human moralities. So even if you preserve human morals and metamorals, you could still end up with a future we’d find horrifying (albeit better than a paperclip galaxy). It might be said that that’s only a Weirdtopia, that you’re horrified at first, but then you see that it’s actually for the best after all. But if “the utility function [really] isn’t up for grabs,” then I’ll be horrified for as long as I damn well please.
Eliezer: “But if we don’t get good posts from the readership, we (Robin/Eliezer/Nick) may split off OB again.”
I’m worried that this will happen. If we’re not getting main post submissions from non-Robin-and-Eliezer people now, how will the community format really change things? For myself, I like to comment on other people’s posts, but the community format doesn’t appeal to me: to regularly write good main posts, I’d have to commit the time to become a Serious Blogger, and if I wanted to do that, I’d start my own venue, rather than posting to a community site.
“There would never be another Gandhi, another Mandela, another Aung San Suu Kyi—and yes, that was a kind of loss, but would any great leader have sentenced humanity to eternal misery, for the sake of providing a suitable backdrop for eternal heroism? Well, some of them would have. But the down-trodden themselves had better things to do.” —from “Border Guards”
I take it the name is a coincidence.
nazgulnarsil: “What is bad about this scenario? the genie himself [sic] said it will only be a few decades before women and men can be reunited if they choose. what’s a few decades?”
That’s the most horrifying part of all, though—they won’t so choose! By the time the women and men reïnvent enough technology to build interplanetary spacecraft, they’ll be so happy that they won’t want to get back together again. It’s tempting to think that the humans can just choose to be unhappy until they build the requisite technology for reünification—but you probably can’t sulk for twenty years straight, even if you want to, even if everything you currently care about depends on it. We might wish that some of our values are so deeply held that no circumstances could possibly make us change them, but in the face of an environment superintelligently optimized to change our values, it probably just isn’t so. The space of possible environments is so large compared to the narrow set of outcomes that we would genuinely call a win that even the people on the freak planets (see de Blanc’s comment above) will probably be made happy in some way that their pre-Singularity selves would find horrifying. Scary, scary, scary. I’m donating twenty dollars to SIAI right now.
On second thought, correction: relativity restoring far away lands, yes, preserving intuitions, no.
“preserve/restore human intuitions and emotions relating to distance (far away lands and so on)”
Arguably Special Relativity already does this for us. Although I freely admit that a space opera is kind of the antithesis of a Weirdtopia.
“[...] which begs the question [sic] of how we can experience these invisible hedons [...]”
Wh—wh—you said you were sympathetic!
Doug, I meant ceteris paribus.