Eliezer, you’d have done better to ignore ReadABook’s trash. Hir ignorance of your arguments and expertise was obvious.
Cyan2
Peter de Blanc, I don’t have an example, just a vague memory of reading about minimax-optimal decision rules in J. O. Berger’s Statistical Decision Theory and Bayesian Analysis. (That same text notes that minimax rules are Bayes rules under the assumption that your opponent is out to get you.)
IIRC, there exist minimax strategies in some games that are stochastic. There are some games in which it is in fact best to fight randomness with randomness.
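Matching pennies is the standard example: every deterministic strategy is fully exploitable, but randomizing 50/50 guarantees the game's value. A minimal sketch (function names are mine, not from any cited text):

```python
# Matching pennies: payoff to the row player is +1 if the two choices
# match, -1 otherwise. The column player is "out to get you", i.e. picks
# the pure reply that minimizes the row player's expectation.
def payoff(row, col):
    return 1 if row == col else -1

def worst_case(p):
    # Row player plays "H" with probability p; column player best-responds.
    exp_vs_heads = p * payoff("H", "H") + (1 - p) * payoff("T", "H")
    exp_vs_tails = p * payoff("H", "T") + (1 - p) * payoff("T", "T")
    return min(exp_vs_heads, exp_vs_tails)

# Any deterministic strategy (p = 0 or 1) loses 1 in the worst case...
assert worst_case(0.0) == -1 and worst_case(1.0) == -1
# ...but the stochastic 50/50 strategy secures the game value of 0,
# so the minimax rule here is necessarily randomized.
assert worst_case(0.5) == 0.0
```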
For what it’s worth, Tim Tyler, I’m with you. Utility scripts count as programs in my books.
I mean, we weren’t even designed by a mind; we sprang from simple selection!
This is backwards, isn’t it? Reverse engineering a system designed by a (human?) intelligence is a lot easier than reverse engineering an evolved system.
Emile, you’ve mixed up “optimization process” and “intelligence”. According to your post, Eliezer wouldn’t consider evolution an optimization process. He does; he doesn’t consider it intelligent.
...it seems to me much of the beautiful LaTeX equations and formulas are only to give the impression of rigor.
I didn’t suggest equations to enforce some false notion of rigor—I suggested them as an aid to clear communication.
Jef Allbright, it seems to me that if you want Eliezer to take your criticisms seriously, you’re going to need more equations and fewer words. (It would be nice if Eliezer produced some equations too.)
“But I still suspect that there’s a little distance there, that wouldn’t be there otherwise, and I wish my brain would stop doing that.”
A finely crafted recursion. I salute you.
So in my posts on this topic, I proceeded to (attempt to) convey a larger and more coherent context making sense of the ostensible issue.
Right! Now we’re communicating. My point is that the context you want to add is tangential (or parallel...? pick your preferred geometric metaphor) to Eliezer’s point. That doesn’t mean it’s without value, but it does mean that it fails to engage Eliezer’s argument.
But it seems to me that I addressed this head-on at the beginning of my initial post, saying “Of course the ends justify the means—to the extent that any moral agent can fully specify the ends.”
Eliezer’s point is that humans can’t fully specify the ends due to “hostile hardware” issues if for no other reason. The hostile hardware part is key, but you never mention it or anything like it in your original comment. So, no, in my judgment you don’t address it head-on. In contrast, consider Phil Goetz’s first comment (the second of this thread), which attacks the hostile hardware question directly.
Since you said you didn’t know what to do with my statement, I’ll add, just replace the phrase “limit the universe of discourse to” with “consider only” and see if that helps. But I think we’re using the same words to talk about different things, so your original comment may not mean what I think it means, and that’s why my criticism looks wrong-headed to you.
Jef Allbright,
By subsequent discussion, I meant Phil Goetz’s comment about Eliezer “neglecting that part accounted for by the unpredictability of the outcome”. I’m with him on not understanding what “a model of evolving values increasingly coherent over increasing context, with effect over increasing scope of consequences” means; I also found your reply to me utterly incomprehensible. In fact, it’s incredible to me that the same mind that could formulate that reply to me would come shuddering to a halt upon encountering the unexceptionable phrase “universe of discourse”.
But in an interesting world of combinatorial explosion of indirect consequences, and worse yet, critically underspecified inputs to any such supposed moral calculations, no system of reasoning can get very far betting on longer-term specific consequences.
This point and the subsequent discussion are tangential to the point of the post, to wit, evolutionary adaptations can cause us to behave in ways that undermine our moral intentions. To see this, limit the universe of discourse to actions which have predictable effects and note that Eliezer’s argument still makes strong claims about how humans should act.
1) Do you believe this is true for you, or only other people?
I don’t fit the premise of the statement—my cherished spouse is not yet late, so it’s hard to say.
2) If you know that someone’s cherished late spouse cheated on them, are you justified in keeping silent about the fact?
Mostly yes.
3) Are you justified in lying to prevent the other person from realizing?
Mostly no.
4) If you suspect for yourself (but are not sure) that the cherished late spouse might have been unfaithful, do you think that you will be better off, both for the single deed, and as a matter of your whole life, if you refuse to engage in any investigation that might resolve your doubts one way or the other?
Depends on the person. Some people would be able to leave their doubts unresolved and get on with their life—others would find their quality of life affected by their persistent doubts.
If there is no resolving investigation, do you think that exerting some kind of effort to “persuade yourself”, will leave you better off?
No. You can count that as a win if you like—“deluding myself” is too strong. “I am better off remaining deluded …” is more likely to be true for some people.
5) Would you rather associate with friends who would (a) tell you if they discovered previously unsuspected evidence that your cherished late spouse had been unfaithful, or who would (b) remain silent about it?
Supposing I am emotionally fragile and might harm myself if I discovered that my spouse had been unfaithful, (b). Supposing that I am emotionally stable and that I place great weight on having an accurate view of the circumstances of my life, (a). Other situations, other judgment calls.
Which would be a better human being in your eyes, and which would be a better friend to you?
Depends on how I can reasonably be expected to react.
Fact check: MDL is not Bayesian. Done properly, it doesn’t even necessarily obey the likelihood principle. Key term: normalized maximum likelihood distribution.
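For the curious, here is a minimal sketch of the normalized maximum likelihood (NML) distribution for a Bernoulli model of n binary trials: the maximized likelihood of the observed sequence, divided by the sum of maximized likelihoods over all possible sequences. (The function names are mine; this is just an illustration of the key term, not anyone's reference implementation.)

```python
from math import comb

def max_lik(k, n):
    # Maximized Bernoulli likelihood of one sequence with k ones in n
    # trials, evaluated at the MLE theta = k/n (convention: 0**0 = 1).
    return (k / n) ** k * ((n - k) / n) ** (n - k)

def nml(k, n):
    # NML probability of one particular sequence with k ones: the
    # maximized likelihood, normalized over all 2**n sequences
    # (grouped by count via the binomial coefficient).
    norm = sum(comb(n, j) * max_lik(j, n) for j in range(n + 1))
    return max_lik(k, n) / norm

# The NML probabilities over counts sum to one, as a distribution must.
total = sum(comb(10, k) * nml(k, 10) for k in range(11))
assert abs(total - 1.0) < 1e-12
```

Because the normalizer sums over sequences that were never observed, the resulting code length depends on data you didn't see—which is why NML need not obey the likelihood principle.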
...the language and the arts with my comparison...
I thought about going this way, but I decided to stick with what I know.
Since sarcasm seems to have failed, let me just state flatly that all of the cultures we’ve mentioned have enough members and enough diversity that blanket assertions such as “Japanese martial arts are worse than Chinese ones” or “American football is a cheap knockoff of rugby” are reductive and parochial to the point of not-even-wrongness.
Most American culture seems like a reinvention of British culture demanded by national pride. My impression is that their versions are like cheap knock-offs of the originals. Their beers are worse. They even managed to mess up the game of football. America faces limitations due to their vast tracts of underpopulated flyover country. That’s a problem Britain doesn’t have.
For those who are interested, a fellow named Kevin S. Van Horn has compiled a nice unofficial errata page for PT:LOS here. (Check the acknowledgments for a familiar name.)
This is a false dichotomization. Everything is reality!
“Quotation mode” is analogous to an escape character. There’s no dualism here.
Jeff, if you search for my pseudonym in the comments of the “Natural Selection’s Speed Limit and Complexity Bound” post, you will see that I have already brought MacKay’s work to Eliezer’s attention. Whatever conclusions he’s come to have already factored MacKay in.