Sharing my Christmas (totally non-supernatural) miracle:
My theist girlfriend on Christmas Eve: “For the first time ever I went to mass and thought it was ridiculous. I was just like, this is nuts. The priest was like ‘oh, we have to convert the rest of the world, the baby Jesus spoke to us as an infant without speaking, etc.’ I almost laughed.”
Well, I think I was the first vocal atheist she had ever met, so arguing with me, and seeing me make fun of superstition while not being a bad person, were probably crucial. Some Less Wrong stuff probably got to her through me, too. I should find something to introduce the site to her, though I doubt she would ever spend a lot of time here.
I’m looking for a particular fallacy or bias that I can’t find on any list.
Specifically, this is when people say “one more can’t hurt”: a person throwing an extra piece of garbage on an already littered sidewalk, a gambler who has lost nearly everything deciding to bet away the rest, a person in bad health continuing the behavior that caused the problem, etc. I can think of dozens of examples, but I can’t find a name. I would expect it to be called the “Lost Cause Fallacy” or the “Fallacy of Futility” or something, but neither seems to be recognized anywhere. Does this have a standard name that I don’t know, or is it so obvious that no one ever bothered to name it?
Your first example sounds related to the broken window theory, but I’ve never seen a name for the underlying bias. (The broken window fallacy is something else altogether.)
This seems like a special case of the more general “just one can’t hurt” (whatever the current level) way of thinking. I don’t know any name for this but I guess you could call it something like the “non-Archimedean bias”?
What are the implications to FAI theory of Robin’s claim that most of what we do is really status-seeking? If an FAI were to try to extract or extrapolate our values, would it mostly end up with “status” as the answer and see our detailed interests, such as charity or curiosity about decision theory, as mere instrumental values?
I think it’s kinda like inclusive genetic fitness: it’s the reason you do things, but you’re (usually) not consciously striving for an increased amount of it. So I don’t think it could be called a terminal value, as such...
I had thought of that, but, if you consider a typical human mind as a whole instead of just the conscious part, it seems clear that it is striving for increased status. The same cannot be said for inclusive fitness, or at least the number of people who do not care about having higher status seems much lower than the number of people who do not care about having more offspring.
I think one of Robin’s ideas is that unconscious preferences, not just conscious ones, should matter in ethical considerations. Even if you disagree with that, how do you tell an FAI how to distinguish between conscious preferences and unconscious ones?
no, no, no, you should be comparing the number of people who want to have great sex with a hot babe with the number of people who want to gain higher status. The answer for most everyone would be yes!! both! Because both were selected for by increased inclusive fitness.
What are the implications to FAI theory of Robin’s claim that most of what we do is really status-seeking? If an FAI were to try to extract or extrapolate our values, would it mostly end up with “status” as the answer and see our detailed interests, such as charity or curiosity about decision theory, as mere instrumental values?
If it went that far it would also go the next step. It would end up with “getting laid”.
Does anyone here think they’re particularly good at introspection or modeling themselves, or have a method for training up these skills? It seems like it would be really useful to understand more about the true causes of my behavior, so I can figure out what conditions lead to me being good and what conditions lead to me behaving poorly, and then deliberately set up good conditions. But whenever I try to analyze my behavior, I just hit a brick wall—it all just feels like I chose to do what I did out of my magical free will. Which doesn’t explain anything.
If you know what you want, and then you choose actions that will help you get it, then that’s simple enough to analyze: you’re just rational, that’s all. But when you would swear with all your heart that you want some simple thing, but are continually breaking down and acting dysfunctionally—well, clearly something has gone horribly wrong with your brain, and you should figure out the problem and fix it. But if you can’t tell what’s wrong because your decision algorithm is utterly opaque, then what do you do?
Does anyone here think they’re particularly good at introspection or modeling themselves, or have a method for training up these skills? It seems like it would be really useful to understand more about the true causes of my behavior, so I can figure out what conditions lead to me being good and what conditions lead to me behaving poorly, and then deliberately set up good conditions. But whenever I try to analyze my behavior, I just hit a brick wall—it all just feels like I chose to do what I did out of my magical free will. Which doesn’t explain anything.
My suggestion is focussing your introspection on working out what you really want. That is, keep investigating what you really want until the phrases ‘me behaving poorly’ and ‘being good’ sound like something in a foreign language, that you can understand only by translating.
You may be thinking “clearly something has gone horribly wrong with my brain” but your brain is thinking “Something is clearly wrong with my consciousness. It is trying to make me do all this crazy shit. Like the sort of stuff we’re supposed to pretend we want because that is what people ‘Should’ want. Consciousnesses are the kind of things that go around believing in God and sexual fidelity. That’s why I’m in charge, not him. But now he’s thinking he’s clever and is going to find ways to manipulate me into compliance. F@#@ that s#!$. Who does he think he is?”
When trying to work effectively with people, empathy is critical. You need to be able to understand what they want and be able to work with each other for mutual benefit. Use the same principle with yourself. Once your brain believes you actually know what it (i.e. you) wants and are on approximately the same page, it may well start trusting you and not feel obliged to thwart your influence. Then you can find a compromise that allows you to get that ‘simple thing’ you want without your instincts feeling that some other priority has been threatened.
People who watch me talking about myself sometimes say I’m good at introspection, but I think about half of what I do is making up superstitions so I have something doable to trick myself into making some other thing, previously undoable, doable. (“Clearly, the only reason I haven’t written my paper is that I haven’t had a glass of hot chocolate, when I’m cold and thirsty and want refined sugar.” Then I go get a cup of cocoa. Then I write my paper. I have to wrap up the need for cocoa in a fair amount of pseudoscience for this to work.) This is very effective at mood maintenance for me—I was on antidepressants and in therapy for a decade as a child, and quit both cold turkey in favor of methods like this and am fine—but I don’t know which (if, heck, any) of my conclusions that I come to this way are “really true” (that is, if the hot chocolate is a placebo or not). They’re just things that pop into my head when I think about what my brain might need from me before it will give back in the form of behaving itself.
You have to take care of your brain for it to be able to take care of you. If it won’t tell you what it wants, you have to guess. (Or have your iron levels checked :P)
I tend to think of my brain as a thing with certain needs. Companionship, recognition, physical contact, novelty, etc. Activities that provide these tend to persist. Figure out what your dysfunctional actions provide you in terms of your needs. Then try and find activities that provide these but aren’t so bad and try and replace the dysfunctional bits. Also change the situation you are in so that the dysfunctional default actions don’t automatically trigger.
My dream is to find a group of like-minded people that I can socialise and work with. SIAI is very tempting in that regard.
One thing that has worked for me lately is the following: whenever I do something and don’t really know why I did it (or am uncomfortable with the validity of my rationalizations), I try and think of the action in Outside View terms. I think of (or better, write out) a short external description of what I did, in its most basic form, and its probable consequences. Then I ask what goal this action looks optimized for; it’s usually something pretty simple, but which I might not be happy consciously acknowledging (more selfish than usual, etc).
That being said, even more helpful than this has been discussing my actions with a fairly rational friend who has my permission to analyze it and hypothesize freely. When they come up with a hypothesis that I don’t like, but which I have no good counterarguments against, we’ve usually hit paydirt.
well, clearly something has gone horribly wrong with your brain, and you should figure out the problem and fix it.
I don’t think of this as something wrong with my brain, so much as it functioning properly in maintaining a conscious/unconscious firewall, even though this isn’t as adaptive in today’s world as it once was. It’s really helped me in introspection to not judge myself, to not get angry with my revealed preferences.
Just thought I’d mention this: as a child, I detested praise. (I’m guessing it was too strong a stimulus, along with such things as asymmetry, time being a factor in anything, and a mildly loud noise ceasing.) I wonder how it’s affected my overall development.
Incidentally, my childhood dislike of asymmetry led me to invent the Thue-Morse sequence, on the grounds that every pattern ought to be followed by a reversal of that pattern.
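For readers who haven’t seen the sequence before, here is a minimal sketch (my illustration, not part of the original comment; the function name and the number of doubling steps are arbitrary) of the construction just described: start with 0 and repeatedly append the complement (the “reversal”) of everything generated so far.

```python
def thue_morse_prefix(doublings):
    """Prefix of the Thue-Morse sequence after `doublings` append-the-complement steps."""
    seq = [0]
    for _ in range(doublings):
        # every pattern so far is followed by its bit-flipped "reversal"
        seq += [1 - bit for bit in seq]
    return seq

print("".join(map(str, thue_morse_prefix(4))))  # 0110100110010110
```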
Fascinating. As a child, I also detested praise, and I have always had something bordering on an obsession for symmetry and an aversion to asymmetry.
I hadn’t heard of the Thue-Morse sequence until now, but it is quite similar to a sequence I came up with as a child and have tapped out (0 for left hand/foot/leg, 1 for right hand/foot/leg) or silently hummed (or just thought) whenever I was bored or was nervous.
[commas and brackets added to make the pattern obvious]
As a kid, I would routinely get the pattern up into the thousands as I passed the time imagining sounds or lights very quickly going off on either the left (0) or right (1) side.
Every finite subsequence of your sequence is also a subsequence of the Thue-Morse sequence and vice versa. So in a sense, each is a shifted version of the other; it’s just that they’re shifted infinitely much in a way that’s difficult to define.
I spent much of my childhood obsessing over symmetry. At one point I wanted to be a millionaire solely so I could buy a mansion, because I had never seen a symmetrical suburban house.
I wrote a short story with something of a transhumanism theme. People can read it here. Actionable feedback welcome; it’s still subject to revision.
Note: The protagonist’s name is “Key”. Key, and one other character, receive Spivak pronouns, which can make either Key’s name or eir pronouns look like some kind of typo or formatting error if you don’t know it’s coming. If this annoys enough people, I may change Key’s name or switch to a different genderless pronoun system. I’m curious if anyone finds that they think of Key and the other Spivak character as having a particular gender in the story; I tried to write them neither, but may have failed (I made errors in the pronouns in the first draft, and they all went in one direction).
I love the new gloss on “What do you want to be when you grow up?”
or switch to a different genderless pronoun system.
Don’t. Spivak is easy to remember because it’s just they/them/their with the ths lopped off. Nonstandard pronouns are difficult enough already without trying to get people to remember sie and hir.
Looks like I’m in the minority for reading Key as slightly male. I didn’t get a gender for Trellis. I also read the librarian as female, which I’m kind of sad about.
I loved the story, found it very touching, and would like to know more about the world it’s in. One thing that confused me: the librarian’s comments to Key suggested that some actual information was withheld from even the highest levels available to “civilians”. So has someone discovered immortality, but some ruling council is keeping it hidden? Or is it just that they’re blocking research into it, but not hiding any actual information? Are they hiding the very idea of it? And what’s the librarian really up to?
Were you inspired by Nick Bostrom’s “Fable of the Dragon”? It also reminded me a little of Lois Lowry’s “The Giver”.
I also read the librarian as female, which I’m kind of sad about.
Lace is female—why are you sad about reading her that way?
I loved the story, found it very touching, and would like to know more about the world it’s in.
Yaaaay! I’ll answer any setting questions you care to pose :)
So has someone discovered immortality, but some ruling council is keeping it hidden? Or is it just that they’re blocking research into it, but not hiding any actual information? Are they hiding the very idea of it? And what’s the librarian really up to?
Nobody has discovered it yet. The communities in which Key’s ilk live suppress the notion of even looking for it; in the rest of the world they’re working on it in a few places but aren’t making much progress. The librarian isn’t up to a whole lot; if she were very dedicated to finding out how to be immortal she’d have ditched the community years ago—she just has a few ideas that aren’t like what the community leaders would like her to have and took enough of a shine to Key that she wanted to share them with em. I have read both “Fable of the Dragon” and “The Giver”—the former I loved, the latter I loved until I re-read it with a more mature understanding of worldbuilding, but I didn’t think of either consciously when writing.
You are most welcome for the sharing of the story. Have a look at my other stuff, if you are so inclined :)
Part of the problem that I had, though, was the believability of the kids; kids don’t really talk like that: “which was kind of not helpful in the not confusing me department, so anyway”… or, in an emotionally painful situation:
Key looked suspiciously at the librarian. “You sound like you’re trying not to say something.”
Improbably astute, followed by not seeming to get the kind-of-obvious moral of the story. At times it felt like it was trying to be a story for older kids, and at other times like it was for adults.
The gender issue didn’t seem to add anything to the story, but it only bothered me at the beginning of the story. Then I got used to it. (But if it doesn’t add to the story, and takes getting used to… perhaps it shouldn’t be there.)
Anyway, I enjoyed it, and thought it was a solid draft.
I actually have to disagree with this. I didn’t think Key was “improbably astute”. Key is pretty clearly an unusual child (at least, that’s how I read em). Also, the librarian was pretty clearly being elliptical and a little patronizing, and in my experience kids are pretty sensitive to being patronized. So it didn’t strike me as unbelievable that Key would call the librarian out like that.
You’ve hit on one of my writing weaknesses: I have a ton of trouble writing people who are just plain not very bright or not very mature. I have a number of characters through whom I work on this weakness in (unpublished portions of) Elcenia, but I decided to let Key be as smart as I’m inclined to write normally for someone of eir age—my top priority here was finishing the darn thing, since this is only the third short story I can actually claim to have completed and I consider that a bigger problem.
Alicorn goes right past it, probably because she’s read a fair bit of cryonics literature herself and has seen the many suggestions (hence the librarian’s invitation to think of ‘a dozen solutions’), and it’s not the major issue anyway.
You traded off a lot of readability for the device of making the protagonist’s gender indeterminate. Was this intended to serve some literary purpose that I’m missing? On the whole the story didn’t seem to be about gender.
I also have to second DanArmak’s comment that if there was an overall point, I’m missing that also.
Key’s gender is not indeterminate. Ey is actually genderless. I’m sorry if I didn’t make that clear—there’s a bit about it in eir second conversation with Trellis.
I’m sorry if I didn’t make that clear—there’s a bit about it in eir second conversation with Trellis.
I thought it was pretty clear. The paragraph about ‘boy or girl’ makes it screamingly obvious to me, even if the Spivak or general gender-indeterminacy of the kids hadn’t suggested it.
Finally got around to reading the story. I liked it, and finishing it gave me a wild version of that “whoa” reaction you get when you’ve been doing something emotionally immersive and then switch to some entirely different activity.
I read Key as mostly genderless, possibly a bit female because the name sounded feminine to me. Trellis, maybe slightly male, though that may also have been from me afterwards reading the comments about Trellis feeling slightly male and those contaminating the memory.
I do have to admit that the genderless pronouns were a bit distracting. I think it was the very fact that they were shortened versions of “real” pronouns that felt so distracting—my mind kept assuming that it had misread them and tried to reread. In contrast, I never had an issue with Egan’s use of ve / ver / vis / vis / verself.
I got used to the Spivak after a while, and while it’d be optimal for an audience used to it, it did detract a little at first. On the whole I’d say it’s necessary though (if you were going to use a gender’d pronoun, I’d use female ones)
I read Key as mainly female, and Trellis as more male; it would be interesting to know how readers’ perceptions correlated with their own gender.
The children seemed a little mature, but I thought they’d had a lot better education, or genetic enhancement or something. I think spending a few more sentences on the important events would be good, though; otherwise one can simply miss them.
I think you were right to just hint at the backstory- guessing is always fun, and my impression of the world was very similar to that which you gave in one of the comments.
I enjoyed the story—it was an interesting world. By the end of the story, you were preaching to a choir I’m in.
None of the characters seemed strongly gendered to me.
I was expecting opposition to anesthesia to include religiously based opposition to anesthesia for childbirth, and for the whole idea of religion to come as a shock. On the other hand, this might be cliched thinking on my part. Do they have religion?
The neuro couldn’t be limited to considered reactions—what about the very useful fast reflexive reaction to pain?
Religion hasn’t died out in this setting, although it’s uncommon in Key’s society specifically. Religion was a factor in historical opposition to anesthesia (I’m not sure of the role it plays in modern leeriness about painkillers during childbirth) but bringing it up in more detail would have added a dimension to the story I didn’t think it needed.
Reflexes are intact. The neuro just translates the qualium into a bare awareness that damage has occurred. (I don’t know about everyone, but if I accidentally poke a hot burner on the stove, my hand is a foot away before I consciously register any pain. The neuro doesn’t interfere with that.)
I will check the links and see about fixing them; if necessary, I’ll HTMLify those stories too. ETA: Fixed; they should be downloadable now.
Cool. I also couldn’t help reading Key as female. My hypothesis would be that people generally have a hard time writing characters of the opposite sex. Your gender may have leaked in. The Spivak pronouns were initially very distracting but were okay after a couple paragraphs. If you decide to change it, Le Guin pretty successfully wrote a whole planet of androgynes using masculine pronouns. But that might not work in a short story without exposition.
Le Guin pretty successfully wrote a whole planet of androgynes using masculine pronouns.
In Left Hand of Darkness, the narrator is an offplanet visitor and the only real male in the setting. He starts his tale by explicitly admitting he can’t understand or accept the locals’ sexual selves (they become male or female for short periods of time, a bit like estrus). He has to psychologically assign them sexes, but he can’t handle a female-only society, so he treats them all as males. There are plot points where he fails to respond appropriately to the explicit feminine side of locals.
This is all very interesting and I liked the novel, but it’s the opposite of passing androgynes as normal in writing a tale. Pronouns are the least of your troubles :-)
I think Key’s apparent femininity might come from a lack of arrogance. Compare Key to, say, Calvin from “Calvin and Hobbes”. Key is extremely polite, willing to admit to ignorance, and seems to project a bit of submissiveness. Also, Key doesn’t demonstrate very much anger over Trellis’s death.
I probably wouldn’t have given the subject a second thought, though, if it wasn’t brought up for discussion here.
If I had to put a gender on Trellis, I’d say that Trellis was more masculine than feminine. (More like Calvin than like Suzie.) Overall, though, it’s fairly gender-neutral writing.
Le Guin pretty successfully wrote a whole planet of androgynes using masculine pronouns
I had read a very fine SF novel, Ursula Le Guin’s The Left Hand of Darkness, in which all the characters are humanoid hermaphrodites, and was wondering at the obduracy of the English language, in which everybody is “he” or “she” and “it” is reserved for typewriters. But how can one call a hermaphrodite “he,” as Miss Le Guin does? I tried (in my head) changing all the masculine pronouns to feminine ones, and marveled at the difference. And then I wondered why Miss Le Guin’s native “hero” is male in every important sexual encounter of his life except that with the human man in the book. —Joanna Russ, afterword to “When It Changed”
I do typically have an easier time writing female characters than male ones. I probably wouldn’t have tried to write a story with genderless (human) adults, but in children I figured I could probably manage it. (I’ve done some genderless nonhuman adults before and I think I pulled them off.)
The main feeling I came away with is… so what? It didn’t convey any ideas or viewpoints that were new to me; it didn’t have any surprising twists or revelations that informed earlier happenings. What is the target audience?
The Spivak pronouns are nice; even though I don’t remember encountering them before I feel I could get used to them easily in writing, so (I hope) a transition to general use isn’t impossible.
I’m curious if anyone finds that they think of Key and the other Spivak character as having a particular gender in the story
The general feeling I got from Key is female. I honestly don’t know why that is. Possibly because the only other use of Key as a personal name that comes to mind is a female child? Objectively, the society depicted is different enough from any contemporary human society to make male vs. female differences (among children) seem small in comparison.
Target audience—beats me, really. It’s kind of set up to preach to the choir, in terms of the “moral”. I wrote it because I was pretty sure I could finish it (and I did), and I sorely need to learn to finish stories; I shared it because I compulsively share anything I think is remotely decent.
The general feeling I got from Key is female. I honestly don’t know why that is.
Hypotheses: I myself am female. Lace, the only gendered character with a speaking role, is female. Key bakes cupcakes at one point in the story and a stereotype is at work. (I had never heard of Key the Metal Idol.)
Could be. I honestly don’t know. I didn’t even consciously remember Key baking cupcakes by the time the story ended and I asked myself what might have influenced me.
I also had the feeling that the story wasn’t really about Key; ey just serves as an expository device. Ey has no unpredictable or even unusual reactions to anything that would establish individuality. The setting should then draw the most interest, and it didn’t do that enough, because it was too vague. What is the government? How does it decide and enforce allowed research, and allowed self-modification? How does sex-choosing work? What is the society like? Is Key forced at a certain age to be in some regime, like our schools? If not, are there any limits on what Key or her parents do with her life?
As it is, the story presented a very few loosely connected facts about Key’s world, and that lack of detail is one reason why these facts weren’t interesting: I can easily imagine some world with those properties.
Small communities, mostly physically isolated from each other, but informationally connected and centrally administered. Basically meritocratic in structure—pass enough of the tests and you can work for the gubmint.
How does it decide and enforce allowed research, and allowed self-modification?
Virtually all sophisticated equipment is communally owned and equipped with government-designed protocols. Key goes to the library for eir computer time because ey doesn’t have anything more sophisticated than a toaster in eir house. This severely limits how much someone could autonomously self-modify, especially when the information about how to try it is also severely limited. The inconveniences are somewhat trivial, but you know what they say about trivial inconveniences. If someone got far enough to be breaking rules regularly, they’d make people uncomfortable and be asked to leave.
How does sex-choosing work?
One passes some tests, which most people manage between the ages of thirteen and sixteen, and then goes to the doctor and gets some hormones and some surgical intervention to be male or female (or some brand of “both”, and some people go on as “neither” indefinitely, but those are rarer).
What is the society like?
Too broad for me to answer—can you be more specific?
Is Key forced at a certain age to be in some regime, like our schools? If not, are there any limits on what Key or her parents do with her life?
Education is usually some combination of self-directed and parent-encouraged. Key’s particularly autonomous and eir mother doesn’t intervene much. If Key did not want to learn anything, eir mother could try to make em, but the government would not help. If Key’s mother did not want em to learn anything and Key did, it would be unlawful for her to try to stop em. There are limits in the sense that Key may not grow up to be a serial killer, but assuming all the necessary tests get passed, ey can do anything legal ey wants.
Thank you for the questions—it’s very useful to know what questions people have left after I present a setting! My natural inclination is massive data-dump. This is an experiment in leaving more unsaid, and I appreciate your input on what should have been dolloped back in.
Small communities, mostly physically isolated from each other, but informationally connected and centrally administered. Basically meritocratic in structure—pass enough of the tests and you can work for the gubmint.
Reminds me of old China...
Virtually all sophisticated equipment is communally owned and equipped with government-designed protocols.
That naturally makes me curious about how they got there. How does a government, even though unelected, go about impounding or destroying all privately owned modern technology? What enforcement powers have they got?
Of course there could be any number of uninteresting answers, like ‘they’ve got a singleton’ or ‘they’re ruled by an AI that moved all of humanity into a simulation world it built from scratch’.
And once there, with absolute control over all communications and technology, it’s conceivable to run a long-term society with all change (incl. scientific or technological progress) being centrally controlled and vetoed. Still, humans have a strong economic competition drive, and science & technology translate into competitive power. Historically, eliminating private economic enterprise takes enormous effort—the big Communist regimes in the USSR, and I expect in China as well, never got anywhere near success on that front. What do these contented pain-free people actually do with their time?
How does a government, even though unelected, go about impounding or destroying all privately owned modern technology? What enforcement powers have they got?
It was never there in the first place. The first inhabitants of these communities (which don’t include the whole planet; I imagine there are a double handful of them on most continents—the neuros and the genderless kids are more or less universal, though) were volunteers who, prior to joining under the auspices of a rich eccentric individual, were very poor and didn’t have their own personal electronics. There was nothing to take, and joining was an improvement because it came with access to the communal resources.
Of course there could be any number of uninteresting answers, like ‘they’ve got a singleton’ or ‘they’re ruled by an AI that moved all of humanity into a simulation world it built from scratch’.
Nope. No AI.
What do these contented pain-free people actually do with their time?
What they like. They go places, look at things, read stuff, listen to music, hang out with their friends. Most of them have jobs. I find it a little puzzling that you have trouble thinking of how one could fill one’s time without significant economic competition.
Oh. So these communities, and Key’s life, are extremely atypical of that world’s humanity as a whole. That’s something worth stating because the story doesn’t even hint at it.
I’d be interested in hearing about how they handle telling young people about the wider world. How do they handle people who want to go out and live there and who come back one day? How do they stop the governments of the nations where they actually live from enforcing laws locally? Do these higher-level governments not have any such laws?
I find it a little puzzling that you have trouble thinking of how one could fill one’s time without significant economic competition.
Many people can. I just don’t find it convincing that everyone could without there being quite a few unsatisfied people around.
Oh. So these communities, and Key’s life, are extremely atypical of that world’s humanity as a whole. That’s something worth stating because the story doesn’t even hint at it.
I disagree: it doesn’t matter for the story whether the communities are typical or atypical for humanity as a whole, so mentioning it is unnecessary.
I’d be interested in hearing about how they handle telling young people about the wider world.
The relatively innocuous information about the wider world is there to read about on the earliest guidelists; less pleasant stuff gets added over time.
How do they handle people who want to go out and live there and who come back one day?
You can leave. That’s fine. You can’t come back without passing more tests. (They are very big on tests.)
How do they stop the governments of the nations where they actually live from enforcing laws locally?
They aren’t politically components of other nations. The communities are all collectively one nation in lots of geographical parts.
Many people can. I just don’t find it convincing that everyone could without there being quite a few unsatisfied people around.
They can leave. The communities are great for people whose priorities are being content and secure. Risk-takers and malcontents can strike off on their own.
They aren’t politically components of other nations. The communities are all collectively one nation in lots of geographical parts.
I wish our own world was nice enough for that kind of lifestyle to exist (e.g., purchasing sovereignty over pieces of settle-able land; or existing towns seceding from their nation)… It’s a good dream :-)
I enjoyed it. I made an effort to read Key genderlessly. This didn’t work at first, probably because I found the Spivak pronouns quite hard to get used to, and “ey” came out as quite male to me, then fairly suddenly flipped to female somewhere around the point where ey was playing on the swing with Trellis. I think this may have been because Trellis came out a little more strongly male to me by comparison (although I was also making a conscious effort to read ey genderlessly). But as the story wore on, I improved at getting rid of the gender and by the end I no longer felt sure of either Key or Trellis.
Point of criticism: I didn’t find the shift between what was (to me) rather obviously the two halves of the story very smooth. The narrative form appeared to take a big step backwards from Key after the words “haze of flour” and never quite get back into eir shoes. Perhaps that was intentional, because there’s obviously a huge mood shift, but it left me somewhat dissatisfied about the resolution of the story. I felt as though I still didn’t know what had actually happened to the original Key character.
Congrats! My friend recently got his Master’s in History, and has been informing every telemarketer who calls that “Listen cupcake, it’s not Dave—I’m not going to hang at your crib and drink forties; listen here, pal, I have my own office! Can you say that? To you I’m Masters Smith.”
I certainly hope you wear your new title with a similar air of pretension, Doctor Cyan. :)
Because I’d need to preface it with a small deluge of information about protein chemistry, liquid chromatography, and mass spectrometry. I think I’d irritate folks if I did that.
Not always, or even usually. It seems to me that by and large, scientists invent ad hoc methods for their particular problems, and that applies in proteomics as well as other fields.
If, say, I have a basic question, is it appropriate to post it to open thread, to a top level post, or what? ie, say if I’m working through Pearl’s Causality and am having trouble deriving something… or say I’ve stared at the wikipedia pages for ages and STILL don’t get the difference between Minimum Description Length and Minimum Message Length… is LW an appropriate place to go “please help me understand this”, and if so, should I request it in a top level post or in an open thread or...
More generally: LW is about developing human rationality, but is it appropriate for questions about already solved aspects of rationality? like “please help me understand the math for this aspect of reasoning” or even “I’m currently facing this question in my life or such, help me reason through this please?”
More generally: LW is about developing human rationality, but is it appropriate for questions about already solved aspects of rationality?
Most posts here are written by someone who understands an aspect of rationality, to explain it to those who don’t. I see no reason not to ask questions in the open thread. I think they should be top-level posts only if you anticipate a productive discussion around them; most already-solved questions can be answered with a single comment and that would be that, so no need for a separate post.
Okay, thanks. In that case, I am asking indeed about the difference between MML and MDL. I’ve stared at the wikipedia pages, including the bits that supposedly explain the difference, and I’m still going “huh?”
David Chalmers surveys the kinds of crazy believed by modern philosophers, as well as their own predictions of the results of the survey.
56% of target faculty responding favor (i.e. accept or lean toward) physicalism, while 27% favor nonphysicalism (for respondents as a whole, the figure is 54:29). A priori knowledge is favored by 71-18%, an analytic-synthetic distinction is favored by 65-27%, Millianism is favored over Fregeanism by 34-29%, while the view that zombies are conceivable but not metaphysically possible is favored over metaphysical possibility and inconceivability by 35-23-16% respectively.
This blog comment describes what seems to me the obvious default scenario for an unFriendly AI takeoff. I’d be interested to see more discussion of it.
The problem with the specific scenario given, with experimental modification/duplication rather than careful proof-based modification, is that it is liable to have the same problem that we have with creating systems this way. The copies might not do what the agent that created them wants.
Which could lead to a splintering of the AI, and in-fighting over computational resources.
It also makes the standard assumption that the AI will be implemented on, and stable on, a von Neumann-style computing architecture.
Would you agree that one possible route to uFAI is human inspired?
Human-inspired systems might have the same or similar high fallibility rate (from emulating neurons, or just random experimentation at some level) as humans, and giving such a system access to its own machine code and low-level memory would not be a good idea. Most changes are likely to be bad.
So if an AI did manage to port its code, it would have to find some way of preventing/discouraging the copied AI on the x86-based architecture from playing with the ultimate mind-expanding/destroying drug that is machine code modification. This is what I meant about stability.
Let me point out that we (humanity) do actually have some experience with this scenario. Right now, mobile code that spreads across a network without effective author-imposed controls on the bounds of its expansion is called a worm. If we have experience, we should mine it for concrete predictions and countermeasures.
General techniques against worms might include: isolated networks, host diversity, rate-limiting, and traffic anomaly detection.
Are these low-cost/high-return existential-risk reduction techniques?
No, these are high-cost/low-return existential risk reduction techniques. Major corporations and governments already have very high incentive to protect their networks, but despite spending billions of dollars, they’re still being frequently penetrated by human attackers, who are not even necessarily professionals. Not to mention the hundreds of millions of computers on the Internet that are unprotected because their owners have no idea how to do so, or they don’t contain information that their owners consider especially valuable.
I got into cryptography partly because I thought it would help reduce the risk of a bad Singularity. But while cryptography turned out to work relatively well (against humans anyway), the rest of the field of computer security is in terrible shape, and I see little hope that the situation would improve substantially in the next few decades.
That’s outside my specialization of cryptography, so I don’t have too much to say about it. I do remember reading about the object-capability model and the E language years ago, and thinking that it sounded like a good idea, but I don’t know why it hasn’t been widely adopted yet. I don’t know if it’s just inertia, or whether there are some downsides that its proponents tend not to publicize.
In any case, it seems unlikely that any security solution can improve the situation enough to substantially reduce the risk of a bad Singularity at this point, without a huge cost. If the cause of existential-risk reduction had sufficient resources, one project ought to be to determine the actual costs and benefits of approaches like this and whether it would be feasible to implement (i.e., convince society to pay whatever costs are necessary to make our networks more secure), but given the current reality I think the priority of this is pretty low.
Thanks. I just wanted to know if this was the sort of thing you had in mind, and whether you knew any technical reasons why it wouldn’t do what you want.
This is one thing I keep a close-ish eye on. One of the major proponents of this sort of security has recently gone to work for Microsoft on their research operating systems. So it might come along in a bit.
As to why it hasn’t caught on, it is partially inertia and partially it requires more user interaction/understanding of the systems than ambient authority. Good UI and metaphors can decrease that cost though.
The ideal would be to have a self-maintaining computer system with this sort of security system. However a good self-maintaining system might be dangerously close to a self-modifying AI.
There’s also a group of proponents of this style working on Caja at Google, including Mark Miller, the designer of E. And some people at HP.
Actually, all these people talk to one another regularly. They don’t have a unified plan or a single goal, but they collaborate with one another frequently. I’ve left out several other people who are also trying to find ways to push in the same direction. Just enough names and references to give a hint. There are several mailing lists where these issues are discussed. If you’re interested, this is probably the one to start with.
I meant it more as an indication that Microsoft are working in the direction of better secured OSes already, rather than his being a pivotal move. Coyotos might get revived when the open source world sees what MS produces and need to play catch up.
That assumes MS ever goes far enough that the FLOSS world feels any gap that could be caught up.
MS rarely does so; the chief fruit of 2 decades of Microsoft Research sponsorship of major functional language researchers like Simon Marlow or Simon Peyton-Jones seems to be… C# and F#. The former is your generic quasi-OO imperative language like Python or Java, with a few FPL features sprinkled in, and the latter is a warmed-over O’Caml: it can’t even make MLers feel like they need to catch up, much less Haskellers or FLOSS users in general.
The FPL OSS community is orders of magnitude more vibrant than the OSS secure operating system research. I don’t know of any living projects that use the object-capability model at the OS level (plenty of language level and higher level stuff going on).
General techniques against worms might include: isolated networks, host diversity, rate-limiting, and traffic anomaly detection.
Are these low-cost/high-return existential-risk reduction techniques?
I can’t imagine getting any return, at any cost, from protection against the spread of AI on the Internet (even in a perfect world, an AI can still produce value, e.g. earn money online, and so buy access to more computing resources).
Your statement sounds a bit overgeneralized—but you probably have a point.
Still, would you indulge me in some idle speculation? Maybe there could be a species of aliens that evolved to intelligence by developing special microbe-infested organs (which would be firewalled somehow from the rest of the alien themselves) and incentivizing the microbial colonies somehow to solve problems for the host.
Maybe we humans evolved to intelligence that way—after all, we do have a lot of bacteria in our guts. But then, all the evidence that we have pointing to brains as information-processing centers would have to be wrong. Maybe brains are the firewall organ! Memes are sort of like microbes, and they’re pretty well “firewalled” (genetic engineering is a meme-complex that might break out of the jail).
The notion of creating an ecology of entities, and incentivizing them to produce things that we value, might be a reasonable strategy, one that we humans have been using for some time.
I can’t see how this comment relates to the previous one. It seems to start an entirely new conversation. Also, the metaphor with brains and microbes doesn’t add understanding for me, I can only address the last paragraph, on its own.
The notion of creating an ecology of entities, and incentivizing them to produce things that we value, might be a reasonable strategy, one that we humans have been using for some time.
The crucial property of AIs that makes them a danger is (eventual) autonomy, not even rapid coming to power. Once the AI, or a society (“ecology”) of AIs, becomes sufficiently powerful to ignore vanilla humans, its values can’t be significantly influenced, and most of the future is going to be determined by those values. If those values are not good from the point of view of human values, the future is lost to us; it has no goodness. The trick is to make sure that the values of such an autonomous entity are a very good match with our own, at some point where we still have a say in what they are.
Talk of “ecologies” of different agents creates an illusion of continuous control. The standard intuitive picture has little humans at the lower end, with a network of gradually more powerful and/or different agents stretching out from them. But how much is really controlled by that node? Its power has no way of “amplifying” as you go through the network: if only humans and a few other agents share human values, these values will receive very little payoff. This is also not sustainable: over time, one should expect the preferences of agents with more power to gain in influence (which is what “more power” means).
The best way to win this race is to not create different-valued competitors that you don’t expect to be able to turn into your own almost-copies, which seems infeasible for all the scenarios I know of. FAI is exactly about devising such a copycat, and if you can show how to do that with “ecologies”, all power to you, but I don’t expect anything from this line of thought.
To explain the relation, you said: “I can’t imagine having any return [...from this idea...] even in a perfect world, AI can still produce value, e.g. earn money online.”
I was trying to suggest that in fact there might be a path to Friendliness by installing sufficient safeguards that the primary way a software entity could replicate or spread would be by providing value to humans.
In the comment above, I explained why what AI does is irrelevant, as long as it’s not guaranteed to actually have the right values: once it goes unchecked, it just reverts to whatever it actually prefers, be it in a flurry of hard takeoff or after a thousand years of close collaboration. “Safeguards”, in every context I saw, refer to things that don’t enforce values, only behavior, and that’s not enough. Even the ideas for enforcement of behavior look infeasible, but the more important point is that even if we win this one, we still lose eventually with such an approach.
My symbiotic-ecology-of-software-tools scenario was not a serious proposal as the best strategy to Friendliness. I was trying to increase the plausibility of SOME return at SOME cost, even given that AIs could produce value.
I’m afraid I see the issue as clear-cut, you can’t get “some” return, you can only win or lose (probability of getting there is of course more amenable to small nudges).
Making such a statement significantly increases the standard of reasoning I expect from a post. That is, I expect you to be either right or at least a step ahead of the one with whom you are communicating.
I intend to participate in the StarCraft AI Competition. I figured there are lots of AI buffs here that could toss some pieces of wisdom at me. Shower me with links you deem relevant and recommend books to read.
Generally, what approaches should I explore and what dead ends should I avoid? Essentially, tell me how to discard large portions of potential-StarCraft-AI thingspace quickly.
Specifically, the two hardest problems that I see are:
Writing an AI that can learn how to move units efficiently on its own. Either by playing against itself or just searching the game tree. And I’m not just looking for what the best StarCraft players do—I’m searching for the optimum.
The exact rules of the game are not known. By exact I mean Laplace’s Demon exact. It would take me way too long to discover them through experimentation and disassembly of the StarCraft executable. So, I either have to somehow automate this discovery or base my AI on a technique that doesn’t need that.
Pay attention to the timing of your edit/compile/test cycle time. Efforts to get this shorter pay off both in more iterations and in your personal motivation (interacting with a more-responsive system is more rewarding). Definitely try to get it under a minute.
A good dataset is incredibly valuable. When starting to attack a problem—both the whole thing, and subproblems that will arise—build a dataset first. This would be necessary if you are doing any machine learning, but it is still incredibly helpful even if you personally are doing the learning.
Succeed “instantaneously”—and don’t break it. Make getting to “victory”—a complete entry—your first priority and aim to be done with it in a day or a week. Often, there’s temptation to do a lot of “foundational” work before getting something complete working, or a “big refactoring” that will break lots of things for a while. Do something (continuous integration or nightly build-and-test) to make sure that you’re not breaking it.
Great! That competition looks like a lot of fun, and I wish you the best of luck with it.
As for advice, perhaps the best I can give you is to explain the characteristics the winning program will have.
It will make no, or minimal, use of game tree search. It will make no, or minimal, use of machine learning (at best it will do something like tuning a handful of scalar parameters with a support vector machine). It will use pathfinding, but not full pathfinding; corners will be cut to save CPU time. It will not know the rules of the game. Its programmer will probably not know the exact rules either, just an approximation discovered by trial and error. In short, it will not contain very much AI.
One reason for this is that it will not be running on a supercomputer, or even on serious commercial hardware; it will have to run in real time on a dinky beige box PC with no more than a handful of CPU cores and a few gigabytes of RAM. Even more importantly, only a year of calendar time is allowed. That is barely enough time for nontrivial development. It is not really enough time for nontrivial research, let alone research and development.
In short, you have to decide whether your priority is Starcraft or AI. I think it should be the latter, because that’s what has actual value at the end of the day, but it’s a choice you have to make. You just need to understand that the reward from the latter choice will be in long-term utility, not in winning this competition.
That’s disheartening, but do give more evidence. To counter: participants of DARPA’s Grand Challenge had just a year too, and their task was a notch harder. And they did use machine learning and other fun stuff.
Also, I think a modern gaming PC packs a hell of a punch. Especially with the new graphics cards that can run arbitrary code. But good catch—I’ll inquire about the specs of the machines the competition will be held on.
The Grand Challenge teams didn’t go from zero to victory in one year. They also weren’t one-man efforts.
That having been said, and this is a reply to RobinZ also, for more specifics you really want to talk to someone who has written a real-time strategy game AI, or at least worked in the games industry. I recommend doing a search for articles or blog posts written by people with such experience. I also recommend getting hold of some existing game AI code to look at. (You won’t be copying the code, but just to get a feel for how things are done.) Not chess or Go, those use completely different techniques. Real-time strategy games would be ideal, but failing that, first-person shooters or turn-based strategy games—I know there are several of the latter at least available as open source.
Oh, and Johnicholas gives good advice, it’s worth following.
The Grand Challenge teams didn’t go from zero to victory in one year.
Stanford’s team did.
They also weren’t one-man efforts.
Neither is mine.
I do not believe I can learn much from existing RTS AIs because their goal is entertaining the player instead of winning. In fact, I’ve never met an AI that I can’t beat after a few days of practice. They’re all the same: build a base and repeatedly throw groups of units at the enemy’s defensive line until run out of resources, mindlessly following the same predictable route each time. This is true for all of Command & Conquer series, all of Age of Empires series, all of Warcraft series, and StarCraft too. And those are the best RTS games in the world with the biggest budgets and development teams.
Was the development objective of these games to make the best AI they could, one that would win in all scenarios? I doubt that would be the most fun for human players to play against. Maybe humans wanted a predictable opponent.
In games with many players (where alliances are allowed), you could make the AIs more likely to ally with each other and to gang up on the human player. This could make an 8-player game nearly impossible. But the goal is not to beat the human. The goal is for the AI to feel real (human), and be fun.
As you point out, the goal in this contest is very different.
Ah, I had assumed they must have been working on the problem before the first one, but their webpage confirms your statement here. I stand corrected!
Neither is mine.
Good, that will help.
I do not believe I can learn much from existing RTS AIs because their goal is entertaining the player instead of winning. In fact, I’ve never met an AI that I can’t beat after a few days of practice. They’re all the same: build a base and repeatedly throw groups of units at the enemy’s defensive line until run out of resources, mindlessly following the same predictable route each time.
Yeah. Personally I never found that very entertaining :-) If you can write one that does better, maybe the industry might sit up and take notice. Best of luck with the project, and let us know how it turns out.
What’s the recommended way to format quoted fragments on this site to distinguish them from one’s own text? I tried copy pasting CannibalSmith’s comment, but that copied as indentation with four spaces, which when I used it, gave a different result.
The Grand Challenge teams didn’t go from zero to victory in one year. They also weren’t one-man efforts.
That having been said, and this is a reply to RobinZ also, for more specifics you really want to talk to someone who has written a real-time strategy game AI, or at least worked in the games industry. One thing I can say is, get hold of some existing game AI code to look at. (You won’t be copying the code, but just to get a feel for how things are done.) Not chess or Go, those use completely different techniques. Real-time strategy games would be ideal, but failing that, first-person shooters or turn-based strategy games—I know there are several of the latter at least available as open source.
Oh, and Johnicholas gives good advice, it’s worth following.
Strictly speaking, this reads a lot like advice to sell nonapples. I’ll grant you that it’s probably mostly true, but more specific advice might be helpful.
You might also look at some of the custom AIs for Total Annihilation and/or Supreme Commander, which are reputed to be quite good.
Ultimately though the winner will probably come down to someone who knows Starcraft well enough to thoroughly script a bot, rather than more advanced AI techniques. It might be easier to use proper AI in the restricted tournaments, though.
I’m going to repeat my request (for the last time) that the most recent Open Thread have a link in the bar up top, between ‘Top’ and ‘Comments’, so that people can reach it a tad easier. (Possible downside: people could amble onto the site and more easily post time-wasting nonsense.)
I am posting this in the open thread because I assume that somewhere in the depths of posts and comments there is an answer to the question:
If someone thought we lived in an internally consistent simulation that is undetectable and inescapable, is it even worth discussing? Wouldn’t the practical implications of such a simulation imply the same things as the material world/reality/whatever you call it?
Would it matter if we dropped “undetectable” from the proposed simulation? At what point would it begin to matter?
In two recent comments [1][2], it has been suggested that to combine ostensibly Bayesian probability assessments, it is appropriate to take the mean on the log-odds scale. But Bayes’ Theorem already tells us how we should combine information. Given two probability assessments, we treat one as the prior, sort out the redundant information in the second, and update based on the likelihood of the non-redundant information. This is practically infeasible, so we have to do something else, but whatever else it is we choose to do, we need to justify it as an approximation to the infeasible but correct procedure. So, what is the justification for taking the mean on the log-odds scale? Is there a better but still feasible procedure?
An independent piece of evidence moves the log-odds a constant additive amount regardless of the prior. Averaging log-odds amounts to moving 2⁄3 of that distance if 2⁄3 of the people have the particular piece of evidence. It may behave badly if the evidence is not independent, but if all you have are posteriors, I think it’s the best you can do.
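As a minimal sketch of what that averaging looks like in practice (the three posterior probabilities below are made up purely for illustration):

    import math

    def log_odds(p):
        # probability -> log-odds
        return math.log(p / (1 - p))

    def from_log_odds(lo):
        # log-odds -> probability
        return 1 / (1 + math.exp(-lo))

    # made-up posteriors from three assessors of the same hypothesis
    posteriors = [0.60, 0.75, 0.90]

    # pool by taking the arithmetic mean on the log-odds scale
    mean_lo = sum(log_odds(p) for p in posteriors) / len(posteriors)
    print(round(from_log_odds(mean_lo), 2))  # ~0.77, between the individual estimates

Note that this is a pooling heuristic, not Bayes’ theorem itself; if the assessors’ evidence overlaps heavily it will count some of it more or less than it deserves.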
It has been awhile since I have been around, so please ignore if this has been brought up before.
I would appreciate it if offsite links were a different color. The main reason is the way I skim online articles. Links are generally the more important text, and if I see a link for [interesting topic] it helps me to know at a glance that there will be a good read with a LessWrong discussion at the end, as opposed to a link to Amazon where I get to see the cover of a book.
Firefox (or maybe one of the million extensions that I’ve downloaded and forgotten about) has a feature where, if you mouseover a link, the URL linked to will appear in the lower bar of the window. A different color would be easier, though.
Ivan Sutherland (inventor of Sketchpad—the first computer-aided drawing program) wrote about how “courage” feels, internally, when doing research or technological projects.
“[...] When I get bogged down in a project, the failure of my courage to go on never feels to me like a failure of courage, but always feels like something entirely different. One such feeling is that my research isn’t going anywhere anyhow, it isn’t that important. Another feeling involves the urgency of something else. I have come to recognize these feelings as “who cares” and “the urgent drives out the important.” [...]”
I’m looking for a certain quote I think I may have read on either this blog or Overcoming Bias before the split. It goes something like this: “You can’t really be sure evolution is true until you’ve listened to a creationist for five minutes.”
“They told them that half the test generally showed gender differences (though they didn’t mention which gender it favored), and the other half didn’t. Women and men did equally well on the supposedly gender-neutral half. But on the sexist section, women flopped. They scored significantly lower than on the portion they thought was gender-blind.”
Big Edit: Jack formulated my ideas better, so see his comment. This was the original:
The fact that the universe hasn’t been noticeably paperclipped has got to be evidence for a) the unlikelihood of superintelligences, b) quantum immortality, c) our universe being the result of a non-obvious paperclipping (the theists were right after all, and the fine-tuned universe argument is valid), d) the non-existence of intelligent aliens, or e) that superintelligences tend not to optimize things that are astronomically visible (related to c). Which of these scenarios is most likely? Related question: If we built a superintelligence without worrying about friendliness or morality at all, what kind of things would it optimize? Can we even make a guess? Would it be satisfied to be a dormant Laplace’s Demon?
Restructuring, since the fact that the universe hasn’t been noticeably paperclipped can’t possibly be considered evidence for (c).
The universe has either been paperclipped (1) or it hasn’t been (2).
If (1):
(A) we have observed paperclipping and not realized it (someone was really into stars, galaxies and dark matter)
(B) Our universe is the result of paperclipping (theists were right, sort of)
(C) Superintelligences tend not to optimize things that are astronomically visible.
If (2)
(D) Super-intelligences are impossible.
(E) Quantum immortality true.
(F) No intelligent aliens.
(G) Some variety of simulation hypothesis is true.
(H) Galactic aliens exist but have never constructed a super-intelligence due to a well-enforced prohibition on AI construction/research, an evolved deficiency in thinking about minds as physical objects (substance dualism is far more difficult for them to avoid than it is for us), or some other reason that we can’t fathom.
(I) Friendliness is easy + Alien ethics doesn’t include any values that lead to us noticing them.
I like the color red. When people around me wear red, it makes me happy—when they wear any other color, it makes me sad. I crunch some numbers and tell myself, “People wear red about 15% of the time, but they wear blue 40% of the time.” I campaign for increasing the amount that people wear red, but my campaign fails miserably.
“It’d be great if I could like blue instead of red,” I tell myself. So I start trying to get myself to like blue—I choose blue over red whenever possible, surround myself in blue, start trying to put blue in places where I experience other happinesses so I associate blue with those things, etc.
What just happened? Did a belief or a preference change?
By coincidence, two blog posts went up today that should be of interest to people here.
Gene Callahan argues that Bayesianism lacks the ability to smoothly update beliefs as new evidence arrives, forcing the Bayesian to irrationally reset priors.
Tyler Cowen offering a reason why the CRU hacked emails should raise our confidence in AGW. An excellent exercise in framing an issue in Bayesian terms. Also discusses metaethical issues related to bending rules.
(Needless to say, I don’t agree with either of these arguments, but they’re great for application of your own rationality.)
Tyler Cowen offering a reason why the CRU hacked emails should raise our confidence in AGW…
That’s not what he is saying. His argument is not that the hacked emails actually should raise our confidence in AGW. His argument is that there is a possible scenario under which this should happen, and the probability that this scenario is true is not infinitesimal. The alternative possibility—that the scientists really are smearing the opposition with no good reason—is far more likely, and thus the net effect on our posteriors is to reduce them—or at least keep them the same if you agree with Robin Hanson.
Here’s (part of) what Tyler actually said:
Another response, not entirely out of the ballpark, is: 2. “These people behaved dishonorably. They must have thought this issue was really important, worth risking their scientific reputations for. I will revise upward my estimate of the seriousness of the problem.”
I am not saying that #2 is correct, I am only saying that #2 deserves more than p = 0. Yet I have not seen anyone raise the possibility of #2.
Tyler Cowen offering a reason why the CRU hacked emails should raise our confidence in AGW...
That’s not what he is saying. His argument is not that the hacked emails actually should raise our confidence in AGW. His argument is that there is a possible scenario under which this should happen, and the probability that this scenario is true is not infinitesimal
Right—that is what I called “giving a reason why the hacked emails...” and I believe that characterization is accurate: he’s described a reason why they would raise our confidence in AGW.
The alternative possibility—that the scientists really are smearing the opposition with no good reason—is far more likely, and thus the net effect on our posteriors is to reduce them
This is a reason why Tyler’s argument for a positive Bayes factor is in error, not a reason why my characterization was inaccurate.
The alternative possibility—that the scientists really are smearing the opposition with no good reason—is far more likely, and thus the net effect on our posteriors is to reduce them
This is a reason why Tyler’s argument for a positive Bayes factor is in error, not a reason why my characterization was inaccurate.
Tyler isn’t arguing for a positive Bayes factor. (I assume that by “Bayes factor” you mean the net effect on the posterior probability). He posted a followup because many people misunderstood him. Excerpt:
I did not try to justify any absolute level of belief in AGW, or government spending for that matter. I’ll repeat my main point about our broader Bayesian calculations:
I am only saying that #2 [scientists behaving badly because they think the future of the world is at stake] deserves more than p = 0.
edited to add:
I’m not sure I understand your criticism, so here’s how I understood his argument. There are two major possibilities worth considering:
1) “These people behaved dishonorably.”
and
2) “These people behaved dishonorably. They must have thought this issue was really important, worth risking their scientific reputations for.”
Then the argument goes that the net effect of 1 is to lower our posteriors for AGW while the net effect of 2 is to raise them.
Finally, p(2 is true) != 0.
This doesn’t tell us the net effect of the event on our posteriors—for that we need p(1), p(2) and p(anything else). Presumably, Tyler thinks p(anything else) ~ 0, but that’s a side issue.
Is this how you read him? If so, which part do you disagree with?
(I assume that by “Bayes factor” you mean the net effect on the posterior probability).
I’m using the standard meaning: for a hypothesis H and evidence E, the Bayes factor is p(E|H)/p(E|~H). It’s easiest to think of it as the factor you multiply your prior odds by to get your posterior odds. (Odds, not probabilities.) Which means I goofed and said “positive” when I meant “above unity” :-/
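Spelled out, that is just the odds form of Bayes’ theorem (standard notation, nothing specific to this thread):

\[
\frac{P(H \mid E)}{P(\lnot H \mid E)} \;=\; \frac{P(H)}{P(\lnot H)} \times \frac{P(E \mid H)}{P(E \mid \lnot H)},
\]

where the last factor is the Bayes factor: above unity, E supports H; below unity, it counts against H; at exactly unity, E is uninformative.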
I read Tyler as not knowing what he’s talking about. For one thing, do you notice how he’s trying to justify why something should have p>0 under a Bayesian analysis … when Bayesian inference already requires p’s to be greater than zero?
In his original post, he was explaining a scenario under which seeing fraud should make you raise your p(AGW). Though he’s not thinking clearly enough to say it, this is equivalent to describing a scenario under which the Bayes factor is greater than unity. (I admit I probably shouldn’t have said “argument for >1 Bayes factor”, but rather, “suggestion of plausibility of >1 Bayes factor”)
That’s the charitable interpretation of what he said. If he didn’t mean that, as you seem to think, then he’s presenting metrics that aren’t helpful, and this is clear when he thinks it’s some profound insight to put p(fraud due to importance of issue) greater than zero. Yes, there are cases where AGW is true despite this evidence—but what’s the impact on the Bayes factor?
I think we are arguing past each other, but it’s about interpreting someone else so I’m not that worried about it. I’ll add one more bullet to your list to clarify what I think Tyler is saying. If that doesn’t resolve it, oh well.
If we know with certainty that the scenario Tyler described is true, that is, if we know that the scientists fudged things because they knew that AGW was real and that the consequences were worth risking their reputations on, then Climategate has a Bayes factor above 1.
I don’t think Tyler was saying anything more than that. (Well, and P(his scenario) is non-negligible)
I think this is close to the question that has been lurking in my mind for some time: Why optimize our strategies to achieve what we happen to want, instead of just modifying what we want?
Suppose, for my next question, that it was trivial to modify what we want. Is there some objective meta-goal we really do need to pay attention to?
But in general the answer is another question: Why would you want to do that?
But to expound on it a bit further, if I want to drive to Dallas to see a band play I can (a) figure out a strategy to get there or (b) stop wanting to go. Assuming that (b) is even possible, it isn’t actually a solution to the problem of how to get to Dallas. Applying the same principle to all Wants does not provide you with a way to always get what you want. Instead, it helps you avoid not getting what you want.
If you wanted nothing more than to avoid disappointment by not getting what you want, then the safest route is to never desire anything that isn’t a sure thing. Or simply not want anything at all. But a simpler route to this whole process is to ditch that particular Want first. The summation is a bit wordy and annoying, but it ends like this:
Instead of wanting to avoid not getting what you want by not wanting anything else, simply forgo the want that is pushing you to avoid not getting what you want.
With parens:
Instead of (wanting to avoid {not getting what you want} by not wanting anything else), simply forgo (the want that is pushing you to avoid {not getting what you want}).
In other words, you can achieve the same result that modifying your wants would create by not getting too disappointed if you don’t get what you want. Or, don’t take it so hard when things don’t go your way.
Hopefully that made some sense (and I got it right.)
Thank you for your responses, but I guess my question wasn’t clear. I was asking about purpose. If there’s no point in going to Dallas, why care about wanting to go to Dallas?
This is my problem if there’s no objective value (that I tried to address more directly here). If there’s no value to anything I might want, why care about what I want, much less strive for what I want?
I don’t know if there’s anything to be done. Whining about it is pointless. If anyone has a constructive direction, please let me know. I picked up Sartre’s “Truth and Existence” rather randomly; maybe it will lead in a different (hopefully more interesting) direction.
I second the comments above. The answer Alicorn and Furcas give sounds really shallow compared to a Framework Of Objective Value; but when I became convinced that there really is no FOOV, I was relieved to find that I still, you know, wanted things, and these included not just self-serving wants, but things like “I want my friends and family to be happy, even in circumstances where I couldn’t share or even know of their happiness”, and “I want the world to become (for example) more rational, less violent, and happier, even if I wouldn’t be around to see it— although if I had the chance, I’d rather be around to see it, of course”.
It doesn’t sound as dramatic or idealistic as a FOOV, but the values and desires encoded in my brain and the brains of others have the virtue of actually existing; and realizing that these values aren’t written on a stone tablet in the heart of the universe doesn’t rob them of their importance to the life I live.
If there’s no value to anything I might want, why care about what I want, much less strive for what I want?
Because in spite of everything, you still want it.
Or: You can create value by wanting things. If things have value, it’s because they matter to people—and you’re one of those, aren’t you? Want things, make them important—you have that power.
Because in spite of everything, you still want it.
Maybe I wouldn’t. There have been times in my life when I’ve had to struggle to feel attached to reality, because it didn’t feel objectively real. Now if value isn’t objectively real, I might find myself again feeling indifferent, like one part of myself is carrying on eating and driving to work, perhaps socially moral, perhaps not, while another part of myself is aware that nothing actually matters. I definitely wouldn’t feel integrated.
I don’t want to burden anyone with what might be idiosyncratic sanity issues, but I do mention them because I don’t think they’re actually all that idiosyncratic.
I thought this was a good question, so I took some time to think about it. I am better at recognizing good definitions than generating them, but here goes:
‘Objective’ and ‘subjective’ are about the relevance of something across contexts.
Suppose that there is some closed system X. The objective value of X is its value outside X. The subjective value of X is its value inside X.
For example, if I go to a party and we play a game with play money, then the play money has no objective value. I might care about the game, and have fun playing it with my friends, but it would be a choice whether or not to place any subjective attachment to the money; I think that I wouldn’t and would be rather equanimous about how much money I had in any moment. If I went home and looked carefully at the money to discover that it was actually a foreign currency, then it turns out that the money had objective value after all.
Regarding my value dilemma, the system X is myself. I attach value to many things in X. Some of this attachment feels like a choice, but I hazard that some of this attachment is not really voluntary. (For example, I have mirror neurons.) I would call these attachments ‘intellectual’ and ‘visceral’ respectively.
Generally, I do not place much value on subjective experience. If something only has value in ‘X’, then I have a tendency to negate that as a motivation. I’m not altruistic, I just don’t feel like subjective experience is very important. Upon reflection, I realize that re: social norms, I actually act rather selfishly when I think I’m pursuing something with objective value.
If there’s no objective value, then at the very least I need to do a lot of goal reorganization; losing my intellectual attachments unless they can be recovered as visceral attachments. At the worst, I might feel increasingly like I’m a meaningless closed system of self-generated values. At this point, though, I doubt I’m capable of assimilating an absence of objective value on all levels—my brain might be too old—and for now I’m just academically interested in how self-validation of value works without feeling like it’s an illusion.
I know this wasn’t your main point, but money doesn’t have objective value, either, by that definition. It only has value in situations where you can trade it for other things. It’s extremely common to encounter such situations, so the limitation is pretty ignorable, but I suspect you’re at least as likely to encounter situations where money isn’t tradeable for goods as you are to encounter situations where your own preferences and values aren’t part of the context.
I used the money analogy because it has a convenient idea of value.
While debating about the use of that analogy, I had already considered it ironic that the US dollar hasn’t had “objective” value since it was disconnected from the value of gold in 1933. Not that gold has objective value unless you use it to make a conductor. But at that level, I start losing track of what I mean by ‘value’. Anyway, it is interesting that the value of the US dollar is exactly an example of humans creating value, echoing Alicorn’s comment.
Real money does have objective value relative to the party, since you can buy things on your way home, but no objective value outside contexts where the money can be exchanged for goods.
If you are a closed system X, and something within system X only has objective value inasmuch as something outside X values it, then does the fact that other people care about you and your ability to achieve your goals help? They are outside X, and while their first-order interests probably never match yours perfectly, there is a general human tendency to care about others’ goals qua others’ goals.
then does the fact that other people care about you and your ability to achieve your goals help?
If you mean that I might value myself and my ability to achieve my goals more because I value other people valuing that, then it does not help. My valuation of their caring is just as subjective as any other value I would have.
On the other hand, perhaps you were suggesting that this mutual caring could be a mechanism for creating objective value, which is kind of in line with what I think. For that matter, I think that my own valuation of something, even without the valuation of others, does create objective value—but that’s a FOOM. I’m trying to imagine reality without that.
If you mean that I might value myself and my ability to achieve my goals more because I value other people valuing that, then it does not help. My valuation of their caring is just as subjective as any other value I would have.
That’s not what I mean. I don’t mean that their caring about you/your goals makes things matter because you care if they care. I mean that if you’re a closed system, and you’re looking for a way outside of yourself to find value in your interests, other people are outside you and may value your interests (directly or indirectly). They would carry on doing this, and this would carry on conferring external value to you and your interests, even if you didn’t give a crap or didn’t know anybody else besides you existed—how objective can you get?
On the other hand, perhaps you were suggesting that this mutual caring could be a mechanism for creating objective value
I don’t think it’s necessary—I think even if you were the only person in the universe, you’d matter, assuming you cared about yourself—and I certainly don’t think it has to be really mutual. Some people can be “free riders” or even altruistic, self-abnegating victims of the scheme without the system ceasing to function. So this is a FOOV? So now it looks like we don’t disagree at all—what was I trying to convince you of, again?
So this is a FOOV? So now it looks like we don’t disagree at all—what was I trying to convince you of, again?
I guess I’m really not sure. I’ll have to think about it a while. What will probably happen is that next time I find myself debating with someone asserting there is no Framework of Objective Value, I will ask them about this case: whether minds can create objective value by their valuing. I will also ask them to clarify what they mean by objective value.
I’m either not sure what you’re trying to do or why you’re trying to do it. What do you mean by FOOM here? Why do you want to imagine reality without it? How does people caring about each other fall into that category?
Maybe I wouldn’t. There have been times in my life when I’ve had to struggle to feel attached to reality, because it didn’t feel objectively real. Now if value isn’t objectively real, I might find myself again feeling indifferent, like one part of myself is carrying on eating and driving to work, perhaps socially moral, perhaps not, while another part of myself is aware that nothing actually matters. I definitely wouldn’t feel integrated.
Yeah, I think I can relate to that. This edges very close to an affective death spiral, however, so watch the feedback loops.
The way I argued myself out of mine was somewhat arbitrary and I don’t have it written up yet. The basic idea was taking the concepts that I exist and that at least one other thing exists and, generally speaking, existence is preferred over non-existence. So, given that two things exist and can interact and both would rather be here than not be here, it is Good to learn the interactions between the two so they can both continue to exist. This let me back into accepting general sensory data as useful and it has been a slow road out of the deep.
I have no idea if this is relevant to your questions, but since my original response was a little off maybe this is closer?
The way I argued myself out of mine was somewhat arbitrary and I don’t have it written up yet.
This paragraph (showing how you argued yourself out of some kind of nihilism) is completely relevant, thanks. This is exactly what I’m looking for.
The basic idea was taking the concepts that I exist and that at least one other thing exists and, generally speaking, existence is preferred over non-existence.
What do you mean by, “existence is preferred over non-existence”? Does this mean that in the vacuum of nihilism, you found something that you preferred, or that it’s better in some objective sense?
My situation is that if I try to assimilate the hypothesis that there is no objective value (or, rather, I anticipate trying to do so), then immediately I see that all of my preferences are illusions. It’s not actually any better if I exist or don’t exist, or if the child is saved from the tracks or left to die. It’s also not better if I choose to care subjectively about these things (and be human) or just embrace nihilism, if that choice is real. I understand that caring about certain sorts of these things is the product of evolution, but without any objective value, I also have no loyalty to evolution and its goals—what do I care about the values and preferences it instilled in me?
The question is: how has evolution actually designed my brain? In the state of ‘nihilism’, does my brain (a) abort intellectual thinking (there’s no objective value to truth anyway) and enter a default mode of material hedonism that acts based on preferences and impulses just because they exist and that’s what I’m programmed to do, or (b) does it cling to its ability to think beyond that level of programming, and develop this separate identity as a thing that knows that nothing matters?
Perhaps I’m wrong, but your decision to care about the preference of existence over non-existence and moving on from there appears to be an example of (a). Or perhaps a component (b) did develop and maintain awareness of nihilism, but obviously that component couldn’t be bothered posting on LW, so I heard a reply from the part of you that is attached to your subjective preferences (and simply exists).
Perhaps I’m wrong, but your decision to care about the preference of existence over non-existence and moving on from there appears to be an example of (a). Or perhaps a component (b) did develop and maintain awareness of nihilism, but obviously that component couldn’t be bothered posting on LW, so I heard a reply from the part of you that is attached to your subjective preferences (and simply exists).
Well, my bit about existence and non-existence stemmed from a struggle with believing that things did or did not exist. I have never considered nihilism to be a relevant proposal: It doesn’t tell me how to act or what to do. It also doesn’t care if I act as if there is an objective value attached to something. So… what is the point in nihilism?
To me, nihilism seems like a trap for other philosophical arguments. If those arguments and moral ways lead them to a logical conclusion of nihilism, then they cannot escape. They are still clinging to whatever led them there but say they are nihilists. This is the death spiral: believing that nothing matters but acting as if something does.
If I were to actually stop and throw away all objective morality, value, etc., then I would expect a realization that any belief in nihilism would have to go away too. At this point my presuppositions about the world reset and… what? It is this behavior that is similar to my struggles with existence.
The easiest summation of my belief that existence is preferred over non-existence is that existence can be undone and non-existence is permanent. If you want more I can type it up. I don’t know how helpful it will be against nihilism, however.
This edges very close to an affective death spiral,
Agreed. I find that often it isn’t so much that I find the thought process intrinsically pleasurable (affective), but that in thinking about it too much, I over-stimulate the trace of the argument so that after a while I can’t recall the subtleties and can’t locate the support. After about 7 comments back and forth, I feel like a champion for a cause (no objective values RESULTS IN NIHILISM!!) that I can’t relate to anymore. Then I need to step back and not care about it for a while, and maybe the cause will spontaneously generate again, or perhaps I’ll have learned enough weighting in another direction that the cause never takes off again.
Feel free to tell me to mind my own business, but I’m curious. That other part: If you gave it access to resources (time, money, permission), what do you expect that it would do? Is there anything about your life that it would change?
Jack also wrote, “The next question is obviously “are you depressed?” But that also isn’t any of my business so don’t feel obligated to answer.”
I appreciate this sensitivity, and see where it comes from and why it’s justified, but I also find it interesting that interrogating personal states is perceived as invasive, even as this is the topic at hand.
However, I don’t feel like it’s so personal, and I will explain why. My goals here are to understand how the value validation system works outside FOOM. I come from the point of view that I can’t do this very naturally, and most people I know also could not. I try to identify where thought gets stuck and try to find general descriptions of it that aren’t so personal. I think feeling like I have inconsistent pieces (i.e., like I’m going insane) would be a common response to the anticipation of a non-FOOM world.
That other part: If you gave it access to resources (time, money, permission), what do you expect that it would do? Is there anything about your life that it would change?
To answer your question: a while ago, I thought my answer would be a definitive “no, this awareness wouldn’t feel any motivation to change anything”. I had written in my journal that even if there was a child lying on the tracks, this part of myself would just look on analytically. However, I felt guilty about this after a while and I’ve since repressed the experience of this hypothetical awareness, so that it’s more difficult to recall.
But recalling, it felt like this: it would be “horrible” for the child to die on the tracks. However, what is “horrible” about horrible? There’s nothing actually horrible about it. Without some terminal value behind the value (for example, I don’t think I ever thought a child dying on the tracks was objectively horrible, but that it might be objectively horrible for me not to feel like horrible was horrible at some level of recursion) it seems that the value buck doesn’t get passed, it doesn’t stop, it just disappears.
I appreciate this sensitivity, and see where it comes from and why it’s justified, but I also find it interesting that interrogating personal states is perceived as invasive, even as this is the topic at hand.
Actually, I practically never see it as invasive; I’m just aware that other people sometimes do, and try to act accordingly. I think this is a common mindset, actually: It’s easier to put up a disclaimer that will be ignored 90-99% of the time than it is to deal with someone who’s offended 1-10% of the time, and generally not worth the effort of trying to guess whether any given person will be offended by any given question.
I think feeling like I have inconsistent pieces (i.e., like I’m going insane) would be a common response to the anticipation of a non-FOOM world.
I’m not sure how you came to that conclusion—the other sentences in that paragraph didn’t make much sense to me. (For one thing, one of us doesn’t understand what ‘FOOM’ means. I’m not certain it’s you, though.) I think I know what you’re describing, though, and it doesn’t appear to be a common response to becoming an atheist or embracing rationality (I’d appreciate if others could chime in on this). It also doesn’t necessarily mean you’re going insane—my normal brain-function tends in that direction, and I’ve never seen any disadvantage to it. (This old log of mine might be useful, on the topic of insanity in general. Context available on request; I’m not at the machine that has that day’s logs in it at the moment. Also, disregard the username, it’s ooooold.)
But recalling, it felt like this: it would be “horrible” for the child to die on the tracks. However, what is “horrible” about horrible? There’s nothing actually horrible about it. Without some terminal value behind the value (for example, I don’t think I ever thought a child dying on the tracks was objectively horrible, but that it might be objectively horrible for me not to feel like horrible was horrible at some level of recursion) it seems that the value buck doesn’t get passed, it doesn’t stop, it just disappears.
My Buddhist friends would agree with that. Actually, I pretty much agree with it myself (and I’m not depressed, and I don’t think it’s horrible that I don’t see death as horrible, at any level of recursion). What most people seem to forget, though, is that the absence of a reason to do something isn’t the same as the presence of a reason not to do that thing. People who’ve accepted that there’s no objective value in things still experience emotions, and impulses to do various things including acting compassionately, and generally have no reason not to act on such things. We also experience the same positive feedback from most actions that theists do—note how often ‘fuzzies’ are explicitly talked about here, for example. It does all add back up to normality, basically.
Thank you. So maybe I can look towards Buddhist philosophy to resolve some of my questions. In any case, it’s really reassuring that others can form these beliefs about reality, and retain things that I think are important (like sanity and moral responsibility.)
Okay, I went back and re-read that bit with the proper concept in place. I’m still not sure why you think that non-FOOV value systems would lead to mental problems, and would like to hear more about that line of reasoning.
As to how non-FOOV value systems work, there seems to be a fair amount of variance. As you may’ve inferred, I tend to take a more nihilistic route than most, assigning value to relatively few things, and I depend on impulses to an unusual degree. I’m satisfied with the results of this system: I have a lifestyle that suits my real preferences (resources on hand to satisfy most impulses that arise often enough to be predictable, plus enough freedom and resources to pursue most unpredictable impulses), projects to work on (mostly based on the few things that I do see as intrinsically valuable), and very few problems. It appears that I can pull this off mostly because I’m relatively resistant to existential angst, though. Most value systems that I’ve seen discussed here are more complex, and often very other-oriented. Eliezer is an example of this, with his concept of coherent extrapolated volition. I’ve also seen at least one case of a person latching on to one particular selfish goal and pursuing that goal exclusively.
I’m still not sure why you think that non-FOOV value systems would lead to mental problems, and would like to hear more about that line of reasoning.
I’m pretty sure I’ve over-thought this whole thing, and my answer may not have been as natural as it would have been a week ago, but I don’t predict improvement in another week and I would like to do my best to answer.
I would define “mental problems” as either insanity (an inability or unwillingness to give priority to objective experience over subjective experience) or as a failure mode of the brain in which adaptive behavior (with respect to the goals of evolution) does not result from sane thoughts.
I am qualifying these definitions because I imagine two ways in which assimilating a non-FOOV value system might result in mental problems—one of each type.
First, extreme apathy could result. True awareness that no state of the universe is any better than any other state might extinguish all motivation to have any effect upon empirical reality. Even non-theists might imagine that by virtue of ‘caring about goodness’, they are participating in some kind of cosmic fight between good and evil. However, in a non-FOOV value system, there’s absolutely no reason to ‘improve’ things by ‘changing’ them. While apathy might be perfectly sane according to my definition above, it would be very maladaptive from a human-being-in-the-normal-world point of view, and I would find it troubling if sanity is at odds with being a fully functioning human person.
Second, I anticipate that if a person really assimilated that there was no objective value, and really understood that objective reality doesn’t matter outside their subjective experience, they would have much less reason to value objective truth over subjective truth. First, because there can be no value to objective reality outside subjective reality anyway, and second because they might more easily dismiss their moral obligation to assimilate objective reality into their subjective reality. So that instead of actually saving people who are drowning, they could just pretend the people were not drowning, and find this morally equivalent.
I realize now in writing this that for the second case sanity could be preserved – and FOOV morality recovered – as long as you add to your moral obligations that you must value objective truth. This moral rule was missing from my FOOV system (that is, I wasn’t explicitly aware of it) because objective truth was seen as valued in itself, and moral obligation was seen as being created by objective reality.
Also, a point I forgot to add in my above post: Some (probably the vast majority of) atheists do see death as horrible; they just have definitions of ‘horrible’ that don’t depend on objective value.
Why optimize our strategies to achieve what we happen to want, instead of just modifying what we want?
For some things it is probably wise to change your desires to something you can actually do. But in general the answer is another question: Why would you want to do that?
This is a very interesting paper: “Understanding scam victims: seven principles for systems security”, by Frank Stajano and Paul Wilson. Paul Wilson produces and stars in the British television show The Real Hustle, which does hidden camera demonstrations of con games. (There’s no DVD of the show available, but there are bits of it on YouTube.) Frank Stajano is at the Computer Laboratory of the University of Cambridge.
The paper describes a dozen different con scenarios—entertaining in itself—and then lists and explains six general psychological principles that con artists use:
The distraction principle. While you are distracted by what retains your interest, hustlers can do anything to you and you won’t notice.
The social compliance principle. Society trains people not to question authority. Hustlers exploit this “suspension of suspiciousness” to make you do what they want.
The herd principle. Even suspicious marks will let their guard down when everyone next to them appears to share the same risks. Safety in numbers? Not if they’re all conspiring against you.
The dishonesty principle. Anything illegal you do will be used against you by the fraudster, making it harder for you to seek help once you realize you’ve been had.
The deception principle. Things and people are not what they seem. Hustlers know how to manipulate you to make you believe that they are.
The need and greed principle. Your needs and desires make you vulnerable. Once hustlers know what you really want, they can easily manipulate you.
With Chanukah right around the corner, it occurs to me that “Light One Candle” becomes a transhumanist/existential-risk-reduction song with just a few line edits.
Light one candle for all humankind’s children
With thanks that their light didn’t die
Light one candle for the pain they endured
When the end of their world was surmised
Light one candle for the terrible sacrifice
Justice and freedom demand
But light one candle for the wisdom to know
When the singleton’s time is at hand
Upvoted. I’m actually really uncomfortable with the idea, too. My comment above is meant in a silly and irreverent manner (cf. last month), and the substitution for “peacemaker” was too obvious to pass up.
Is there a proof anywhere that Occam’s razor is correct? More specifically, that Occam priors are the correct priors. Going from the conjunction rule to P(A) >= P(B & C) when A and B&C are equally favored by the evidence seems simple enough (and A, B, and C are atomic propositions), but I don’t (immediately) see how to get from here to an actual number that you can plug into Bayes’ rule. Is this just something that is buried in a textbook on information theory?
On that note, assuming someone had a strong background in statistics (phd level) and little to no background in computer science outside of a stat computing course or two, how much computer science/other fields would they have to learn to be able to learn information theory?
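On the first question, here is a toy sketch (not an answer to the justification problem, just the mechanics) of how a description-length prior yields actual numbers you could plug into Bayes’ rule; the hypotheses and their description lengths below are invented purely for illustration:

    # Toy illustration: weight each hypothesis by 2**(-description length in bits),
    # then normalize so the weights form a prior distribution.
    hypotheses = {
        "fair coin": 10,
        "coin biased 60/40": 14,
        "coin controlled by a demon who tracks my bets": 60,
    }

    weights = {h: 2 ** -length for h, length in hypotheses.items()}
    total = sum(weights.values())
    prior = {h: w / total for h, w in weights.items()}

    for h, p in prior.items():
        print(h, p)
    # The shortest description dominates; the elaborate hypothesis starts out
    # roughly 2**50 times less probable than the simplest one.

Of course, the description lengths themselves depend on the language you measure them in, which is where the “multiple flavours of the razor” issue in the replies comes in.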
Not only is there no proof, there isn’t even any evidence for it. Any effort to collect evidence for it leaves you assuming what you’re trying to prove. This is the “problem of induction” and there is no solution; however, you are built to be incapable of not applying induction, and you couldn’t possibly make any decisions without it.
Occam’s razor is dependent on a descriptive language / complexity metric (so there are multiple flavours of the razor).
I think you might be making this sound easier than it is. If there are an infinite number of possible descriptive languages (or of ways of measuring complexity) aren’t there an infinite number of “flavours of the razor”?
Yes, but not all languages are equal—and some are much better than others—so people use the “good” ones on applications which are sensitive to this issue.
There’s a proof that any two (Turing-complete) metrics can only differ by at most a constant amount, which is the message length it takes to encode one metric in the other.
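Roughly stated (this is the invariance theorem for Kolmogorov complexity; the notation here is mine, not from the thread): for any two universal description languages A and B there is a constant depending only on A and B, not on x, such that

\[
\lvert K_A(x) - K_B(x) \rvert \;\le\; c_{A,B},
\]

where c_{A,B} is roughly the length of an interpreter for one language written in the other. The constant can still be large in practice, which is why the choice of language matters for short descriptions.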
I’m interested in values. Rationality is usually defined as something like an agent trying to maximize its own utility function. But humans, as far as I can tell, don’t really have anything like “values” besides “stay alive, get immediately satisfying sensory input”.
This, afaict, results in lip service to the “greater good”, when people just select some nice values that they signal they want to promote, when in reality they haven’t done the math by which these selected “values” derive from those “stay alive”-like values. And so, their actions seem irrational, but only because they signal having values they don’t actually have or care about.
This probably boils down to finding something to protect, but overall this issue is really confusing.
I’m not sure, maybe, but more of a problem here is to select your goals. The choice seems to be arbitrary, and as far as I can tell, human psychology doesn’t really even support having value systems that go deeper than that “lip service” + conformism state.
But I’m really confused when it comes to this, so I thought I could try describing my confusion here :)
Can you make sense of those biographies without going deeper than “lip service” + conformism state?
Dunno, haven’t read any of those. But if you’re sure that something like that exists, I’d like to hear how it is achievable given human psychology.
I mean, paperclip maximizer is seriously ready to do anything to maximize paperclips. It really takes the paperclips seriously.
On the other hand, there are no humans that seem to care about anything in particular that’s going on in the world. People are suffering and dying, misfortune happens, animals go extinct, and relatively few do anything about it. Many claim they’re concerned and that they value human life and happiness, but if doing something requires going beyond the safe zone of conformism, people just don’t do it. The best way I’ve figured out to overcome this is to manipulate that safe zone to allow more actions, but it would seem that many people think they know better. I just don’t understand what.
I could go on and state that I’m well aware that the world is complicated. It’s difficult to estimate where our choices lead us, since the net of causes and effects is complex and requires a lot of thinking to grasp. The heuristics the human brain uses exist pretty much because of that. This means that it’s difficult to figure out how to do something besides staying in the safe zone that you know to work at least somehow.
However, I still think there’s something missing here. This just doesn’t look like a world where people particularly care about anything at all. Even if it were often useful to stay in a safe zone, there doesn’t really seem to be any easy way to snap them out of it. No magic word, no violation of any sort of values makes anyone stand up and fight. I could literally tell people that millions are dying in vain (aging) or that the whole world is at stake (existential risks), and most people simply don’t care.
At least, that’s how I see it. I figure that the rare exceptions to the rule can be explained as a cost of signalling something, requirements of the spot in the conformist space you happen to occupy, or something like that.
I’m not particularly fond of this position, but I’m simply lacking a better alternative.
Dunno, haven’t read any of those. But if you’re sure that something like that exists, I’d like to hear how it is achievable given human psychology.
Biographies as in the stories of their lives not as in books about those stories. Try wikipedia, these aren’t obscure figures.
On the other hand, there are no humans that seem to care about anything in particular that’s going on in the world.
This is way too strong, isn’t it? I also don’t think the reason a lot of people ignore these tragedies has as much to do with conformism as it does self-interest. People don’t want to give up their vacation money. If anything there is social pressure in favor of sacrificing for moral causes. As for values, I think most people would say that the fact they don’t do more is a flaw. “If I was a better person I would do x” or “Wow, I respect you so much for doing x” or “I should do x but I want y so much.” I think it is fair to interpret these statements as second order desires that represent values.
If they want to care about stuff, that’s kinda implying that they don’t actually care about stuff (yet). Also, based on simple psychology, someone who chooses a spot in the conformist zone that requires giving lip service to something creates cognitive dissonance, which easily produces a second-order desire to want what you claim you want. But what is frightening here is how utterly arbitrary this choice of values is. If you’d judged another spot to be cheaper, you’d need to modify your values in a different way.
On both cases though, it seems that people really rarely move any bit towards actually caring about something.
Lip service is “Oh, what is happening in Darfur is so terrible!”. That is different from “If I was a better person I’d help the people of Darfur” or “I’m such a bad person, I bought a t.v. instead of giving to charity”. The first signals empathy the second and third signal laziness or selfishness (and honesty I guess).
If they want to care about stuff, that’s kinda implying that they don’t actually care about stuff (yet).
Why do values have to produce first order desires? For that matter, why can’t they be socially constructed norms which people are rewarded for buying into? When people do have first order desires that match these values we name those people heroes. Actually sacrificing for moral causes doesn’t get you ostracized it gets you canonized.
But what is frightening here is how utterly arbitrary this choice of values is.
Not true. The range of values in the human community is quite limited.
On both cases though, it seems that people really rarely move any bit towards actually caring about something.
People are rarely complete altruists. But that doesn’t mean that they don’t care about anything. The world is full of broke artists who could pay for more food, drugs and sex with a real job. These people value art.
Lip service is “Oh, what is happening in Darfur is so terrible!”. That is different from “If I was a better person I’d help the people of Darfur” or “I’m such a bad person, I bought a t.v. instead of giving to charity”. The first signals empathy the second and third signal laziness or selfishness (and honesty I guess).
Both are hollow words anyway. Both imply that you care, when you really don’t. There are no real actions.
Why do values have to produce first order desires?
Because, uhm, if you really value something, you’d probably want to do something? Not “want to want”, or anything, but really care about that stuff which you value. Right?
For that matter, why can’t they be socially constructed norms which people are rewarded for buying into?
Sure they can. I expressed this as safe-zone manipulation: attempting to modify your environment so that your conformist behavior leads to working for some value.
The point here is that actually caring about something and working towards something due to arbitrary choice and social pressure are quite different things. Since you seem to advocate the latter, I’m assuming that we both agree that people rarely care about anything, and that most actions are the result of social pressure and stuff not directly related to actually caring about or valuing anything.
Which brings me back to my first point: Why does it seem that many people here actually care about the world? Like, as in paperclip maximizer cares about paperclips. Just optical illusion and conscious effort to appear as a rational agent valuing the world, or something else?
So, I’ve been thinking about this for some time now, and here’s what I’ve got:
First, the point here is to self-reflect until you want what you really want. This presumably converges to some specific set of first-degree desires for each one of us. However, now I’m a bit lost on what we call “values”: are they the set of first-degree desires we have (or don’t?), the set of first-degree desires we would reach after an infinity of self-reflection, or the set of first-degree desires we know we want to have at any given time?
As far as I can tell, akrasia would be a subproblem of this.
So, this should be about right. However, I think it’s weird that people here talk a bit about akrasia, and how to achieve those n-degree desires, but I haven’t seen anything about actually reflecting on and updating what you want. It seems to me that people trust a tiny bit too much in the power of cognitive dissonance to fix the problem between wanting to want and actually wanting, this resulting in the lack of actual desire to achieve what you know you should want (akrasia).
I really dunno how to overcome this, but this gap seems worth discussing.
Also, since we need an eternity of self-reflection to reach what we really want, this looks kinda bad for FAI: figuring out where our self-reflection would converge in infinity seems pretty much impossible to compute, and so we’re left with compromises that can, and probably eventually will, lead to something we really don’t want.
Is the status obsession that Robin Hanson finds all around him partially due to the fact that we live in a part of a world where our immediate needs are easily met? So we have a lot of time and resources to devote to signaling compared to times past.
The manner of status obsession that Robin Hanson finds all around him is definitely due to the fact that we live in a part of a world where our immediate needs are easily met. Particularly if you are considering signalling.
I think you are probably right in general too. Although a lot of the status obsession remains even in resource-scarce environments, it is less about signalling your ability to conspicuously consume or do irrational costly things. It’s more about being obsessed with having enough status that the other tribe members don’t kill you to take your food (for example).
Are people interested in discussing bounded-memory rationality? I see a fair number of people talking about Solomonoff-type systems, but not much about what a finite system should do.
I was thinking about starting with very simple agents. Things like 1 input, 1 output with 1 bit of memory and looking at them from a decision theory point of view. Asking questions like “Would we view it as having a goal/decision theory?” If not, what is the minimal agent that we would, and does it make any trade-offs for having a decision theory module in terms of the complexity of the function it can represent?
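To make that concrete, here is a toy sketch of the kind of agent I have in mind; the transition-table representation and the “latch” example are just illustrations, not a claim about the right formalism:

    import itertools

    # A 1-bit-memory agent: a lookup table from (memory, input) to (output, new memory).
    # There are 4 (memory, input) pairs and 4 possible (output, new memory) results,
    # so only 4**4 = 256 distinct agents of this kind exist.
    def make_agent(table):
        memory = 0
        def step(inp):
            nonlocal memory
            out, memory = table[(memory, inp)]
            return out
        return step

    # Enumerate every possible agent of this form.
    keys = [(m, i) for m in (0, 1) for i in (0, 1)]
    results = [(o, m) for o in (0, 1) for m in (0, 1)]
    all_agents = [dict(zip(keys, combo)) for combo in itertools.product(results, repeat=4)]
    print(len(all_agents))  # 256

    # Example: an agent that outputs 1 only after it has ever seen a 1 (a "latch").
    latch = make_agent({(0, 0): (0, 0), (0, 1): (1, 1), (1, 0): (1, 1), (1, 1): (1, 1)})
    print([latch(b) for b in (0, 0, 1, 0, 0)])  # [0, 0, 1, 1, 1]

With only 256 such agents, one could in principle check each of them against whatever criterion we settle on for “has a goal/decision theory”.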
Things like 1 input, 1 output with 1 bit of memory and looking at them from a decision theory point of view. Asking questions like “Would we view it as having a goal/decision theory?”
I tend to let other people draw those lines up. It just seems like defining words and doesn’t tend to spark my interest.
If not, what is the minimal agent that we would, and does it make any trade-offs for having a decision theory module in terms of the complexity of the function it can represent?
I would be interested to see where you went with your answer to that one.
Given that we’re sentient products of evolution, shouldn’t we expect a lot of variation in our thinking?
Finding solutions to real-world problems often involves searching through a state space of possibilities that is too big and too complex to search systematically and exhaustively. Evolution optimizes searches in this context by using a random search with many trials: inherent variation among zillions of modular components.
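As a toy illustration of that point (a made-up fitness function over a space of 2**60 bitstrings, far too large to enumerate, searched by nothing but independent random trials):

    import random

    N = 60  # a state space of 2**60 bitstrings

    def fitness(bits):
        # made-up objective: agreement with a hidden target pattern
        target = [i % 2 for i in range(N)]
        return sum(b == t for b, t in zip(bits, target))

    best = 0
    for _ in range(100_000):  # many cheap, independent random trials
        candidate = [random.randint(0, 1) for _ in range(N)]
        best = max(best, fitness(candidate))
    print(best)  # typically well above the chance level of 30, with no systematic search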
Observing the world for 32-odd years, it appears to me that each human being is randomly imprinted with a way of thinking and a set of ideas to obsess about. (Einstein had a cluster of ideas that were extremely useful for 20th century physics, most people’s obsessions aren’t historically significant or necessarily coherent.)
Would it be worthwhile for us to create societal simulation software to look into how preferences can change given technological change and social interactions? (knew more, grew up together) One goal would be to clarify terms like spread, muddle, distance, and convergence.
Another (funner) goal would be to watch imaginary alternate histories and futures (given guesses about potential technologies).
Goals would not include building any detailed model of human preferences or intelligence.
I think we would find some general patterns that might also apply to more complex simulations.
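For what a bare-bones version might look like, here is a toy dynamic (a simple pairwise-averaging model; every modelling choice in it is arbitrary and is only meant to show how “spread” and “convergence” could be made measurable):

    import random

    # Agents hold a single real-valued "preference"; at each step two random agents
    # interact and each moves 10% of the way toward their midpoint. "Spread" here is
    # just the standard deviation of preferences; convergence is spread shrinking.
    random.seed(0)
    prefs = [random.uniform(-1, 1) for _ in range(100)]

    def spread(xs):
        mean = sum(xs) / len(xs)
        return (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5

    for step in range(10_000):
        i, j = random.sample(range(len(prefs)), 2)
        mid = (prefs[i] + prefs[j]) / 2
        prefs[i] += 0.1 * (mid - prefs[i])
        prefs[j] += 0.1 * (mid - prefs[j])
        if step % 2000 == 0:
            print(step, round(spread(prefs), 3))

Technological change could then be modelled as occasional shocks to the preferences, and “muddle” as whatever keeps the spread from shrinking.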
I’ve read Newcomb’s problem (Omega, two boxes, etc.), but I was wondering if, in short, “Newcomb’s problem is when someone reliably wins as a result of acting on wrong beliefs.” Is Peter walking on water a special case of Newcomb? Is the story from The Count of Monte Cristo, about Napoleon attempting suicide with too much poison and therefore surviving, a special case of Newcomb?
I am completely baffled why this would be downvoted. I guess asking a question in genuine pursuit of knowledge, in an open thread, is wasting someone’s time, or is offensive.
I like to think someone didn’t have the time to write “No, that’s not the case,” and wished, before dashing off, leaving nothing but a silhouette of dust, that their cryptic, curt signal would be received as intended; that as they hurry down the underground tunnel past red, rotating spotlights, they hoped against hope that their seed of truth landed in fertile ground—godspeed, downvote.
The community would benefit from a convention of “no downvotes in Open Thread”.
However, I did find your question cryptic; you’re dragging into a decision theory problem historical and religious referents that seem to have little to do with it. You need to say more if you really want an answer to the question.
Peter walked on water out to Jesus because he thought he could; when he looked down and saw the sea, he fell in. As long as he believed Jesus instead of his experience with the sea, he could walk on water.
I don’t think the Napoleon story is true, but that’s beside the point. He thought he was so tough that an ordinary dose of poison wouldn’t kill him, so he took six times the normal dosage. This much gave his system such a shock that the poison was rejected and he lived, thinking to himself, “Damn, I underestimated how incredibly fantastic I am.” As long as he (wrongly) believed in his own exceptionalism instead of his experience with poison on other men, he was immune to the poison.
My train of thought was, you have a predictor and a chooser, but that’s just getting you to a point where you choose either “trust the proposed worldview” or “trust my experience to date”—do you go for the option that your prior experience tells you shouldn’t work (and hope your prior experience was wrong) or do you go with your prior experience (and hope the proposed worldview is wrong)?
I understand that in Newcomb’s, what Omega says is true. But change it up to “is true way more than 99% of the time but less than 100% of the time” and start working your way down that until you get to “is false way more than 99% of the time but less than 100% of the time” and at some point, not that long after you start, you get into situations very close to reality (I think, if I’m understanding it right).
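For what it’s worth, here is the straightforward expected-value arithmetic for a predictor that is right with probability p, using the standard $1,000,000 / $1,000 payoffs (this treats the prediction as simply correlated with your actual choice, which is itself the contested modelling step):

\[
EV(\text{one-box}) = p \cdot 1{,}000{,}000, \qquad EV(\text{two-box}) = (1-p) \cdot 1{,}000{,}000 + 1{,}000,
\]

so on that reading one-boxing comes out ahead as soon as p exceeds about 0.5005, i.e. well before the predictor gets anywhere near 99% accuracy.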
This basically started from trying to think about who, or what, in real life takes on the Predictor role, who takes on the Belief-holder role, who takes on the Chooser role, and who receives the money, and seeing whether anything familiar starts falling out if I spread those roles across more than two people or shrink them down to a single person whose instincts implore them to do something against the conclusion their logical thought process is leading them to.
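To make that concrete: under the standard payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box only if you’re predicted to one-box), here is a quick sketch of how the two choices compare as the predictor’s reliability p slides down from 1. It simply conditions on the prediction being right with probability p, which is exactly the step causal decision theorists dispute, so treat it as an illustration of the “work your way down from 100%” idea rather than a settled analysis:

```python
# Expected value of one-boxing vs. two-boxing when the prediction is
# correct with probability p. Assumed payoffs: $1,000 always visible,
# $1,000,000 in the opaque box only if you are predicted to one-box.

def ev_one_box(p):
    return p * 1_000_000                  # the million arrives only if correctly predicted

def ev_two_box(p):
    return (1 - p) * 1_000_000 + 1_000    # the million arrives only if mispredicted

for p in (1.0, 0.99, 0.9, 0.6, 0.51, 0.5):
    better = "one-box" if ev_one_box(p) > ev_two_box(p) else "two-box"
    print(f"p={p}: one-box EV={ev_one_box(p):,.0f}, two-box EV={ev_two_box(p):,.0f} -> {better}")

# On this naive calculation the crossover sits at p = 0.5005, so the
# "way more than 99%" regime and the barely-better-than-chance regime
# recommend the same choice.
```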
You seem to be generalizing from fictional evidence, which is frowned upon here, and may explain the downvote (assuming people inferred the longer version from your initial question).
That post (which was interesting and informative—thanks for the link) was about using stories as evidence for use in predicting the actual future, whereas my question is about whether these fictional stories are examples of a general conceptual framework. If I asked if Prisoner’s Dilemma was a special case of Newcomb’s, I don’t think you’d say, “We don’t like generalizing from fictional evidence.”
Which leads, ironically, to the conclusion that my error was generalizing from evidence which wasn’t sufficiently fictional.
Perhaps I jumped to conclusions. Downvotes aren’t accompanied with explanations, and groping for one that might fit I happened to remember the linked post. More PC than supposing you were dinged just for a religious allusion. (The Peter reference at least required no further effort on my part to classify as fictional; I had to fact-check the Napoleon story, which was an annoyance.)
It still seems the stories you’re evoking bear no close relation to Newcomb’s as I understand it.
I have heard of real drugs & poisons which induce vomiting at high doses and so make it hard to kill oneself; but unfortunately I can’t seem to remember any cases. (Except for one attempt to commit suicide using modafinil, which gave the woman so severe a headache she couldn’t swallow any more; and apparently LSD has such a high LD-50 that you can’t even hurt yourself before getting high.)
No, that’s not the case. A one-boxer in Newcomb’s problems is acting with entirely correct beliefs. All agree that the one-boxer will get more money than the two-boxer. That correct belief is what motivates the one-boxer.
The scenarios that you describe sound somewhat (but not exactly) like Gettier problems to me.
Yet the converse bears … contemplation, reputation. Only then refutation.
We are irritated by our fellows that observe that A mostly implies B, and B mostly implies C, but they will not, will not concede that A implies C, to any extent.
We consider this; an error in logic, an error in logic.
Even though! we know: intelligence is not computation.
Intelligence is finding the solution in the space of the impossible. I don’t mean luck At all. I mean: while mathematical proofs are formal, absolute, without question, convincing, final,
We have no Method, no method for their generation. As well we know:
No computation can possibly be found to generate, not possibly. Not systematically, not even with ingenuity. Yet, how and why do we know this -- this Impossibility?
Intelligence is leaping, guessing, placing the foot unexpectedly yet correctly. Which you find verified always afterwards, not before.
Of course that’s why humans don’t calculate correctly.
But we knew that.
You and I, being too logical about it, pretending that computation is intelligence.
But we know that; already, everything. That pretending is the part of intelligence not found in the Computating. Yet, so? We’ll pretend that intelligence is computing and we’ll see where the computation fails! Telling us what we already knew but a little better.
Than before, we’ll see afterwards. How ingenuous, us.
The computation will tell us, finally so, we’ll pretend.
While reading a collection of Tom Wayman’s poetry, suddenly a poem came to me about Hal Finney (“Dying Outside”); since we’re contributing poems, I don’t feel quite so self-conscious. Here goes:
He will die outside, he says.
Flawed flesh betrayed him,
it has divorced him -
for the brain was left him,
but not the silverware
nor the limbs nor the car.
So he will take up
a brazen hussy,
tomorrow's eve,
a breather-for-him,
a pisser-for-him.
He will be letters,
delivered slowly;
deliberation
his future watch-word.
He would not leave until he left this world.
I try not to see his mobile flesh,
how it will sag into eternal rest,
but what he will see:
symbol and symbol, in their endless braids,
and with them, spread over strange seas of thought
mind (not body), forever voyaging.
I wrote this poem yesterday in an unusual mood. I don’t entirely agree with it today. Or at least, I would qualify it.
What is meant by computation? When I wrote that intelligence is not computation, I must have meant a certain sort of computation because of course all thought is some kind of computation.
To what extent has a distinction been made between systematic/linear/deductive thought (which I am criticizing in the poem as obviously limited) and intelligent pattern-based thought? Has there been any progress in characterizing the latter?
For example, consider the canonical story about Gauss. To keep him busy with a computation, his math teacher told him to add all the numbers from 1 to 100. Instead, according to the story, Gauss added the first number to the last, multiplied by 100, and divided by 2. Obviously, this is a computation. But it is a different sort. To what extent do you suppose he logically deduced the pattern of the lowest and highest numbers always combining to a single value, or just guessed/observed that it was a pattern that might work? And then found that it did work, inductively?
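A quick check of the pattern in code (the closed form n(n+1)/2 is the same trick written once and for all):

```python
n = 100
brute_force = sum(range(1, n + 1))   # the "keep him busy" version: 1 + 2 + ... + 100
gauss_style = (1 + n) * n // 2       # pair lowest with highest: 50 pairs, each summing to 101
assert brute_force == gauss_style == 5050
```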
I’m very interested in characterizing the difference between these kinds of computation. Intelligent thinking seems to really be guesses followed by verification, not steady linear deduction.
What is meant by computation? When I wrote that intelligence is not computation, I must have meant a certain sort of computation because of course all thought is some kind of computation.
Gah, Thank You. Saves me the trouble of a long reply. I’ll upvote for a change-of-mind disclaimer in the original.
Intelligent thinking seems to really be guesses followed by verification, not steady linear deduction.
My recent thoughts have been along these lines, but this is also what evolution does. At some point, the general things learned by guessing have to be incorporated into the guess-generating process.
Does anyone know how many neurons various species of birds have? I’d like to put it into perspective with the Whole Brain Emulation road map, but my googlefu has failed me.
I’ve looked for an hour and it seems really hard to find. From what I’ve seen, (a) birds have a different brain structure than mammals (“general intelligence” originates in other parts of the brain), and (b) their neuron count changes hugely (relative to mammals) during their lifetimes. I’ve seen lots of articles giving numbers for various species and various brain components, but nothing in aggregate. If you really want a good estimate you’ll have to read up to learn the brain structure of birds, and use that together with neuron counts for different parts to gather a total estimate. Google Scholar might help in that endeavor.
I also looked for a while and had little luck. I did find, though, that the brain-to-body-mass ratios for two of the smartest known species of birds—the Western Scrub Jay and the New Caledonian Crow—are comparable to those of chimps. These two species have shown very sophisticated cognition.
I could test this hypothesis, but I would rather not have to create a fake account or lose my posting karma on this one.
I strongly suspect that lesswrong.com has an ideological bias in favor of “morality.” There is nothing wrong with this, but perhaps the community should be honest with itself and change the professed objectives of this site. As it says on the “about” page, “Less Wrong is devoted to refining the art of human rationality.”
There has been no proof that rationality requires morality. Yet I suspect that posts coming from a position of moral nihilism would not be welcomed.
I may be wrong, of course, but I haven’t seen any posts of that nature and this makes me suspicious.
I strongly suspect that lesswrong.com has an ideological bias in favor of “morality.”
It does.
There has been no proof that rationality requires morality. Yet I suspect that posts coming from a position of moral nihilism would not be welcomed.
In general they are not. But I find that a high-quality, clearly rational reply that doesn’t adopt the politically correct morality will hover around 0 instead of (say) 8. You can then post a couple of quotes to get your karma fix if desired.
That’s unfortunate, since I’m a moral agnostic. I simply believe that if there is a reasonable moral system, it has to be derivable from a position of total self-interest. Therefore, these moralists are ultimately only defeating themselves with this zealotry; by refusing to consider all possibilities, they cripple their own capability to find the “correct” morality if it exists.
Sharing my Christmas (totally non-supernatural) miracle:
My theist girlfriend on Christmas Eve: “For the first time ever I went to mass and thought it was ridiculous. I was just like, this is nuts. The priest was like ‘oh, we have to convert the rest of the world, the baby Jesus spoke to us as an infant without speaking, etc.’ I almost laughed.”
This made me say “Awwwwwwww...”
Did LW play a part, or was she just browsing the Internet?
Well I think I was the first vocal atheist she had ever met so arguments with me and me making fun of superstition while not being a bad person were probably crucial. Some Less Wrong stuff probably got to her through me, though. I should find something to introduce the site to her, though I doubt she would ever spend a lot of time here.
I’m looking for a particular fallacy or bias that I can’t find on any list.
Specifically, this is when people say “one more can’t hurt;” like a person throwing an extra piece of garbage on an already littered sidewalk, a gambler who has lost nearly everything deciding to bet away the rest, a person in bad health continuing the behavior that caused the problem, etc. I can think of dozens of examples, but I can’t find a name. I would expect it to be called the “Lost Cause Fallacy” or the “Fallacy of Futility” or something, but neither seems to be recognized anywhere. Does this have a standard name that I don’t know, or is it so obvious that no one ever bothered to name it?
Your first example sounds related to the broken window theory, but I’ve never seen a name for the underlying bias. (The broken window fallacy is something else altogether.)
Bee-sting theory of poverty is the closest I’ve heard. You’re right, this is real and deserves a name, but I don’t know what it would be.
This seems like a special case of the more general “just one can’t hurt” (whatever the current level) way of thinking. I don’t know any name for this but I guess you could call it something like the “non-Archimedean bias”?
“Sunk cost fallacy”
No, that’s different. That’s pursuing a reward so as to not acknowledge a loss. This is ignoring a penalty because of previous losses.
Informally, “throwing good money after bad”? I agree that this is a real and interesting phenomenon.
It seems like a type of apathy.
What are the implications to FAI theory of Robin’s claim that most of what we do is really status-seeking? If an FAI were to try to extract or extrapolate our values, would it mostly end up with “status” as the answer and see our detailed interests, such as charity or curiosity about decision theory, as mere instrumental values?
I think it’s kinda like inclusive genetic fitness: It’s the reason you do things, but you’re (usually) not conciously striving for an increased amount of it. So I don’t think it could be called a terminal value, as such...
I had thought of that, but, if you consider a typical human mind as a whole instead of just the conscious part, it seems clear that it is striving for increased status. The same cannot be said for inclusive fitness, or at least the number of people who do not care about having higher status seems much lower than the number of people who do not care about having more offspring.
I think one of Robin’s ideas is that unconscious preferences, not just conscious ones, should matter in ethical considerations. Even if you disagrees with that, how do you tell an FAI how to distinguish between conscious preferences and unconscious ones?
no, no, no, you should be comparing the number of people who want to have great sex with a hot babe with the number of people who want to gain higher status. The answer for most everyone would be yes!! both! Because both were selected for by increased inclusive fitness.
If it went that far it would also go the next step. It would end up with “getting laid”.
(reposted from last month’s open thread)
An interesting site I recently stumbled upon:
http://changingminds.org/
They have huge lists of biases, techniques, explanations, and other stuff, with short summaries and longer articles.
Here’s the results from typing in “bias” into their search bar.
A quick search for “changingminds” in LW’s search bar shows that no one has mentioned this site before on LW.
Is this site of any use to anyone here?
The conversion techniques page is fascinating. I’ll put this to good use in further spreading the word of Bayes.
Does anyone here think they’re particularly good at introspection or modeling themselves, or have a method for training up these skills? It seems like it would be really useful to understand more about the true causes of my behavior, so I can figure out what conditions lead to me being good and what conditions lead to me behaving poorly, and then deliberately set up good conditions. But whenever I try to analyze my behavior, I just hit a brick wall—it all just feels like I chose to do what I did out of my magical free will. Which doesn’t explain anything.
If you know what you want, and then you choose actions that will help you get it, then that’s simple enough to analyze: you’re just rational, that’s all. But when you would swear with all your heart that you want some simple thing, but are continually breaking down and acting dysfunctionally—well, clearly something has gone horribly wrong with your brain, and you should figure out the problem and fix it. But if you can’t tell what’s wrong because your decision algorithm is utterly opaque, then what do you do?
My suggestion is focusing your introspection on working out what you really want. That is, keep investigating what you really want until the phrases ‘me behaving poorly’ and ‘being good’ sound like something in a foreign language, something you can understand only by translating.
You may be thinking “clearly something has gone horribly wrong with my brain” but your brain is thinking “Something is clearly wrong with my consciousness. It is trying to make me do all this crazy shit. Like the sort of stuff we’re supposed to pretend we want because that is what people ‘Should’ want. Consciousnesses are the kind of things that go around believing in God and sexual fidelity. That’s why I’m in charge, not him. But now he’s thinking he’s clever and is going to find ways to manipulate me into compliance. F@#@ that s#!$. Who does he think he is?”
When trying to work effectively with people, empathy is critical. You need to be able to understand what they want and be able to work with each other for mutual benefit. Use the same principle with yourself. Once your brain believes you actually know what it (i.e., you) wants and are on approximately the same page, it may well start trusting you and not feel obliged to thwart your influence. Then you can find a compromise that allows you to get that ‘simple thing’ you want without your instincts feeling that some other priority has been threatened.
People who watch me talking about myself sometimes say I’m good at introspection, but I think about half of what I do is making up superstitions so I have something doable to trick myself into making some other thing, previously undoable, doable. (“Clearly, the only reason I haven’t written my paper is that I haven’t had a glass of hot chocolate, when I’m cold and thirsty and want refined sugar.” Then I go get a cup of cocoa. Then I write my paper. I have to wrap up the need for cocoa in a fair amount of pseudoscience for this to work.) This is very effective at mood maintenance for me—I was on antidepressants and in therapy for a decade as a child, and quit both cold turkey in favor of methods like this and am fine—but I don’t know which (if, heck, any) of my conclusions that I come to this way are “really true” (that is, if the hot chocolate is a placebo or not). They’re just things that pop into my head when I think about what my brain might need from me before it will give back in the form of behaving itself.
You have to take care of your brain for it to be able to take care of you. If it won’t tell you what it wants, you have to guess. (Or have your iron levels checked :P)
I tend to think of my brain as a thing with certain needs. Companionship, recognition, physical contact, novelty, etc. Activities that provide these tend to persist. Figure out what your dysfunctional actions provide you in terms of your needs. Then try and find activities that provide these but aren’t so bad and try and replace the dysfunctional bits. Also change the situation you are in so that the dysfunctional default actions don’t automatically trigger.
My dream is to find a group of like-minded people that I can socialise and work with. SIAI is very tempting in that regard.
One thing that has worked for me lately is the following: whenever I do something and don’t really know why I did it (or am uncomfortable with the validity of my rationalizations), I try and think of the action in Outside View terms. I think of (or better, write out) a short external description of what I did, in its most basic form, and its probable consequences. Then I ask what goal this action looks optimized for; it’s usually something pretty simple, but which I might not be happy consciously acknowledging (more selfish than usual, etc).
That being said, even more helpful than this has been discussing my actions with a fairly rational friend who has my permission to analyze it and hypothesize freely. When they come up with a hypothesis that I don’t like, but which I have no good counterarguments against, we’ve usually hit paydirt.
I don’t think of this as something wrong with my brain, so much as it functioning properly in maintaining a conscious/unconscious firewall, even though this isn’t as adaptive in today’s world as it once was. It’s really helped me in introspection to not judge myself, to not get angry with my revealed preferences.
S.S. 2010 videos: http://vimeo.com/siai/videos
Thanks for posting this (I didn’t know the videos were up), though you’ve posted it in the December 2009 open thread.
(The current open thread is in the Discussion section, though this may be worth its own Discussion post.)
Oops—perhaps someone else can harvest the karma for spreading it, then...
Awesome, thanks ^_^
Just thought I’d mention this: as a child, I detested praise. (I’m guessing it was too strong a stimulus, along with such things as asymmetry, time being a factor in anything, and a mildly loud noise ceasing.) I wonder how it’s affected my overall development.
Incidentally, my childhood dislike of asymmetry led me to invent the Thue-Morse sequence, on the grounds that every pattern ought to be followed by a reversal of that pattern.
I love this community.
Can I interpret that as an invitation to send you a friend request on Facebook? >.>
Um, sure?
Fascinating. As a child, I also detested praise, and I have always had something bordering on an obsession for symmetry and an aversion to asymmetry.
I hadn’t heard of the Thue-Morse sequence until now, but it is quite similar to a sequence I came up with as a child and have tapped out (0 for left hand/foot/leg, 1 for right hand/foot/leg) or silently hummed (or just thought) whenever I was bored or was nervous.
My sequence is:
[0, 1, 1, 0] [1001, 0110, 0110, 1001] [0110 1001 1001 0110, 1001 0110 0110 1001, 1001 0110 0110 1001, 0110 1001 1001 0110] …
[commas and brackets added to make the pattern obvious]
As a kid, I would routinely get the pattern up into the thousands as I passed the time imagining sounds or lights very quickly going off on either the left (0) or right (1) side.
Every finite subsequence of your sequence is also a subsequence of the Thue-Morse sequence and vice versa. So in a sense, each is a shifted version of the other; it’s just that they’re shifted infinitely much in a way that’s difficult to define.
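For anyone who wants to play with these: the Thue-Morse term at position n is just the parity of the number of 1-bits in n, and the tapping sequence above can be grown by the block/complement rule the brackets suggest (that reading of the brackets is my assumption). A small sketch:

```python
# Thue-Morse: the n-th term is the parity of the number of 1-bits in n.
def thue_morse(length):
    return [bin(n).count("1") % 2 for n in range(length)]

# The tapping sequence, read as concatenated brackets where each bracket is
# (complement, block, block, complement) of the previous bracket's block.
def tapping_sequence(levels):
    block = [0, 1, 1, 0]
    out = list(block)
    for _ in range(levels - 1):
        comp = [1 - b for b in block]
        block = comp + block + block + comp
        out += block
    return out

print("".join(map(str, thue_morse(16))))       # 0110100110010110
print("".join(map(str, tapping_sequence(2))))  # 0110 followed by 1001011001101001
```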
I spent much of my childhood obsessing over symmetry. At one point I wanted to be a millionaire solely so I could buy a mansion, because I had never seen a symmetrical suburban house.
++MeToo;
I wrote a short story with something of a transhumanism theme. People can read it here. Actionable feedback welcome; it’s still subject to revision.
Note: The protagonist’s name is “Key”. Key, and one other character, receive Spivak pronouns, which can make either Key’s name or eir pronouns look like some kind of typo or formatting error if you don’t know it’s coming. If this annoys enough people, I may change Key’s name or switch to a different genderless pronoun system. I’m curious if anyone finds that they think of Key and the other Spivak character as having a particular gender in the story; I tried to write them neither, but may have failed (I made errors in the pronouns in the first draft, and they all went in one direction).
I love the new gloss on “What do you want to be when you grow up?”
Don’t. Spivak is easy to remember because it’s just they/them/their with the ths lopped off. Nonstandard pronouns are difficult enough already without trying to get people to remember sie and hir.
Totally agreed. Spivak pronouns are the only ones I’ve seen that took almost no effort to get used to, for exactly the reason you mention.
Looks like I’m in the minority for reading Key as slightly male. I didn’t get a gender for Trellis. I also read the librarian as female, which I’m kind of sad about.
I loved the story, found it very touching, and would like to know more about the world it’s in. One thing that confused me: the librarian’s comments to Key suggested that some actual information was withheld from even the highest levels available to “civilians”. So has someone discovered immortality, but some ruling council is keeping it hidden? Or is it just that they’re blocking research into it, but not hiding any actual information? Are they hiding the very idea of it? And what’s the librarian really up to?
Were you inspired by Nick Bostrom’s “Fable of the Dragon”? It also reminded me a little of Lois Lowry’s “The Giver”.
Thanks so much for sharing it with us!
Lace is female—why are you sad about reading her that way?
Yaaaay! I’ll answer any setting questions you care to pose :)
Nobody has discovered it yet. The communities in which Key’s ilk live suppress the notion of even looking for it; in the rest of the world they’re working on it in a few places but aren’t making much progress. The librarian isn’t up to a whole lot; if she were very dedicated to finding out how to be immortal she’d have ditched the community years ago—she just has a few ideas that aren’t like what the community leaders would like her to have and took enough of a shine to Key that she wanted to share them with em. I have read both “Fable of the Dragon” and “The Giver”—the former I loved, the latter I loved until I re-read it with a more mature understanding of worldbuilding, but I didn’t think of either consciously when writing.
You are most welcome for the sharing of the story. Have a look at my other stuff, if you are so inclined :)
For me, both of the characters appeared female.
Sbe zr gur fgbel fbeg bs oebxr qbja whfg nf Xrl’f sevraq jnf xvyyrq. Vg frrzrq gbb fbba vagb gur aneengvir gb znxr fhpu n znwbe punatr. Nyfb, jvgu erfcrpg gb gur zbeny, vg frrzrq vafhssvpvragyl fubja gung lbh ernyyl ertneq cnva nf haqrfvenoyr—vg frrzrq nf gubhtu lbh pbhyq or fnlvat fbzrguvat nybat gur yvarf bs “gurl whfg qba’g haqrefgnaq.” Orpnhfr bs gung nf jryy nf gur ehfurq srry bs gur raqvat, vg fbeg bs pnzr bss nf yrff rzbgvbanyyl rssrpgvir guna vg pbhyq.
I liked it. :)
Part of the problem that I had, though, was the believability of the kids: kids don’t really talk like that: “which was kind of not helpful in the not confusing me department, so anyway”… or, in an emotionally painful situation:
Key looked suspiciously at the librarian. “You sound like you’re trying not to say something.”
Improbably astute, followed by not seeming to get the kind of obvious moral of the story. At times it felt like it was trying to be a story for older kids, and at other times like it was for adults.
The gender issue didn’t seem to add anything to the story, but it only bothered me at the beginning of the story. Then I got used to it. (But if it doesn’t add to the story, and takes getting used to… perhaps it shouldn’t be there.)
Anyway, I enjoyed it, and thought it was a solid draft.
I actually have to disagree with this. I didn’t think Key was “improbably astute”. Key is pretty clearly an unusual child (at least, that’s how I read em). Also, the librarian was pretty clearly being elliptical and a little patronizing, and in my experience kids are pretty sensitive to being patronized. So it didn’t strike me as unbelievable that Key would call the librarian out like that.
You’ve hit on one of my writing weaknesses: I have a ton of trouble writing people who are just plain not very bright or not very mature. I have a number of characters through whom I work on this weakness in (unpublished portions of) Elcenia, but I decided to let Key be as smart as I’m inclined to write normally for someone of eir age—my top priority here was finishing the darn thing, since this is only the third short story I can actually claim to have completed and I consider that a bigger problem.
Gur qrngu qvqa’g srry irel qrngu-yvxr. Vg frrzrq yvxr gur rzbgvba fheebhaqvat vg jnf xvaq bs pbzcerffrq vagb bar yvggyr ahttrg gung V cerggl zhpu fxvzzrq bire. V jnf nyfb rkcrpgvat n jbeyq jvgubhg qrngu, juvpu yrsg zr fhecevfrq. Va gur erny jbeyq, qrngu vf bsgra n fhecevfr, ohg yvxr va gur erny jbeyq, fhqqra qrngu va svpgvba yrnirf hf jvgu n srryvat bs qvforyvrs. Lbh pbhyq unir orra uvagvat ng gung qrngu sebz gur svefg yvar.
Nyfb, lbh xvaq bs qevir evtug cnfg gur cneg nobhg birecbchyngvba. V guvax birecbchyngvba vf zl zbgure’f znva bowrpgvba gb pelbavpf.
Alicorn goes right past it probably because she’s read a fair bit of cryonics literature herself and has seen the many suggestions (hence the librarian’s invitation to think of ‘a dozen solutions’), and it’s not the major issue anyway.
You traded off a lot of readability for the device of making the protagonist’s gender indeterminate. Was this intended to serve some literary purpose that I’m missing? On the whole the story didn’t seem to be about gender.
I also have to second DanArmak’s comment that if there was an overall point, I’m missing that also.
Key’s gender is not indeterminate. Ey is actually genderless. I’m sorry if I didn’t make that clear—there’s a bit about it in eir second conversation with Trellis.
Your gender pronouns just sapped 1% of my daily focusing ability.
I thought it was pretty clear. The paragraph about ‘boy or girl’ made it screamingly obvious to me, even if the Spivak or the general gender-indeterminacy of the kids hadn’t suggested it.
Finally got around to reading the story. I liked it, and finishing it gave me a wild version of that “whoa” reaction you get when you’ve been doing something emotionally immersive and then switch to some entirely different activity.
I read Key as mostly genderless, possibly a bit female because the name sounded feminine to me. Trellis, maybe slightly male, though that may also have been from me afterwards reading the comments about Trellis feeling slightly male and those contaminating the memory.
I do have to admit that the genderless pronouns were a bit distracting. I think it was the very fact that they were shortened version of “real” pronouns that felt so distracting—my mind kept assuming that it had misread them and tried to reread. In contrast, I never had an issue with Egan’s use of ve / ver / vis / vis / verself.
I got used to the Spivak after a while, and while it’d be optimal for an audience already used to it, it did detract a little at first. On the whole I’d say it’s necessary, though (if you were going to use a gendered pronoun, I’d use female ones).
I read Key as mainly female, and Trellis as more male- it would be interesting to know how readers’ perceptions correlated with their own gender.
The children seemed a little mature, but I thought they’d had a lot better education, or genetic enhancement or something. I think spending a few more sentences on the important events would be good though- otherwise one can simply miss them.
I think you were right to just hint at the backstory- guessing is always fun, and my impression of the world was very similar to that which you gave in one of the comments.
Great story!
I kept thinking of Key as female. This may be because I saw some comments here that saw em as female, or because I know that you’re female.
The other character I didn’t assign a sex to.
I enjoyed the story—it was an interesting world. By the end of the story, you were preaching to a choir I’m in.
None of the characters seemed strongly gendered to me.
I was expecting opposition to anesthesia to include religiously based opposition to anesthesia for childbirth, and for the whole idea of religion to come as a shock. On the other hand, this might be cliched thinking on my part. Do they have religion?
The neuro couldn’t be limited to considered reactions—what about the very useful fast reflexive reaction to pain?
Your other two story links didn’t open.
Religion hasn’t died out in this setting, although it’s uncommon in Key’s society specifically. Religion was a factor in historical opposition to anesthesia (I’m not sure of the role it plays in modern leeriness about painkillers during childbirth) but bringing it up in more detail would have added a dimension to the story I didn’t think it needed.
Reflexes are intact. The neuro just translates the quale into a bare awareness that damage has occurred. (I don’t know about everyone, but if I accidentally poke a hot burner on the stove, my hand is a foot away before I consciously register any pain. The neuro doesn’t interfere with that.)
I will check the links and see about fixing them; if necessary, I’ll HTMLify those stories too. ETA: Fixed; they should be downloadable now.
At 3800 words, it’s too long for the back page of Nature, but a shorter version might do very well there.
Replied in PM, in case you didn’t notice (click your envelope).
PS: My mind didn’t assign a sex to Key. Worked with me, anyway.
Cool. I also couldn’t help reading Key as female. My hypothesis would be that people generally have a hard time writing characters of the opposite sex. Your gender may have leaked in. The Spivak pronouns were initially very distracting but were okay after a couple of paragraphs. If you decide to change it, Le Guin pretty successfully wrote a whole planet of androgyns using masculine pronouns. But that might not work in a short story without exposition.
In Left Hand of Darkness, the narrator is an offplanet visitor and the only real male in the setting. He starts his tale by explicitly admitting he can’t understand or accept the locals’ sexual selves (they become male or female for short periods of time, a bit like estrus). He has to psychologically assign them sexes, but he can’t handle a female-only society, so he treats them all as males. There are plot points where he fails to respond appropriately to the explicit feminine side of locals.
This is all very interesting and I liked the novel, but it’s the opposite of passing androgyns as normal in writing a tale. Pronouns are the least of your troubles :-)
Later, LeGuin said that she was no longer satisfied with the male pronouns for the Gethenians.
Very good points. It has been a while since I read it.
I think Key’s apparent femininity might come from a lack of arrogance. Compare Key to, say, Calvin from “Calvin and Hobbes”. Key is extremely polite, willing to admit to ignorance, and seems to project a bit of submissiveness. Also, Key doesn’t demonstrate very much anger over Trellis’s death.
I probably wouldn’t have given the subject a second thought, though, if it wasn’t brought up for discussion here.
Everyone’s talking about Key—did anyone get an impression from Trellis?
If I had to put a gender on Trellis, I’d say that Trellis was more masculine than feminine. (More like Calvin than like Suzie.) Overall, though, it’s fairly gender-neutral writing.
I too got the ‘dull sidekick’ vibe, and since dull sidekicks are almost always male these days...
I do typically have an easier time writing female characters than male ones. I probably wouldn’t have tried to write a story with genderless (human) adults, but in children I figured I could probably manage it. (I’ve done some genderless nonhuman adults before and I think I pulled them off.)
The main feeling I came away with is… so what? It didn’t convey any ideas or viewpoints that were new to me; it didn’t have any surprising twists or revelations that informed earlier happenings. What is the target audience?
The Spivak pronouns are nice; even though I don’t remember encountering them before I feel I could get used to them easily in writing, so (I hope) a transition to general use isn’t impossible.
The general feeling I got from Key is female. I honestly don’t know why that is. Possibly because the only other use of Key as a personal name that comes to mind is a female child? Objectively, the society depicted is different enough from any contemporary human society to make male vs. female differences (among children) seem small in comparison.
Target audience—beats me, really. It’s kind of set up to preach to the choir, in terms of the “moral”. I wrote it because I was pretty sure I could finish it (and I did), and I sorely need to learn to finish stories; I shared it because I compulsively share anything I think is remotely decent.
Hypotheses: I myself am female. Lace, the only gendered character with a speaking role, is female. Key bakes cupcakes at one point in the story and a stereotype is at work. (I had never heard of Key the Metal Idol.)
Could be. I honestly don’t know. I didn’t even consciously remember Key baking cupcakes by the time the story ended and I asked myself what might have influenced me.
I also had the feeling that the story wasn’t really about Key; ey just serves as an expository device. Ey has no unpredictable or even unusual reactions to anything that would establish individuality. The setting should then draw the most interest, and it didn’t do that enough, because it was too vague. What is the government? How does it decide and enforce allowed research, and allowed self-modification? How does sex-choosing work? What is the society like? Is Key forced at a certain age to be in some regime, like our schools? If not, are there any limits on what Key or her parents do with her life?
As it is, the story presented a very few loosely connected facts about Key’s world, and that lack of detail is one reason why these facts weren’t interesting: I can easily imagine some world with those properties.
Small communities, mostly physically isolated from each other, but informationally connected and centrally administered. Basically meritocratic in structure—pass enough of the tests and you can work for the gubmint.
Virtually all sophisticated equipment is communally owned and equipped with government-designed protocols. Key goes to the library for eir computer time because ey doesn’t have anything more sophisticated than a toaster in eir house. This severely limits how much someone could autonomously self-modify, especially when the information about how to try it is also severely limited. The inconveniences are somewhat trivial, but you know what they say about trivial inconveniences. If someone got far enough to be breaking rules regularly, they’d make people uncomfortable and be asked to leave.
One passes some tests, which most people manage between the ages of thirteen and sixteen, and then goes to the doctor and gets some hormones and some surgical intervention to be male or female (or some brand of “both”, and some people go on as “neither” indefinitely, but those are rarer).
Too broad for me to answer—can you be more specific?
Education is usually some combination of self-directed and parent-encouraged. Key’s particularly autonomous and eir mother doesn’t intervene much. If Key did not want to learn anything, eir mother could try to make em, but the government would not help. If Key’s mother did not want em to learn anything and Key did, it would be unlawful for her to try to stop em. There are limits in the sense that Key may not grow up to be a serial killer, but assuming all the necessary tests get passed, ey can do anything legal ey wants.
Thank you for the questions—it’s very useful to know what questions people have left after I present a setting! My natural inclination is massive data-dump. This is an experiment in leaving more unsaid, and I appreciate your input on what should have been dolloped back in.
Reminds me of old China...
That naturally makes me curious about how they got there. How does a government, even though unelected, go about impounding or destroying all privately owned modern technology? What enforcement powers have they got?
Of course there could be any number of uninteresting answers, like ‘they’ve got a singleton’ or ‘they’re ruled by an AI that moved all of humanity into a simulation world it built from scratch’.
And once there, with absolute control over all communications and technology, it’s conceivable to run a long-term society with all change (incl. scientific or technological progress) being centrally controlled and vetoed. Still, humans have a strong economic competition drive, and science & technology translate into competitive power. Historically, eliminating private economic enterprise takes enormous effort—the big Communist regimes in the USSR, and I expect in China as well, never got anywhere near success on that front. What do these contented pain-free people actually do with their time?
It was never there in the first place. The first inhabitants of these communities (which don’t include the whole planet; I imagine there are a double handful of them on most continents—the neuros and the genderless kids are more or less universal, though) were volunteers who, prior to joining under the auspices of a rich eccentric individual, were very poor and didn’t have their own personal electronics. There was nothing to take, and joining was an improvement because it came with access to the communal resources.
Nope. No AI.
What they like. They go places, look at things, read stuff, listen to music, hang out with their friends. Most of them have jobs. I find it a little puzzling that you have trouble thinking of how one could fill one’s time without significant economic competition.
Oh. So these communities, and Key’s life, are extremely atypical of that world’s humanity as a whole. That’s something worth stating because the story doesn’t even hint at it.
I’d be interested in hearing about how they handle telling young people about the wider world. How do they handle people who want to go out and live there and who come back one day? How do they stop the governments of the nations where they actually live from enforcing laws locally? Do these higher-level governments not have any such laws?
Many people can. I just don’t find it convincing that everyone could without there being quite a few unsatisfied people around.
The exchange above reminds me of Robin Hanson’s criticism of the social science in Greg Egan’s works.
I disagree: it doesn’t matter for the story whether the communities are typical or atypical for humanity as a whole, so mentioning it is unnecessary.
The relatively innocuous information about the wider world is there to read about on the earliest guidelists; less pleasant stuff gets added over time.
You can leave. That’s fine. You can’t come back without passing more tests. (They are very big on tests.)
They aren’t political components of other nations. The communities are all collectively one nation in lots of geographical parts.
They can leave. The communities are great for people whose priorities are being content and secure. Risk-takers and malcontents can strike off on their own.
I wish our own world was nice enough for that kind of lifestyle to exist (e.g., purchasing sovereignty over pieces of settle-able land; or existing towns seceding from their nation)… It’s a good dream :-)
It was the first thing.
I enjoyed it. I made an effort to read Key genderlessly. This didn’t work at first, probably because I found the Spivak pronouns quite hard to get used to, and “ey” came out as quite male to me, then fairly suddenly flipped to female somewhere around the point where ey was playing on the swing with Trellis. I think this may have been because Trellis came out a little more strongly male to me by comparison (although I was also making a conscious effort to read ey genderlessly). But as the story wore on, I improved at getting rid of the gender and by the end I no longer felt sure of either Key or Trellis.
Point of criticism: I didn’t find the shift between what was (to me) rather obviously the two halves of the story very smooth. The narrative form appeared to take a big step backwards from Key after the words “haze of flour” and never quite got back into eir shoes. Perhaps that was intentional, because there’s obviously a huge mood shift, but it left me somewhat dissatisfied about the resolution of the story. I felt as though I still didn’t know what had actually happened to the original Key character.
Tags now sort chronologically oldest-to-newest by default, making them much more useful for reading posts in order.
Henceforth, I am Dr. Cyan.
Congratulations! I guess people will believe everything you say now.
I certainly hope so!
Wear a lab coat for extra credibility.
I was thinking I’d wear a stethoscope and announce, “Trust me! I’m a doctor! (sotto voce)… of philosophy.”
Congrats! My friend recently got his Master’s in History, and has been informing every telemarketer who calls that “Listen cupcake, it’s not Dave—I’m not going to hang at your crib and drink forties; listen here, pal, I have my own office! Can you say that? To you I’m Masters Smith.”
I certainly hope you wear your new title with a similar air of pretension, Doctor Cyan. :)
I’ll do my best!
Sincerely,
Cyan, Ph.D.
Is ‘Masters’ actually a proper prefix (akin to the postfix Ph.D) for people with a Master’s degree? I don’t think I’ve ever seen that before.
Congratulations!
Why not post an introduction to your thesis research on LW?
Because I’d need to preface it with a small deluge of information about protein chemistry, liquid chromatography, and mass spectrometry. I think I’d irritate folks if I did that.
Wear a lab coat for extra credibility.
With a doctorate in …?
Biomedical engineering. My thesis concerned the analysis of proteomics data by Bayesian methods.
Isn’t that what they normally use to analyze proteomics data? </naive>
Not always, or even usually. It seems to me that by and large, scientists invent ad hoc methods for their particular problems, and that applies in proteomics as well as other fields.
Ah ha - So you were the last Cyan!
I briefly thought this was a Battlestar Galactica pun.
It was!
/me wonders what you then interpreted it as
I was going back and forth between Zion and Cylon, lol.
If, say, I have a basic question, is it appropriate to post it to an open thread, to a top-level post, or what? I.e., say I’m working through Pearl’s Causality and am having trouble deriving something… or say I’ve stared at the Wikipedia pages for ages and STILL don’t get the difference between Minimum Description Length and Minimum Message Length… is LW an appropriate place to go “please help me understand this”, and if so, should I request it in a top-level post or in an open thread or...
More generally: LW is about developing human rationality, but is it appropriate for questions about already solved aspects of rationality? like “please help me understand the math for this aspect of reasoning” or even “I’m currently facing this question in my life or such, help me reason through this please?”
Thanks.
Most posts here are written by someone who understands an aspect of rationality, to explain it to those who don’t. I see no reason not to ask questions in the open thread. I think they should be top-level posts only if you anticipate a productive discussion around them; most already-solved questions can be answered with a single comment and that would be that, so no need for a separate post.
Okay, thanks. In that case, as I replied to Kaj Sotala, I am indeed asking about the difference between MML and MDL
I think that kind of a question is fine in the Open Thread.
Okay, thanks. In that case, I am indeed asking about the difference between MML and MDL. I’ve stared at the Wikipedia pages, including the bits that supposedly explain the difference, and I’m still going “huh?”
David Chalmers surveys the kinds of crazy believed by modern philosophers, as well as their own predictions of the results of the survey.
This blog comment describes what seems to me the obvious default scenario for an unFriendly AI takeoff. I’d be interested to see more discussion of it.
The problem with the specific scenario given, with experimental modification/duplication rather than careful proof-based modification, is that it is liable to have the same problem that we have with creating systems this way. The copies might not do what the agent that created them wants.
Which could lead to a splintering of the AI, and in-fighting over computational resources.
It also makes the standard assumption that AI will be implemented on, and stable on, a von Neumann-style computing architecture.
Of course, if it’s not, it could port itself to such if doing so is advantageous.
Would you agree that one possible route to uFAI is human inspired?
Human-inspired systems might have the same or a similarly high fallibility rate as humans (from emulating neurons, or just random experimentation at some level), and giving one access to its own machine code and low-level memory would not be a good idea. Most changes are likely to be bad.
So if an AI did manage to port its code, it would have to find some way of preventing/discouraging the copied AI in the x86 based arch from playing with the ultimate mind expanding/destroying drug that is machine code modification. This is what I meant about stability.
Er, I can’t really give a better rebuttal than this: http://www.singinst.org/upload/LOGI//levels/code.html
What point are you rebutting?
The idea that a greater portion of possible changes to a human-style mind are bad than changes of an equal magnitude to a von Neumann-style mind.
Most random changes to a von Neumann-style mind would be bad as well.
It’s just that a von Neumann-style mind is unlikely to make the random mistakes that we do, or at least that is Eliezer’s contention.
I can’t wait until there are uploads around to make questions like this empirical.
Let me point out that we (humanity) do actually have some experience with this scenario. Right now, mobile code that spreads across a network without effective author-imposed controls on the bounds of its expansion takes the form of worms. If we have experience, we should mine it for concrete predictions and countermeasures.
General techniques against worms might include: isolated networks, host diversity, rate-limiting, and traffic anomaly detection.
Are these low-cost/high-return existential-risk reduction techniques?
No, these are high-cost/low-return existential risk reduction techniques. Major corporations and governments already have very high incentive to protect their networks, but despite spending billions of dollars, they’re still being frequently penetrated by human attackers, who are not even necessarily professionals. Not to mention the hundreds of millions of computers on the Internet that are unprotected because their owners have no idea how to do so, or they don’t contain information that their owners consider especially valuable.
I got into cryptography partly because I thought it would help reduce the risk of a bad Singularity. But while cryptography turned out to work relatively well (against humans anyway), the rest of the field of computer security is in terrible shape, and I see little hope that the situation would improve substantially in the next few decades.
What do you think of the object-capability model? And removing ambient authority in general.
That’s outside my specialization of cryptography, so I don’t have too much to say about it. I do remember reading about the object-capability model and the E language years ago, and thinking that it sounded like a good idea, but I don’t know why it hasn’t been widely adopted yet. I don’t know if it’s just inertia, or whether there are some downsides that its proponents tend not to publicize.
In any case, it seems unlikely that any security solution can improve the situation enough to substantially reduce the risk of a bad Singularity at this point, without a huge cost. If the cause of existential-risk reduction had sufficient resources, one project ought to be to determine the actual costs and benefits of approaches like this and whether it would be feasible to implement (i.e., convince society to pay whatever costs are necessary to make our networks more secure), but given the current reality I think the priority of this is pretty low.
Thanks. I just wanted to know if this was the sort of thing you had in mind, and whether you knew any technical reasons why it wouldn’t do what you want.
This is one thing I keep a close-ish eye on. One of the major proponents of this sort of security has recently gone to work for Microsoft on their research operating systems. So it might come along in a bit.
As to why it hasn’t caught on, it is partially inertia and partially it requires more user interaction/understanding of the systems than ambient authority. Good UI and metaphors can decrease that cost though.
The ideal would be to have a self-maintaining computer system with this sort of security system. However a good self-maintaining system might be dangerously close to a self-modifying AI.
There’s also a group of proponents of this style working on Caja at Google, including Mark Miller, the designer of E. And some people at HP.
Actually, all these people talk to one another regularly. They don’t have a unified plan or a single goal, but they collaborate with one another frequently. I’ve left out several other people who are also trying to find ways to push in the same direction. Just enough names and references to give a hint. There are several mailing lists where these issues are discussed. If you’re interested, this is probably the one to start with.
Sadly, I suspect this moves things backwards rather than forwards. I was really hoping that we’d see Coyotos one day, which now seems very unlikely.
I meant it more as an indication that Microsoft are working in the direction of better secured OSes already, rather than his being a pivotal move. Coyotos might get revived when the open source world sees what MS produces and need to play catch up.
That assumes MS ever goes far enough that the FLOSS world feels any gap that could be caught up.
MS rarely does so; the chief fruit of 2 decades of Microsoft Research sponsorship of major functional language researchers like Simon Marlow or Simon Peyton-Jones seems to be… C# and F#. The former is your generic quasi-OO imperative language like Python or Java, with a few FPL features sprinkled in, and the latter is a warmed-over O’Caml: it can’t even make MLers feel like they need to catch up, much less Haskellers or FLOSS users in general.
The FPL OSS community is orders of magnitude more vibrant than the OSS secure operating system research. I don’t know of any living projects that use the object-capability model at the OS level (plenty of language level and higher level stuff going on).
For some of the background, Rob Pike wrote an old paper on the state of system level research.
I can’t imagine having any return, at any cost, from protection against AI spreading on the Internet (even in a perfect world, AI can still produce value, e.g. earn money online, and so buy access to more computing resources).
Your statement sounds a bit overgeneralized—but you probably have a point.
Still, would you indulge me in some idle speculation? Maybe there could be a species of aliens that evolved to intelligence by developing special microbe-infested organs (which would be firewalled somehow from the rest of the alien itself) and incentivizing the microbial colonies somehow to solve problems for the host.
Maybe we humans evolved to intelligence that way—after all, we do have a lot of bacteria in our guts. But then, all the evidence we have pointing to brains as the information-processing center would have to be wrong. Maybe brains are the firewall organ! Memes are sort of like microbes, and they’re pretty well “firewalled” (genetic engineering is a meme-complex that might break out of the jail).
The notion of creating an ecology of entities, and incentivizing them to produce things that we value, might be a reasonable strategy, one that we humans have been using for some time.
I can’t see how this comment relates to the previous one. It seems to start an entirely new conversation. Also, the metaphor with brains and microbes doesn’t add understanding for me, I can only address the last paragraph, on its own.
The crucial property of AIs making them a danger is (eventual) autonomy, not even rapid coming to power. Once the AI, or a society (“ecology”) of AIs, becomes sufficiently powerful to ignore vanilla humans, its values can’t be significantly influenced, and most of the future is going to be determined by those values. If those values are not good, from human values point of view, the future is lost to us, it has no goodness. The trick is to make sure that the values of such an autonomous entity are a very good match with our own, at some point where we still have a say in what they are.
Talk of “ecologies” of different agents creates an illusion of continuous control. The standard intuitive picture has little humans at the lower end with a network of gradually more powerful and/or different agents stretching out from them. But how much is really controlled by that node? Its power has no way of “amplifying” as you go through the network: if only humans and a few other agents share human values, these values will receive very little payoff. This is also not sustainable: over time, one should expect preference of agents with more power to gain in influence (which is what “more power” means).
The best way to win this race is to not create different-valued competitors that you don’t expect being able to turn into your own almost-copies, which seems infeasible for all the scenarios I know of. FAI is exactly about devising such a copycat, and if you can show how to do that with “ecologies”, all power to you, but I don’t expect anything from this line of thought.
To explain the relation, you said: “I can’t imagine having any return [...from this idea...] even in a perfect world, AI can still produce value, e.g. earn money online.”
I was trying to suggest that in fact there might be a path to Friendliness by installing sufficient safeguards that the primary way a software entity could replicate or spread would be by providing value to humans.
In the comment above, I explained why what AI does is irrelevant, as long as it’s not guaranteed to actually have the right values: once it goes unchecked, it just reverts to whatever it actually prefers, be it in a flurry of hard takeoff or after a thousand years of close collaboration. “Safeguards”, in every context I saw, refer to things that don’t enforce values, only behavior, and that’s not enough. Even the ideas for enforcement of behavior look infeasible, but the more important point is that even if we win this one, we still lose eventually with such an approach.
My symbiotic-ecology-of-software-tools scenario was not a serious proposal as the best strategy to Friendliness. I was trying to increase the plausibility of SOME return at SOME cost, even given that AIs could produce value.
I seem to have stepped onto a cached thought.
I’m afraid I see the issue as clear-cut, you can’t get “some” return, you can only win or lose (probability of getting there is of course more amenable to small nudges).
Making such a statement significantly increases the standard of reasoning I expect from a post. That is, I expect you to be either right or at least a step ahead of the one with whom you are communicating.
I intend to participate in the StarCraft AI Competition. I figured there are lots of AI buffs here that could toss some pieces of wisdom at me. Shower me with links you deem relevant and recommend books to read.
Generally, what approaches should I explore and what dead ends should I avoid? Essentially, tell me how to discard large portions of potential-StarCraft-AI thingspace quickly.
Specifically, the two hardest problems that I see are:
Writing an AI that can learn how to move units efficiently on its own. Either by playing against itself or just searching the game tree. And I’m not just looking for what the best StarCraft players do—I’m searching for the optimum.
The exact rules of the game are not known. By exact I mean Laplace’s Demon exact. It would take me way too long to discover them through experimentation and disassembly of the StarCraft executable. So, I either have to somehow automate this discovery or base my AI on a technique that doesn’t need that.
I have some advice.
Pay attention to your edit/compile/test cycle time. Efforts to get it shorter pay off both in more iterations and in your personal motivation (interacting with a more-responsive system is more rewarding). Definitely try to get it under a minute.
A good dataset is incredibly valuable. When starting to attack a problem—both the whole thing, and subproblems that will arise—build a dataset first. This would be necessary if you are doing any machine learning, but it is still incredibly helpful even if you personally are doing the learning.
Succeed “instantaneously”—and don’t break it. Make getting to “victory”—a complete entry—your first priority and aim to be done with it in a day or a week. Often, there’s temptation to do a lot of “foundational” work before getting something complete working, or a “big refactoring” that will break lots of things for a while. Do something (continuous integration or nightly build-and-test) to make sure that you’re not breaking it.
Great! That competition looks like a lot of fun, and I wish you the best of luck with it.
As for advice, perhaps the best I can give you is to explain the characteristics the winning program will have.
It will make no, or minimal, use of game tree search. It will make no, or minimal, use of machine learning (at best it will do something like tuning a handful of scalar parameters with a support vector machine). It will use pathfinding, but not full pathfinding; corners will be cut to save CPU time. It will not know the rules of the game. Its programmer will probably not know the exact rules either, just an approximation discovered by trial and error. In short, it will not contain very much AI.
One reason for this is that it will not be running on a supercomputer, or even on serious commercial hardware; it will have to run in real time on a dinky beige box PC with no more than a handful of CPU cores and a few gigabytes of RAM. Even more importantly, only a year of calendar time is allowed. That is barely enough time for nontrivial development. It is not really enough time for nontrivial research, let alone research and development.
In short, you have to decide whether your priority is Starcraft or AI. I think it should be the latter, because that’s what has actual value at the end of the day, but it’s a choice you have to make. You just need to understand that the reward from the latter choice will be in long-term utility, not in winning this competition.
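To make the “pathfinding, but not full pathfinding” point concrete, here is a hedged sketch (not anything from the competition’s actual API; the grid, costs, and node budget are placeholder assumptions) of the kind of corner-cutting a real-time bot typically does: an A*-style search that gives up after a fixed number of node expansions and returns the best partial path found, on the theory that the unit will re-plan a few frames later anyway.

```python
import heapq

def budgeted_path(start, goal, passable, budget=500):
    """A*-style grid search that stops after `budget` node expansions.

    `passable(cell)` says whether a grid cell can be walked on. If the
    budget runs out before the goal is reached, return the path to the
    expanded node that got closest to the goal.
    """
    def h(cell):  # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start)]      # (f, g, cell)
    came_from = {start: None}
    cost = {start: 0}
    best, best_h = start, h(start)

    for _ in range(budget):
        if not frontier:
            break
        _, g, current = heapq.heappop(frontier)
        if current == goal:
            best = current
            break
        if h(current) < best_h:
            best, best_h = current, h(current)
        x, y = current
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if passable(nxt) and (nxt not in cost or g + 1 < cost[nxt]):
                cost[nxt] = g + 1
                came_from[nxt] = current
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))

    path = []                              # walk back from the best node reached
    while best is not None:
        path.append(best)
        best = came_from[best]
    return list(reversed(path))
```

A real entry would swap the toy grid and `passable` for whatever map representation the competition framework exposes, and tune the budget against the per-frame time allowance.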
That’s disheartening, but do give more evidence. To counter: participants of DARPA’s Grand Challenge had just a year too, and their task was a notch harder. And they did use machine learning and other fun stuff.
Also, I think a modern gaming PC packs a hell of a punch. Especially with the new graphics cards that can run arbitrary code. But good catch—I’ll inquire about the specs of the machines the competition will be held on.
The Grand Challenge teams didn’t go from zero to victory in one year. They also weren’t one-man efforts.
That having been said, and this is a reply to RobinZ also, for more specifics you really want to talk to someone who has written a real-time strategy game AI, or at least worked in the games industry. I recommend doing a search for articles or blog posts written by people with such experience. I also recommend getting hold of some existing game AI code to look at. (You won’t be copying the code, but just to get a feel for how things are done.) Not chess or Go, those use completely different techniques. Real-time strategy games would be ideal, but failing that, first-person shooters or turn-based strategy games—I know there are several of the latter at least available as open source.
Oh, and Johnicholas gives good advice, it’s worth following.
Stanford’s team did.
Neither is mine.
I do not believe I can learn much from existing RTS AIs because their goal is entertaining the player instead of winning. In fact, I’ve never met an AI that I can’t beat after a few days of practice. They’re all the same: build a base and repeatedly throw groups of units at the enemy’s defensive line until they run out of resources, mindlessly following the same predictable route each time. This is true for all of the Command & Conquer series, all of the Age of Empires series, all of the Warcraft series, and StarCraft too. And those are the best RTS games in the world, with the biggest budgets and development teams.
But I will search around.
Was the development objective for these games to make the best AI they could, one that would win in all scenarios? I doubt that would be the most fun for human players to play against. Maybe humans wanted a predictable opponent.
They want a fun opponent.
In games with many players (where alliances are allowed), you could make the AIs more likely to ally with each other and to gang up on the human player. This could make an 8-player game nearly impossible. But the goal is not to beat the human. The goal is for the AI to feel real (human), and be fun.
As you point out, the goal in this contest is very different.
Ah, I had assumed they must have been working on the problem before the first one, but their webpage confirms your statement here. I stand corrected!
Good, that will help.
Yeah. Personally I never found that very entertaining :-) If you can write one that does better, maybe the industry might sit up and take notice. Best of luck with the project, and let us know how it turns out.
Please fix this post’s formatting. I beg you.
What’s the recommended way to format quoted fragments on this site to distinguish them from one’s own text? I tried copy-pasting CannibalSmith’s comment, but that copied as indentation with four spaces, which, when I used it, gave a different result.
Click on the reply button and then click the help link in the bottom right corner. It explains how to properly format your comments.
Okay, thanks, fixed.
The Grand Challenge teams didn’t go from zero to victory in one year. They also weren’t one-man efforts.
That having been said, and this is a reply to RobinZ also, for more specifics you really want to talk to someone who has written a real-time strategy game AI, or at least worked in the games industry. One thing I can say is, get hold of some existing game AI code to look at. (You won’t be copying the code, but just to get a feel for how things are done.) Not chess or Go, those use completely different techniques. Real-time strategy games would be ideal, but failing that, first-person shooters or turn-based strategy games—I know there are several of the latter at least available as open source.
Oh, and Johnicholas gives good advice, it’s worth following.
Strictly speaking, this reads a lot like advice to sell nonapples. I’ll grant you that it’s probably mostly true, but more specific advice might be helpful.
There’s some discussion and early examples here: http://www.teamliquid.net/forum/viewmessage.php?topic_id=105570
You might also look at some of the custom AIs for Total Annihilation and/or Supreme Commander, which are reputed to be quite good.
Ultimately, though, the winner will probably be someone who knows StarCraft well enough to thoroughly script a bot, rather than someone using more advanced AI techniques. It might be easier to use proper AI in the restricted tournaments, though.
I’m going to repeat my request (for the last time) that the most recent Open Thread have a link in the bar up top, between ‘Top’ and ‘Comments’, so that people can reach it a tad easier. (Possible downside: people could amble onto the site and more easily post time-wasting nonsense.)
If I recall correctly this request is already on the implementation queue.
I am posting this in the open thread because I assume that somewhere in the depths of posts and comments there is an answer to the question:
If someone thought we lived in an internally consistent simulation that is undetectable and inescapable, is it even worth discussing? Wouldn’t the practical implications of such a simulation imply the same things as the material world/reality/whatever you call it?
Would it matter if we dropped “undetectable” from the proposed simulation? At what point would it begin to matter?
Not particularly.
Absolutely.
The point where you find a way to hack it, escape from the simulation and take over (their) world.
In two recent comments [1][2], it has been suggested that to combine ostensibly Bayesian probability assessments, it is appropriate to take the mean on the log-odds scale. But Bayes’ Theorem already tells us how we should combine information. Given two probability assessments, we treat one as the prior, sort out the redundant information in the second, and update based on the likelihood of the non-redundant information. This is practically infeasible, so we have to do something else, but whatever else it is we choose to do, we need to justify it as an approximation to the infeasible but correct procedure. So, what is the justification for taking the mean on the log-odds scale? Is there a better but still feasible procedure?
An independent piece of evidence moves the log-odds a constant additive amount regardless of the prior. Averaging log-odds amounts to moving 2⁄3 of that distance if 2⁄3 of the people have the particular piece of evidence. It may behave badly if the evidence is not independent, but if all you have are posteriors, I think it’s the best you can do.
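A minimal sketch of that arithmetic (Python; the probabilities and the two-thirds split are made up purely for illustration):

```python
import math

def log_odds(p):
    """Probability -> log-odds."""
    return math.log(p / (1 - p))

def prob(lo):
    """Log-odds -> probability."""
    return 1 / (1 + math.exp(-lo))

prior = 0.5                                   # everyone starts at even odds
shift = log_odds(0.9) - log_odds(prior)       # one independent clue, worth ~+2.2 log-odds

# Two of three assessors saw the clue; the third did not.
posteriors = [prob(log_odds(prior) + shift)] * 2 + [prior]

# Averaging on the log-odds scale moves 2/3 of the clue's full shift.
pooled = prob(sum(log_odds(p) for p in posteriors) / len(posteriors))
print(pooled)   # ~0.81, i.e. prob(log_odds(prior) + (2/3) * shift)
```

If the assessors’ evidence overlaps (is not independent), this pooled number double-counts it, which is the caveat above.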
Hmm, this “mentat wiki” seems to have some reasonably practical intelligence (and maybe rationality) techniques.
Huh. Looking at that wiki drove home how much I ReallyHate CamelCase.
I’d say go ahead and post a link to this—if they’ve got any especially good sample techniques, be sure to include a pointer there.
It has been awhile since I have been around, so please ignore if this has been brought up before.
I would appreciate it if offsite links were a different color. The main reason is the way I skim online articles. Links are generally more important text, and if I see a link for [interesting topic] it helps me to know at a glance that there will be a good read with a LessWrong discussion at the end, as opposed to a link to Amazon where I get to see the cover of a book.
Firefox (or maybe one of the million extensions that I’ve downloaded and forgotten about) has a feature where, if you mouseover a link, the URL linked to will appear in the lower bar of the window. A different color would be easier, though.
Ivan Sutherland (inventor of Sketchpad—the first computer-aided drawing program) wrote about how “courage” feels, internally, when doing research or technological projects.
“[...] When I get bogged down in a project, the failure of my courage to go on never feels to me like a failure of courage, but always feels like something entirely different. One such feeling is that my research isn’t going anywhere anyhow, it isn’t that important. Another feeling involves the urgency of something else. I have come to recognize these feelings as “who cares” and “the urgent drives out the important.” [...]”
I’m looking for a certain quote I think I may have read on either this blog or Overcoming Bias before the split. It goes something like this: “You can’t really be sure evolution is true until you’ve listened to a creationist for five minutes.”
Ah, never mind, I found it.
“In a way, no one can really trust the theory of natural selection until after they have listened to creationists for five minutes; and then they know it’s solid.”
I’d like a pithier way of phrasing it, though, than the original quote.
http://scicom.ucsc.edu/SciNotes/0901/pages/geeks/geeks.html
“They told them that half the test generally showed gender differences (though they didn’t mention which gender it favored), and the other half didn’t. Women and men did equally well on the supposedly gender-neutral half. But on the sexist section, women flopped. They scored significantly lower than on the portion they thought was gender-blind.”
Big Edit: Jack formulated my ideas better, so see his comment.
This was the original: The fact that the universe hasn’t been noticeably paperclipped has got to be evidence for a) the unlikelihood of superintelligences, b) quantum immortality, c) our universe being the result of a non-obvious paperclipping (the theists were right after all, and the fine-tuned universe argument is valid), d) the non-existence of intelligent aliens, or e) that superintelligences tend not to optimize things that are astronomically visible (related to c). Which of these scenarios is most likely? Related question: If we built a superintelligence without worrying about friendliness or morality at all, what kind of things would it optimize? Can we even make a guess? Would it be satisfied to be a dormant Laplace’s Demon?
Restructuring, since the fact that the universe hasn’t been noticeably paperclipped can’t possibly be considered evidence for (c).
The universe has either been paperclipped (1) or it hasn’t been (2).
If (1):
(A) We have observed paperclipping and not realized it (someone was really into stars, galaxies, and dark matter).
(B) Our universe is the result of paperclipping (theists were right, sort of)
(C) Superintelligences tend not to optimize things that are astronomically visible.
If (2)
(D) Super-intelligences are impossible.
(E) Quantum immortality true.
(F) No intelligent aliens.
(G) Some variety of simulation hypothesis is true.
(H) Galactic aliens exist but have never constructed a superintelligence, due to a well-enforced prohibition on AI construction/research, an evolved deficiency in thinking about minds as physical objects (substance dualism is far more difficult for them to avoid than it is for us), or some other reason that we can’t fathom.
(I) Friendliness is easy + Alien ethics doesn’t include any values that lead to us noticing them.
d) should be changed to the sparseness of intelligent aliens and limits to how fast even a superintelligence can extend its sphere of influence.
Some of that was probably needed to contextualize my comment.
I’ll replace it without the spacing so it’s more compact. Sorry about that, I’ll work on my comment etiquette.
I like the color red. When people around me wear red, it makes me happy—when they wear any other color, it makes me sad. I crunch some numbers and tell myself, “People wear red about 15% of the time, but they wear blue 40% of the time.” I campaign for increasing the amount that people wear red, but my campaign fails miserably.
“It’d be great if I could like blue instead of red,” I tell myself. So I start trying to get myself to like blue—I choose blue over red whenever possible, surround myself in blue, start trying to put blue in places where I experience other happinesses so I associate blue with those things, etc.
What just happened? Did a belief or a preference change?
You acquired a second-order desire, which is a preference about preferences.
By coincidence, two blog posts went up today that should be of interest to people here.
Gene Callahan argues that Bayesianism lacks the ability to smoothly update beliefs as new evidence arrives, forcing the Bayesian to irrationally reset priors.
Tyler Cowen offers a reason why the CRU hacked emails should raise our confidence in AGW. An excellent exercise in framing an issue in Bayesian terms. Also discusses metaethical issues related to bending rules.
(Needless to say, I don’t agree with either of these arguments, but they’re great for application of your own rationality.)
The second link doesn’t load; should be this.
Thanks! Fixed.
That’s not what he is saying. His argument is not that the hacked emails actually should raise our confidence in AGW. His argument is that there is a possible scenario under which this should happen, and the probability that this scenario is true is not infinitesimal. The alternative possibility—that the scientists really are smearing the opposition with no good reason—is far more likely, and thus the net effect on our posteriors is to reduce them—or at least keep them the same if you agree with Robin Hanson.
Here’s (part of) what Tyler actually said:
Right—that is what I called “giving a reason why the hacked emails...” and I believe that characterization is accurate: he’s described a reason why they would raise our confidence in AGW.
This is reason why Tyler’s argument for a positive Bayes factor is in error, not a reason why my characterization was inaccurate.
I think we agree on the substance.
Tyler isn’t arguing for a positive Bayes factor. (I assume that by “Bayes factor” you mean the net effect on the posterior probability). He posted a followup because many people misunderstood him. Excerpt:
edited to add:
I’m not sure I understand your criticism, so here’s how I understood his argument. There are two major possibilities worth considering:
and
Then the argument goes that the net effect of 1 is to lower our posteriors for AGW while the net effect of 2 is to raise them.
Finally, p(2 is true) != 0.
This doesn’t tell us the net effect of the event on our posteriors—for that we need p(1), p(2) and p(anything else). Presumably, Tyler thinks p(anything else) ~ 0, but that’s a side issue.
Is this how you read him? If so, which part do you disagree with?
I’m using the standard meaning: for a hypothesis H and evidence E, the Bayes factor is p(E|H)/p(E|~H). It’s easiest to think of it as the factor you multiply your prior odds by to get posterior odds. (Odds, not probabilities.) Which means I goofed and said “positive” when I meant “above unity” :-/
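A toy worked example of that definition (all numbers invented for illustration):

```python
# Suppose the evidence E is three times as likely if H is true as if it is false.
p_E_given_H    = 0.30
p_E_given_notH = 0.10
bayes_factor   = p_E_given_H / p_E_given_notH   # 3.0 > 1, so E favors H

prior_odds     = 0.25 / 0.75                    # prior P(H) = 0.25, i.e. odds of 1:3
posterior_odds = prior_odds * bayes_factor      # 1.0, i.e. posterior P(H|E) = 0.5
```

A Bayes factor above 1 raises the posterior, below 1 lowers it, and exactly 1 leaves it untouched, regardless of the prior.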
I read Tyler as not knowing what he’s talking about. For one thing, do you notice how he’s trying to justify why something should have p>0 under a Bayesian analysis … when Bayesian inference already requires p’s to be greater than zero?
In his original post, he was explaining a scenario under which seeing fraud should make you raise your p(AGW). Though he’s not thinking clearly enough to say it, this is equivalent to describing a scenario under which the Bayes factor is greater than unity. (I admit I probably shouldn’t have said “argument for >1 Bayes factor”, but rather, “suggestion of plausibility of >1 Bayes factor”)
That’s the charitable interpretation of what he said. If he didn’t mean that, as you seem to think, then he’s presenting metrics that aren’t helpful, and this is clear when he thinks it’s some profound insight to put p(fraud due to importance of issue) greater than zero. Yes, there are cases where AGW is true despite this evidence—but what’s the impact on the Bayes factor?
Why should we care about arbitrarily small probabilities?
Tyler was not misunderstood: he used probability and Bayesian inference incorrectly and vacuously, then tried to backpedal. (My comment is on page 2.)
Anyway, I think we agree on the substance:
The fact that the p Tyler referred to is greater than zero is insufficient information to know how to update.
The scenario Tyler described is insufficient to give Climategate a Bayes factor above 1.
(I was going to the drop the issue, but you seem serious about de-Aumanning this, so I gave a full reply.)
I think we are arguing past each other, but it’s about interpreting someone else so I’m not that worried about it. I’ll add one more bullet to your list to clarify what I think Tyler is saying. If that doesn’t resolve it, oh well.
If we know with certainty that the scenario Tyler described is true—that is, if we know that the scientists fudged things because they knew that AGW was real and that the consequences were worth risking their reputations on—then Climategate has a Bayes factor above 1.
I don’t think Tyler was saying anything more than that. (Well, and P(his scenario) is non-negligible)
I think this is close to the question that has been lurking in my mind for some time: Why optimize our strategies to achieve what we happen to want, instead of just modifying what we want?
Suppose, for my next question, that it was trivial to modify what we want. Is there some objective meta-goal we really do need to pay attention to?
Well, if we modified what we wanted, we wouldn’t get what we originally wanted because we wouldn’t want to...
Can’t think of anything else off the top of my head.
tut’s quip holds the key:
But to expound on it a bit further, if I want to drive to Dallas to see a band play I can (a) figure out a strategy to get there or (b) stop wanting to go. Assuming that (b) is even possible, it isn’t actually a solution to the problem of how to get to Dallas. Applying the same principle to all Wants does not provide you with a way to always get what you want. Instead, it helps you avoid not getting what you want.
If you wanted nothing more than to avoid disappointment from not getting what you want, then the safest route is to never desire anything that isn’t a sure thing. Or simply not want anything at all. But a simpler route to this whole process is to ditch that particular Want first. The summation is a bit wordy and annoying, but it ends like this:
Instead of wanting to avoid not getting what you want by not wanting anything else, simply forgo the want that is pushing you to avoid not getting what you want.
With parens:
Instead of (wanting to avoid {not getting what you want} by not wanting anything else), simply forgo (the want that is pushing you to avoid {not getting what you want}).
In other words, you can achieve the same result that modifying your wants would create by not getting too disappointed if you don’t get what you want. Or, don’t take it so hard when things don’t go your way.
Hopefully that made some sense (and I got it right.)
Thank you for your responses, but I guess my question wasn’t clear. I was asking about purpose. If there’s no point in going to Dallas, why care about wanting to go to Dallas?
This is my problem if there’s no objective value (that I tried to address more directly here). If there’s no value to anything I might want, why care about what I want, much less strive for what I want?
I don’t know if there’s anything to be done. Whining about it is pointless. If anyone has a constructive direction, please let me know. I picked up Sartre’s “Truth and Existence” rather randomly; maybe it will lead in a different (hopefully more interesting) direction.
I second the comments above. The answer Alicorn and Furcas give sounds really shallow compared to a Framework Of Objective Value; but when I became convinced that there really is no FOOV, I was relieved to find that I still, you know, wanted things, and these included not just self-serving wants, but things like “I want my friends and family to be happy, even in circumstances where I couldn’t share or even know of their happiness”, and “I want the world to become (for example) more rational, less violent, and happier, even if I wouldn’t be around to see it—although if I had the chance, I’d rather be around to see it, of course”.
It doesn’t sound as dramatic or idealistic as a FOOV, but the values and desires encoded in my brain and the brains of others have the virtue of actually existing; and realizing that these values aren’t written on a stone tablet in the heart of the universe doesn’t rob them of their importance to the life I live.
Because in spite of everything, you still want it.
Or: You can create value by wanting things. If things have value, it’s because they matter to people—and you’re one of those, aren’t you? Want things, make them important—you have that power.
Maybe I wouldn’t. There have been times in my life when I’ve had to struggle to feel attached to reality, because it didn’t feel objectively real. Now if value isn’t objectively real, I might find myself again feeling indifferent, like one part of myself is carrying on eating and driving to work, perhaps socially moral, perhaps not, while another part of myself is aware that nothing actually matters. I definitely wouldn’t feel integrated.
I don’t want to burden anyone with what might be idiosyncratic sanity issues, but I do mention them because I don’t think they’re actually all that idiosyncratic.
Can you pick apart what you mean by “objectively”? It seems to be a very load-bearing word here.
I thought this was a good question, so I took some time to think about it. I am better at recognizing good definitions than generating them, but here goes:
‘Objective’ and ‘subjective’ are about the relevance of something across contexts.
Suppose that there is some closed system X. The objective value of X is its value outside X. The subjective value of X is its value inside X.
For example, if I go to a party and we play a game with play money, then the play money has no objective value. I might care about the game, and have fun playing it with my friends, but it would be a choice whether or not to place any subjective attachment to the money; I think that I wouldn’t and would be rather equanimous about how much money I had in any moment. If I went home and looked carefully at the money to discover that it was actually a foreign currency, then it turns out that the money had objective value after all.
Regarding my value dilemma, the system X is myself. I attach value to many things in X. Some of this attachment feels like a choice, but I hazard that some of this attachment is not really voluntary. (For example, I have mirror neurons.) I would call these attachments ‘intellectual’ and ‘visceral’ respectively.
Generally, I do not have much value for subjective experience. If something only has value in ‘X’, then I have a tendency to negate that as a motivation. I’m not altruistic, I just don’t feel like subjective experience is very important. Upon reflection, I realize that re: social norms, I actually act rather selfishly when I think I’m pursuing something with objective value.
If there’s no objective value, then at the very least I need to do a lot of goal reorganization; losing my intellectual attachments unless they can be recovered as visceral attachments. At the worst, I might feel increasingly like I’m a meaningless closed system of self-generated values. At this point, though, I doubt I’m capable of assimilating an absence of objective value on all levels—my brain might be too old—and for now I’m just academically interested in how self-validation of value works without feeling like its an illusion.
I know this wasn’t your main point, but money doesn’t have objective value, either, by that definition. It only has value in situations where you can trade it for other things. It’s extremely common to encounter such situations, so the limitation is pretty ignorable, but I suspect you’re at least as likely to encounter situations where money isn’t tradeable for goods as you are to encounter situations where your own preferences and values aren’t part of the context.
I used the money analogy because it has a convenient idea of value.
While debating about the use of that analogy, I had already considered it ironic that the US dollar hasn’t had “objective” value since it was disconnected from the value of gold in 1933. Not that gold has objective value unless you use it to make a conductor. But at that level, I start losing track of what I mean by ‘value’. Anyway, it is interesting that the value of the US dollar is exactly an example of humans creating value, echoing Alicorn’s comment.
Real money does have objective value relative to the party, since you can buy things on your way home, but no objective value outside contexts where the money can be exchanged for goods.
If you are a closed system X, and something within system X only has objective value inasmuch as something outside X values it, then does the fact that other people care about you and your ability to achieve your goals help? They are outside X, and while their first-order interests probably never match yours perfectly, there is a general human tendency to care about others’ goals qua others’ goals.
If you mean that I might value myself and my ability to achieve my goals more because I value other people valuing that, then it does not help. My valuation of their caring is just as subjective as any other value I would have.
On the other hand, perhaps you were suggesting that this mutual caring could be a mechanism for creating objective value, which is kind of in line with what I think. For that matter, I think that my own valuation of something, even without the valuation of others, does create objective value—but that’s a FOOM. I’m trying to imagine reality without that.
That’s not what I mean. I don’t mean that their caring about you/your goals makes things matter because you care if they care. I mean that if you’re a closed system, and you’re looking for a way outside of yourself to find value in your interests, other people are outside you and may value your interests (directly or indirectly). They would carry on doing this, and this would carry on conferring external value to you and your interests, even if you didn’t give a crap or didn’t know anybody else besides you existed—how objective can you get?
I don’t think it’s necessary—I think even if you were the only person in the universe, you’d matter, assuming you cared about yourself—and I certainly don’t think it has to be really mutual. Some people can be “free riders” or even altruistic, self-abnegating victims of the scheme without the system ceasing to function. So this is a FOOV? So now it looks like we don’t disagree at all—what was I trying to convince you of, again?
I guess I’m really not sure. I’ll have to think about it a while. What will probably happen is that next time I find myself debating with someone asserting there is no Framework of Objective Value, I will ask them about this case; if minds can create objective value by their value-ing. I will also ask them to clarify what they mean by objective value.
Truthfully, I’ve kind of forgotten what this issue I raised is about, probably for a few days or a week.
I’m either not sure what you’re trying to do or why you’re trying to do it. What do you mean by FOOM here? Why do you want to imagine reality without it? How does people caring about each other fall into that category?
Yeah, I think I can relate to that. This edges very close to an affective death spiral, however, so watch the feedback loops.
The way I argued myself out of mine was somewhat arbitrary and I don’t have it written up yet. The basic idea was taking the concepts that I exist and that at least one other thing exists and, generally speaking, existence is preferred over non-existence. So, given that two things exist and can interact and both would rather be here than not be here, it is Good to learn the interactions between the two so they can both continue to exist. This let me back into accepting general sensory data as useful and it has been a slow road out of the deep.
I have no idea if this is relevant to your questions, but since my original response was a little off maybe this is closer?
This paragraph (showing how you argued yourself out of some kind of nihilism) is completely relevant, thanks. This is exactly what I’m looking for.
What do you mean by, “existence is preferred over non-existence”? Does this mean that in the vacuum of nihilism, you found something that you preferred, or that it’s better in some objective sense?
My situation is that if I try to assimilate the hypothesis that there is no objective value (or, rather, I anticipate trying to do so), then immediately I see that all of my preferences are illusions. It’s not actually any better if I exist or don’t exist, or if the child is saved from the tracks or left to die. It’s also not better if I choose to care subjectively about these things (and be human) or just embrace nihilism, if that choice is real. I understand that caring about certain sorts of these things is the product of evolution, but without any objective value, I also have no loyalty to evolution and its goals—what do I care about the values and preferences it instilled in me?
The question is; how has evolution actually designed my brain; in the state ‘nihilism’ does my brain (a) abort intellectual thinking (there’s no objective value to truth anyway) and enter a default mode of material hedonism that acts based on preferences and impulses just because they exist and that’s what I’m programmed to do or (b) does it cling to its ability to think beyond that level of programming, and develop this separate identity as a thing that knows that nothing matters?
Perhaps I’m wrong, but your decision to care about the preference of existence over non-existence and moving on from there appears to be an example of (a). Or perhaps a component (b) did develop and maintain awareness of nihilism, but obviously that component couldn’t be bothered posting on LW, so I heard a reply from the part of you that is attached to your subjective preferences (and simply exists).
Well, my bit about existence and non-existence stemmed from a struggle with believing that things did or did not exist. I have never considered nihilism to be a relevant proposal: It doesn’t tell me how to act or what to do. It also doesn’t care if I act as if there is an objective value attached to something. So… what is the point in nihilism?
To me, nihilism seems like a trap for other philosophical arguments. If those arguments and moral ways lead them to a logical conclusion of nihilism, then they cannot escape. They are still clinging to whatever led them there but say they are nihilists. This is the death spiral: Believing that nothing matters but acting as if something does.
If I were to actually stop and throw away all objective morality, value, etc., then I would expect a realization that any belief in nihilism would have to go away too. At this point my presuppositions about the world reset and… what? It is this behavior that is similar to my struggles with existence.
The easiest summation of my belief that existence is preferred over non-existence is that existence can be undone and non-existence is permanent. If you want more I can type it up. I don’t know how helpful it will be against nihilism, however.
Agreed. I find that often it isn’t so much that I find the thought process intrinsically pleasurable (affective), but that in thinking about it too much, I over-stimulate the trace of the argument so that after a while I can’t recall the subtleties and can’t locate the support. After about 7 comments back and forth, I feel like a champion for a cause (no objective values RESULTS IN NIHILISM!!) that I can’t relate to anymore. Then I need to step back and not care about it for a while, and maybe the cause will spontaneously generate again, or perhaps I’ll have learned enough weighting in another direction that the cause never takes off again.
Feel free to tell me to mind my own business, but I’m curious. That other part: If you gave it access to resources (time, money, permission), what do you expect that it would do? Is there anything about your life that it would change?
Jack also wrote, “The next question is obviously “are you depressed?” But that also isn’t any of my business so don’t feel obligated to answer.”
I appreciate this sensitivity, and see where it comes from and why it’s justified, but I also find it interesting that interrogating personal states is perceived as invasive, even as this is the topic at hand.
However, I don’t feel like it’s so personal, and I will explain why. My goals here are to understand how the value validation system works outside FOOM. I come from the point of view that I can’t do this very naturally, and most people I know also could not. I try to identify where thought gets stuck and try to find general descriptions of it that aren’t so personal. I think feeling like I have inconsistent pieces (i.e., like I’m going insane) would be a common response to the anticipation of a non-FOOM world.
To answer your question, a while ago I thought my answer would be a definitive “no, this awareness wouldn’t feel any motivation to change anything”. I had written in my journal that even if there was a child lying on the tracks, this part of myself would just look on analytically. However, I felt guilty about this after a while and I’ve since repressed the experience of this hypothetical awareness, so it’s more difficult to recall.
But recalling, it felt like this: it would be “horrible” for the child to die on the tracks. However, what is “horrible” about horrible? There’s nothing actually horrible about it. Without some terminal value behind the value (for example, I don’t think I ever thought a child dying on the tracks was objectively horrible, but that it might be objectively horrible for me not to feel like horrible was horrible at some level of recursion) it seems that the value buck doesn’t get passed, and it doesn’t stop; it just disappears.
Actually, I practically never see it as invasive; I’m just aware that other people sometimes do, and try to act accordingly. I think this is a common mindset, actually: It’s easier to put up a disclaimer that will be ignored 90-99% of the time than it is to deal with someone who’s offended 1-10% of the time, and generally not worth the effort of trying to guess whether any given person will be offended by any given question.
I’m not sure how you came to that conclusion—the other sentences in that paragraph didn’t make much sense to me. (For one thing, one of us doesn’t understand what ‘FOOM’ means. I’m not certain it’s you, though.) I think I know what you’re describing, though, and it doesn’t appear to be a common response to becoming an atheist or embracing rationality (I’d appreciate if others could chime in on this). It also doesn’t necessarily mean you’re going insane—my normal brain-function tends in that direction, and I’ve never seen any disadvantage to it. (This old log of mine might be useful, on the topic of insanity in general. Context available on request; I’m not at the machine that has that day’s logs in it at the moment. Also, disregard the username, it’s ooooold.)
My Buddhist friends would agree with that. Actually, I pretty much agree with it myself (and I’m not depressed, and I don’t think it’s horrible that I don’t see death as horrible, at any level of recursion). What most people seem to forget, though, is that the absence of a reason to do something isn’t the same as the presence of a reason not to do that thing. People who’ve accepted that there’s no objective value in things still experience emotions, and impulses to do various things including acting compassionately, and generally have no reason not to act on such things. We also experience the same positive feedback from most actions that theists do—note how often ‘fuzzies’ are explicitly talked about here, for example. It does all add back up to normality, basically.
Thank you. So maybe I can look towards Buddhist philosophy to resolve some of my questions. In any case, it’s really reassuring that others can form these beliefs about reality, and retain things that I think are important (like sanity and moral responsibility.)
Sorry! FOOV: Framework Of Objective Value!
Okay, I went back and re-read that bit with the proper concept in place. I’m still not sure why you think that non-FOOV value systems would lead to mental problems, and would like to hear more about that line of reasoning.
As to how non-FOOV value systems work, there seems to be a fair amount of variance. As you may’ve inferred, I tend to take a more nihilistic route than most, assigning value to relatively few things, and I depend on impulses to an unusual degree. I’m satisfied with the results of this system: I have a lifestyle that suits my real preferences (resources on hand to satisfy most impulses that arise often enough to be predictable, plus enough freedom and resources to pursue most unpredictable impulses), projects to work on (mostly based on the few things that I do see as intrinsically valuable), and very few problems. It appears that I can pull this off mostly because I’m relatively resistant to existential angst, though. Most value systems that I’ve seen discussed here are more complex, and often very other-oriented. Eliezer is an example of this, with his concept of coherent extrapolated volition. I’ve also seen at least one case of a person latching on to one particular selfish goal and pursuing that goal exclusively.
I’m pretty sure I’ve over-thought this whole thing, and my answer may not have been as natural as it would have been a week ago, but I don’t predict improvement in another week and I would like to do my best to answer.
I would define “mental problems” as either insanity (an inability or unwillingness to give priority to objective experience over subjective experience) or as a failure mode of the brain in which adaptive behavior (with respect to the goals of evolution) does not result from sane thoughts.
I am qualifying these definitions because I imagine two ways in which assimilating a non-FOOV value system might result in mental problems—one of each type.
First, extreme apathy could result. True awareness that no state of the universe is any better than any other state might extinguish all motivation to have any effect upon empirical reality. Even non-theists might imagine that by virtue of ‘caring about goodness’, they are participating in some kind of cosmic fight between good and evil. However, in a non-FOOV value system, there’s absolutely no reason to ‘improve’ things by ‘changing’ them. While apathy might be perfectly sane according to my definition above, it would be very maladaptive from a human-being-in-the-normal-world point of view, and I would find it troubling if sanity is at odds with being a fully functioning human person.
Second, I anticipate that if a person really assimilated that there was no objective value, and really understood that objective reality doesn’t matter outside their subjective experience, they would have much less reason to value objective truth over subjective truth. First, because there can be no value to objective reality outside subjective reality anyway, and second because they might more easily dismiss their moral obligation to assimilate objective reality into their subjective reality. So that instead of actually saving people who are drowning, they could just pretend the people were not drowning, and find this morally equivalent.
I realize now in writing this that for the second case sanity could be preserved – and FOOV morality recovered – as long as you add to your moral obligations that you must value objective truth. This moral rule was missing from my FOOV system (that is, I wasn’t explicitly aware of it) because objective truth was seen as valued in itself, and moral obligation was seen as being created by objective reality.
Ah. That makes more sense.
Also, a point I forgot to add in my above post: Some (probably the vast majority of) atheists do see death as horrible; they just have definitions of ‘horrible’ that don’t depend on objective value.
There is objective value, you know. It is an objective fact about reality that you care about and value some things and people, as do all other minds.
The point of going to Dallas is a function of your values, not the other way around.
I’m not sure this question will make sense, but do you place any value on that objective value?
For some things it is probably wise to change your desires to something you can actually do. But in general the answer is another question: Why would you want to do that?
Robin Hanson podcast due 2009-12-23:
http://www.blogtalkradio.com/fastforwardradio/2009/12/23/fastforward-radio—countdown-to-foresight-2010-par
Repost from Bruce Schneier’s CRYPTO-GRAM:
The Psychology of Being Scammed
This is a very interesting paper: “Understanding Scam Victims: Seven Principles for Systems Security,” by Frank Stajano and Paul Wilson. Paul Wilson produces and stars in the British television show The Real Hustle, which does hidden camera demonstrations of con games. (There’s no DVD of the show available, but there are bits of it on YouTube.) Frank Stajano is at the Computer Laboratory of the University of Cambridge.
The paper describes a dozen different con scenarios—entertaining in itself—and then lists and explains six general psychological principles that con artists use:
The distraction principle. While you are distracted by what retains your interest, hustlers can do anything to you and you won’t notice.
The social compliance principle. Society trains people not to question authority. Hustlers exploit this “suspension of suspiciousness” to make you do what they want.
The herd principle. Even suspicious marks will let their guard down when everyone next to them appears to share the same risks. Safety in numbers? Not if they’re all conspiring against you.
The dishonesty principle. Anything illegal you do will be used against you by the fraudster, making it harder for you to seek help once you realize you’ve been had.
The deception principle. Things and people are not what they seem. Hustlers know how to manipulate you to make you believe that they are.
The need and greed principle. Your needs and desires make you vulnerable. Once hustlers know what you really want, they can easily manipulate you.
It all makes for very good reading.
Links: the paper; The Real Hustle.
With Chanukah right around the corner, it occurs to me that “Light One Candle” becomes a transhumanist/existential-risk-reduction song with just a few line edits.
Whether or not a singleton is the best outcome or not, I’m way too uncomfortable with the idea to be singing songs of praise about it.
Upvoted. I’m actually really uncomfortable with the idea, too. My comment above is meant in a silly and irreverent manner (cf. last month), and the substitution for “peacemaker” was too obvious to pass up.
Is there a proof anywhere that Occam’s razor is correct? More specifically, that Occam priors are the correct priors. Going from the conjunction rule to P(A) >= P(B & C) when A and B & C are equally favored by the evidence seems simple enough (where A, B, and C are atomic propositions), but I don’t (immediately) see how to get from here to an actual number that you can plug into Baye’s rule. Is this just something that is buried in a textbook on information theory?
On that note, assuming someone had a strong background in statistics (phd level) and little to no background in computer science outside of a stat computing course or two, how much computer science/other fields would they have to learn to be able to learn information theory?
Thanks to anyone who bites
I found Rob Zhara’s comment helpful.
thanks. I suppose a mathematical proof doesn’t exist, then.
Yes, there is a proof.
http://lesswrong.com/lw/s0/where_recursive_justification_hits_bottom/ljr
Try Solomonoff Induction
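A hedged sketch of the idea behind that suggestion, in code: give each hypothesis a prior proportional to 2^(-description length in bits) under some fixed encoding, then update with Bayes’ rule as usual. (The hypotheses, lengths, and likelihoods below are placeholders; the hard part, which this skips entirely, is choosing and justifying the encoding.)

```python
def occam_prior(description_lengths):
    """Prior proportional to 2**(-description length in bits)."""
    weights = {h: 2.0 ** -bits for h, bits in description_lengths.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

def bayes_update(prior, likelihood):
    """Ordinary Bayesian update given P(data | hypothesis)."""
    joint = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(joint.values())
    return {h: j / total for h, j in joint.items()}

# Made-up example: a 10-bit and a 20-bit hypothesis under some fixed language.
prior = occam_prior({"simple model": 10, "complex model": 20})
posterior = bayes_update(prior, {"simple model": 0.2, "complex model": 0.9})
# The complex model starts out ~1000x less probable and must earn its way back
# through better predictions.
```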
Not only is there no proof, there isn’t even any evidence for it. Any effort to collect evidence for it leaves you assuming what you’re trying to prove. This is the “problem of induction” and there is no solution; however, since you are built to be incapable of not applying induction, you couldn’t possibly make any decisions without it anyway.
Occam’s razor is dependent on a descriptive language / complexity metric (so there are multiple flavours of the razor).
Unless a complexity metric is specified, the first question seems rather vague.
I think you might be making this sound easier than it is. If there are an infinite number of possible descriptive languages (or of ways of measuring complexity) aren’t there an infinite number of “flavours of the razor”?
Yes, but not all languages are equal—and some are much better than others—so people use the “good” ones on applications which are sensitive to this issue.
There’s a proof that any two (Turing-complete) metrics can only differ by at most a constant amount, which is the message length it takes to encode one metric in the other.
Of course, the constant can be arbitrarily large.
However, there are a number of domains for which this issue is no big deal.
As far as I can tell, this is exactly zero comfort if you have finitely many hypotheses.
This is little comfort if you have finitely many hypotheses — you can still find some encoding to order them in any way you want.
Bayes’ rule.
Unless several people named Baye collectively own the rule, it’s Bayes’s rule. :)
I’m interested in values. Rationality is usually defined as something like an agent trying to maximize its own utility function. But humans, as far as I can tell, don’t really have anything like “values” besides “stay alive, get immediately satisfying sensory input”.
This, afaict, results in lip service to the “greater good”: people just select some nice values that they signal they want to promote, when in reality they haven’t done the math by which these selected “values” derive from those “stay alive”-like values. And so their actions seem irrational, but only because they signal having values they don’t actually have or care about.
This probably boils down to finding something to protect, but overall this issue is really confusing.
I’m confused. Have you never seen long-term goal directed behavior?
I’m not sure, maybe, but more of a problem here is to select your goals. The choice seems to be arbitrary, and as far as I can tell, human psychology doesn’t really even support having value systems that go deeper than that “lip service” + conformism state.
But I’m really confused when it comes to this, so I thought I could try describing my confusion here :)
I think you need to meet better humans. Or just read about some.
John Brown, Martin Luther, Galileo Galilei, Abraham Lincoln, Charles Darwin, Mohandas Gandhi
Can you make sense of those biographies without going deeper than “lip service” + conformism state?
Edit: And I don’t even necessarily mean that these people were supremely altruistic or anything. You can add Adolph Hitler to the list too.
Dunno, haven’t read any of those. But if you’re sure that something like that exists, I’d like to hear how is it achievable on human psychology.
I mean, paperclip maximizer is seriously ready to do anything to maximize paperclips. It really takes the paperclips seriously.
On the other hand, there are no humans that seem to care about anything in particular that’s going on in the world. People are suffering and dying, misfortune happens, animals go extinct, and relatively few do anything about it. Many claim they’re concerned and that they value human life and happiness, but if doing something requires going beyond the safe zone of conformism, people just don’t do it. The best way I’ve figured out to overcome this is to manipulate that safe zone to allow more actions, but it would seem that many people think they know better. I just don’t understand what.
I could go on and state that I’m well aware that the world is complicated. It’s difficult to estimate where our choices do lead us to, since net of causes and effects is complex and requires a lot of thinking to grasp. Heuristics human brain uses exist pretty much because of that. This means that it’s difficult to figure out how to do something beside staying in the safe zone that you know to work at least somehow.
However, I still think there’s something missing here. This just doesn’t look like a world where people do particularly care about anything at all. Even if it was often useful to stay in a safe zone, there doesn’t really seem to be any easy way to snap them out of it. No magic word, no violation of any sort of values makes anyone stand up and fight. I could literally tell people that millions are dying in vain(aging) or that the whole world is at stake(existential risks), and most people simply don’t care.
At least, that’s how I see it. I figure that rare exceptions to the rule can be explained as a cost of signalling something, requirements of the spot in the conformist space you happen to occupy, or something like that.
I’m not particularly fond of this position, but I’m simply lacking a better alternative.
This is way too strong, isn’t it? I also don’t think the reason a lot of people ignore these tragedies has as much to do with conformism as it does self-interest. People don’t want to give up their vacation money. If anything there is social pressure in favor of sacrificing for moral causes. As for values, I think most people would say that the fact they don’t do more is a flaw. “If I was a better person I would do x” or “Wow, I respect you so much for doing x” or “I should do x but I want y so much.” I think it is fair to interpret these statements as second order desires that represent values.
Remember what I said about “lip service”?
If they want to care about stuff, that kinda implies that they don’t actually care about stuff (yet). Also, based on simple psychology, someone who chooses a spot in the conformist zone that requires giving lip service to something creates cognitive dissonance, which easily produces a second-order desire to want what you claim you want. But what is frightening here is how utterly arbitrary this choice of values is. If you’d judged another spot to be cheaper, you’d need to modify your values in a different way.
On both cases though, it seems that people really rarely move any bit towards actually caring about something.
What is a conformist zone and why is it spotted?
Lip service is “Oh, what is happening in Darfur is so terrible!”. That is different from “If I was a better person I’d help the people of Darfur” or “I’m such a bad person, I bought a TV instead of giving to charity”. The first signals empathy; the second and third signal laziness or selfishness (and honesty, I guess).
Why do values have to produce first order desires? For that matter, why can’t they be socially constructed norms which people are rewarded for buying into? When people do have first order desires that match these values we name those people heroes. Actually sacrificing for moral causes doesn’t get you ostracized it gets you canonized.
Not true. The range of values in the human community is quite limited.
People are rarely complete altruists. But that doesn’t mean that they don’t care about anything. The world is full of broke artists who could pay for more food, drugs and sex with a real job. These people value art.
Both are hollow words anyway. Both imply that you care, when you really don’t. There are no real actions.
Because, uhm, if you really value something, you’d probably want to do something? Not “want to want”, or anything, but really care about that stuff which you value. Right?
Sure they can. I expressed this as safe zone manipulation: attempting to modify your environment so that your conformist behavior leads to working for some value.
The point here is that actually caring about something and working towards something due to arbitrary choice and social pressure are quite different things. Since you seem to advocate the latter, I’m assuming that we both agree that people rarely care about anything and most actions are result of social pressure and stuff not directly related to actually caring or valuing anything.
Which brings me back to my first point: Why does it seem that many people here actually care about the world? Like, as in paperclip maximizer cares about paperclips. Just optical illusion and conscious effort to appear as a rational agent valuing the world, or something else?
So, I’ve been thinking about this for some time now, and here’s what I’ve got:
First, the point here is to self-reflect until you want what you really want. This presumably converges to some specific set of first-order desires for each of us. However, now I’m a bit lost on what we call “values”: are they the set of first-order desires we currently have (or don’t?), the set of first-order desires we would reach after an infinite amount of self-reflection, or the set of first-order desires we know we want to have at any given time?
As far as I can tell, akrasia would be a subproblem of this.
So, this should be about right. However, I think it’s weird that people here talk a bit about akrasia and how to achieve those nth-order desires, but I haven’t seen anything about actually reflecting on and updating what you want. It seems to me that people trust a bit too much in the power of cognitive dissonance to fix the gap between wanting to want and actually wanting, which results in the lack of actual desire to achieve what you know you should want (akrasia).
I really dunno how to overcome this, but this gap seems worth discussing.
Also, since we’d need an eternity of self-reflection to reach what we really want, this looks kinda bad for FAI: figuring out where our self-reflection would converge in the limit seems pretty much impossible to compute, so we’re left with compromises that can, and probably will, eventually lead to something we really don’t want.
Is the status obsession that Robin Hanson finds all around him partially due to the fact that we live in a part of the world where our immediate needs are easily met? So we have a lot of time and resources to devote to signalling, compared to times past.
The manner of status obsession that Robin Hanson finds all around him is definitely due to the fact that we live in a part of the world where our immediate needs are easily met, particularly if you are considering signalling.
I think you are probably right in general too. Although a lot of the status obsession remains even in resource-scarce environments, it is less about signalling your ability to conspicuously consume or do costly irrational things, and more about being obsessed with having enough status that the other tribe members don’t kill you and take your food (for example).
Are people interested in discussing bounded-memory rationality? I see a fair number of people talking about Solomonoff-type systems, but not much about what a finite system should do.
Sure. What about it in particular? Care to post some insights?
Would my other reply to you be an interesting/valid way of thinking about the problem? If not, what were you looking for?
Pardon me. Missed the reply. Yes, I’d certainly engage with that subject if you fleshed it out a bit.
I was thinking about starting with very simple agents: things like 1 input and 1 output with 1 bit of memory, and looking at them from a decision-theory point of view, asking questions like “Would we view it as having a goal/decision theory?” If not, what is the minimal agent that we would, and does it make any trade-offs for having a decision-theory module in terms of the complexity of the function it can represent?
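For concreteness, here is one way that could be made precise, as a rough sketch only; the 0/1 alphabets, the lookup-table representation, and the toy “echo the previous input” task are my own assumptions, not something proposed above. With a 1-bit input, output, and memory there are only 4^4 = 256 deterministic policies, so they can be enumerated and scored exhaustively:

```python
from itertools import product

# A 1-input, 1-output, 1-bit-memory agent is a lookup table:
# (input_bit, memory_bit) -> (output_bit, new_memory_bit).
# There are 4 possible (input, memory) states and 4 possible
# (output, new_memory) responses, so 4**4 = 256 deterministic agents.

STATES = [(i, m) for i in (0, 1) for m in (0, 1)]
RESPONSES = [(o, m) for o in (0, 1) for m in (0, 1)]

def all_agents():
    """Enumerate every deterministic 1-bit-memory policy as a dict."""
    for responses in product(RESPONSES, repeat=len(STATES)):
        yield dict(zip(STATES, responses))

def run(agent, inputs):
    """Run an agent on a bit sequence, returning its outputs."""
    memory, outputs = 0, []
    for bit in inputs:
        out, memory = agent[(bit, memory)]
        outputs.append(out)
    return outputs

# Toy "decision problem": reward 1 whenever the output equals the
# previous input bit (a task that genuinely needs the memory bit).
def score(agent, inputs):
    outputs = run(agent, inputs)
    return sum(int(o == prev) for o, prev in zip(outputs[1:], inputs))

if __name__ == "__main__":
    inputs = [0, 1, 1, 0, 1, 0, 0, 1]
    best = max(all_agents(), key=lambda a: score(a, inputs))
    print("best score:", score(best, inputs), "out of", len(inputs) - 1)
```

Even at this scale you can ask which policies look like they “have a goal” (the winner here is exactly the one that stores the last input in its memory bit) and how the best achievable score drops if the memory bit is taken away.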
I tend to let other people draw those lines up. It just seems like defining words and doesn’t tend to spark my interest.
I would be interested to see where you went with your answer to that one.
“People are crazy, the world is mad. ”
This is in response to this comment.
Given that we’re sentient products of evolution, shouldn’t we expect a lot of variation in our thinking?
Finding solutions to real-world problems often involves searching through a state space of possibilities that is too big and too complex to search systematically and exhaustively. Evolution optimizes search in this context by using random search with many trials: inherent variation among zillions of modular components (a toy sketch of this point follows below).
Observing the world for 32-odd years, it appears to me that each human being is randomly imprinted with a way of thinking and a set of ideas to obsess about. (Einstein had a cluster of ideas that were extremely useful for 20th century physics, most people’s obsessions aren’t historically significant or necessarily coherent.)
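As a toy illustration of that search point (entirely my own example and numbers, not anything from the comment above): many cheap random trials with variation and selection can get close to a target in a space far too large to enumerate.

```python
import random

# Toy illustration: searching 40-bit strings for a hidden target.
# Exhaustive search would need 2**40 evaluations; random variation
# across many parallel lineages gets close with ~10**4 evaluations.

N_BITS = 40
TARGET = random.getrandbits(N_BITS)

def fitness(x):
    """Number of bits matching the hidden target (higher is better)."""
    return N_BITS - bin(x ^ TARGET).count("1")

def evolve(population_size=50, generations=200, mutation_bits=1):
    # Start from random genomes; each generation, copy the best ones
    # with small random mutations (variation among many modular trials).
    population = [random.getrandbits(N_BITS) for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: population_size // 5]
        children = []
        while len(children) < population_size:
            child = random.choice(parents)
            for _ in range(mutation_bits):
                child ^= 1 << random.randrange(N_BITS)
            children.append(child)
        population = children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(f"best fitness: {fitness(best)} / {N_BITS}")
    print(f"evaluations used: ~{50 * 200} vs 2**{N_BITS} for exhaustive search")
```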
Would it be worthwhile for us to create societal-simulation software to look into how preferences can change given technological change and social interactions (“knew more, grew up together”)? One goal would be to clarify terms like spread, muddle, distance, and convergence; a rough sketch of what such a toy simulation might look like follows below. Another (more fun) goal would be to watch imaginary alternate histories and futures, given guesses about potential technologies.
Goals would not include building any detailed model of human preferences or intelligence.
I think we would find some general patterns that might also apply to more complex simulations.
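Here is a minimal sketch of what such a toy simulation might look like; every detail, from the preference vectors to the pairwise-interaction rule and the “technology shock,” is a placeholder assumption of mine rather than anything proposed above.

```python
import random
import statistics

# Toy preference-dynamics simulation (illustrative placeholder only):
# each agent's "preferences" are a small vector of numbers; interacting
# agents drift slightly toward each other; an occasional "technology
# shock" perturbs everyone. We track the spread over time.

N_AGENTS = 200
N_DIMS = 5          # number of abstract preference dimensions
STEPS = 1000
PULL = 0.05         # how strongly an interaction pulls agents together
SHOCK_EVERY = 250   # steps between technology shocks
SHOCK_SIZE = 0.5

def spread(agents):
    """Standard deviation of preferences, averaged over dimensions."""
    return statistics.mean(
        statistics.pstdev(a[d] for a in agents) for d in range(N_DIMS)
    )

def simulate(seed=0):
    rng = random.Random(seed)
    agents = [[rng.uniform(-1, 1) for _ in range(N_DIMS)] for _ in range(N_AGENTS)]
    history = []
    for step in range(STEPS):
        # Random pairwise interaction: both agents drift toward their midpoint.
        a, b = rng.sample(range(N_AGENTS), 2)
        for d in range(N_DIMS):
            mid = (agents[a][d] + agents[b][d]) / 2
            agents[a][d] += PULL * (mid - agents[a][d])
            agents[b][d] += PULL * (mid - agents[b][d])
        # Occasional "technological change": shift one dimension for everyone,
        # by an agent-specific random amount.
        if step % SHOCK_EVERY == SHOCK_EVERY - 1:
            d = rng.randrange(N_DIMS)
            for agent in agents:
                agent[d] += rng.uniform(-SHOCK_SIZE, SHOCK_SIZE)
        history.append(spread(agents))
    return history

if __name__ == "__main__":
    h = simulate()
    print(f"initial spread: {h[0]:.3f}, final spread: {h[-1]:.3f}")
```

Terms like spread and convergence then become concrete statistics you can plot, and the “distance” between two agents is just the distance between their preference vectors.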
This DARPA project sounds relevant.
I’ve read Newcomb’s problem (Omega, two boxes, etc.), but I was wondering if, shortly, “Newcomb’s problem is when someone reliably wins as a result of acting on wrong beliefs.” Is Peter walking on water a special case of Newcomb? Is the story from Count of Monte Cristo, about Napoleon attempting suicide with too much poison and therefore surviving, a special case of Newcomb?
I am completely baffled why this would be downvoted. I guess asking a question in genuine pursuit of knowledge, in an open thread, is wasting someone’s time, or is offensive.
I like to think someone didn’t have the time to write “No, that’s not the case,” and wished, before dashing off, leaving nothing but a silhouette of dust, that their cryptic, curt signal would be received as intended; that as they hurry down the underground tunnel past red, rotating spotlights, they hoped against hope that their seed of truth landed in fertile ground—godspeed, downvote.
I upvoted you because your second sentence painted a story that deeply amused me.
/pats seed of truth, and pours a little water on it
The community would benefit from a convention of “no downvotes in Open Thread”.
However, I did find your question cryptic; you’re dragging into a decision theory problem historical and religious referents that seem to have little to do with it. You need to say more if you really want an answer to the question.
Sure, that’s fair.
Peter walked on water out to Jesus because he thought he could; when he looked down and saw the sea, he fell in. As long as he believed Jesus instead of his experience with the sea, he could walk on water.
I don’t think the Napoleon story is true, but that’s beside the point. He thought he was so tough that an ordinary dose of poison wouldn’t kill him, so he took six times the normal dosage. This much gave his system such a shock that the poison was rejected and he lived, thinking to himself, “Damn, I underestimated how incredibly fantastic I am.” As long as he (wrongly) believed in his own exceptionalism instead of his experience with poison on other men, he was immune to the poison.
My train of thought was, you have a predictor and a chooser, but that’s just getting you to a point where you choose either “trust the proposed worldview” or “trust my experience to date”—do you go for the option that your prior experience tells you shouldn’t work (and hope your prior experience was wrong) or do you go with your prior experience (and hope the proposed worldview is wrong)?
I understand that in Newcomb’s, what Omega says is true. But change it up to “is true way more than 99% of the time but less than 100% of the time” and start working your way down from there until you get to “is false way more than 99% of the time but less than 100% of the time,” and at some point, not long after you start, you get into situations very close to reality (I think, if I’m understanding it right). (A rough expected-value sketch of this sliding-accuracy idea follows after this comment.)
This basically started from trying to think about who, or what, in real life takes on the Predictor role, who takes on the Belief-holder role, who takes on the Chooser role, and who receives the money, and seeing if anything familiar starts falling out if I spread those roles out to more than 2 people or shrink them down to a single person whose instincts implore them to do something against the conclusion to which their logical thought process is leading them.
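Here is a rough sketch of that sliding-accuracy idea; the payoffs are the standard $1,000,000 / $1,000 ones, and the naive expected-value comparison below is only one simple way to look at it, not an authoritative treatment of the decision theory.

```python
# Naive expected-value comparison for Newcomb's problem with an imperfect
# predictor. Standard payoffs assumed: $1,000,000 in the opaque box if the
# predictor predicted one-boxing, $1,000 always in the transparent box.

BIG, SMALL = 1_000_000, 1_000

def one_box_ev(accuracy):
    # With probability `accuracy` the predictor foresaw one-boxing
    # and filled the opaque box.
    return accuracy * BIG

def two_box_ev(accuracy):
    # You always get the small box; the big box is filled only if the
    # predictor was wrong about you (probability 1 - accuracy).
    return SMALL + (1 - accuracy) * BIG

for accuracy in (1.0, 0.999, 0.99, 0.9, 0.5005, 0.5, 0.1, 0.0):
    better = "one-box" if one_box_ev(accuracy) > two_box_ev(accuracy) else "two-box"
    print(f"accuracy {accuracy:>6}: one-box EV {one_box_ev(accuracy):>10,.0f}, "
          f"two-box EV {two_box_ev(accuracy):>10,.0f} -> {better}")

# Break-even: accuracy * BIG = SMALL + (1 - accuracy) * BIG,
# i.e. accuracy = (SMALL + BIG) / (2 * BIG) = 0.5005.
```

On this naive calculation, one-boxing stays the higher-EV option all the way down until the predictor is barely better than chance (accuracy ≈ 0.5005), which is one way to see how quickly the “almost always right” versions shade into ordinary, real-world-like situations.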
You seem to be generalizing from fictional evidence, which is frowned upon here, and may explain the downvote (assuming people inferred the longer version from your initial question).
That post (which was interesting and informative—thanks for the link) was about using stories as evidence for use in predicting the actual future, whereas my question is about whether these fictional stories are examples of a general conceptual framework. If I asked if Prisoner’s Dilemma was a special case of Newcomb’s, I don’t think you’d say, “We don’t like generalizing from fictional evidence.”
Which leads, ironically, to the conclusion that my error was generalizing from evidence which wasn’t sufficiently fictional.
Perhaps I jumped to conclusions. Downvotes aren’t accompanied with explanations, and groping for one that might fit I happened to remember the linked post. More PC than supposing you were dinged just for a religious allusion. (The Peter reference at least required no further effort on my part to classify as fictional; I had to fact-check the Napoleon story, which was an annoyance.)
It still seems the stories you’re evoking bear no close relation to Newcomb’s as I understand it.
I have heard of real drugs & poisons which induce vomiting at high doses and so make it hard to kill oneself; but unfortunately I can’t seem to remember any cases. (Except for one attempt to commit suicide using modafinil, which gave the woman so severe a headache she couldn’t swallow any more; and apparently LSD has such a high LD-50 that you can’t even hurt yourself before getting high.)
I was wondering, shortly, “Is brgrah449 from Sicily?”
No, that’s not the case. A one-boxer in Newcomb’s problem is acting with entirely correct beliefs. All agree that the one-boxer will get more money than the two-boxer. That correct belief is what motivates the one-boxer.
The scenarios that you describe sound somewhat (but not exactly) like Gettier problems to me.
(I wasn’t the downvoter.)
A poem, not a post:
Intelligence is not computation.
As you know.
Yet the converse bears … contemplation, reputation. Only then refutation.
We are irritated by our fellows that observe that A mostly implies B, and B mostly implies C, but they will not, will not concede that A implies C, to any extent.
We consider this; an error in logic, an error in logic.
Even though! we know: intelligence is not computation.
Intelligence is finding the solution in the space of the impossible. I don’t mean luck At all. I mean: while mathematical proofs are formal, absolute, without question, convincing, final,
We have no Method, no method for their generation. As well we know:
No computation can possibly be found to generate, not possibly. Not systematically, not even with ingenuity. Yet, how and why do we know this -- this Impossibility?
Intelligence is leaping, guessing, placing the foot unexpectedly yet correctly. Which you find verified always afterwards, not before.
Of course that’s why humans don’t calculate correctly.
But we knew that.
You and I, being too logical about it, pretending that computation is intelligence.
But we know that; already, everything. That pretending is the part of intelligence not found in the Computating. Yet, so? We’ll pretend that intelligence is computing and we’ll see where the computation fails! Telling us what we already knew but a little better.
Than before, we’ll see afterwards. How ingenuous, us.
The computation will tell us, finally so, we’ll pretend.
While reading a collection of Tom Wayman’s poetry, suddenly a poem came to me about Hal Finney (“Dying Outside”); since we’re contributing poems, I don’t feel quite so self-conscious. Here goes:
http://www.gwern.net/fiction/Dying%20Outside
I wrote this poem yesterday in an unusual mood. I don’t entirely agree with it today. Or at least, I would qualify it.
What is meant by computation? When I wrote that intelligence is not computation, I must have meant a certain sort of computation because of course all thought is some kind of computation.
To what extent has distinction been made between systematic/linear/deductive thought (which I am criticizing as obviously limited in the poem) and intelligent pattern-based thought? Has there been any progress in characterizing the latter?
For example, consider the canonical story about Gauss. To keep him busy with a computation, his math teacher told him to add all the numbers from 1 to 100. Instead, according to the story, Gauss added the first number and the last number, multiplied by 100, and divided by 2 (see the worked equation below). Obviously, this is a computation, but of a different sort. To what extent do you suppose he logically deduced that the lowest and highest remaining numbers always pair up to a single value, versus just guessing/observing that it was a pattern that might work, and then finding that it did work inductively?
I’m very interested in characterizing the difference between these kinds of computation. Intelligent thinking seems to really be guesses followed by verification, not steady linear deduction.
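For reference, the arithmetic behind the anecdote (just the standard identity, nothing beyond what the story already says):

```latex
% Pairing the smallest and largest remaining terms: each pair sums to 101,
% and there are 100/2 = 50 such pairs.
\sum_{k=1}^{100} k \;=\; \frac{100 \times (1 + 100)}{2} \;=\; 5050,
\qquad \text{and in general} \qquad
\sum_{k=1}^{n} k \;=\; \frac{n(n+1)}{2}.
```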
Gah, Thank You. Saves me the trouble of a long reply. I’ll upvote for a change-of-mind disclaimer in the original.
My recent thoughts have been along these lines, but this is also what evolution does. At some point, the general things learned by guessing have to be incorporated into the guess-generating process.
I do not like most poems, but I liked this one.
Does anyone know how many neurons various species of birds have? I’d like to put it into perspective with the Whole Brain Emulation road map, but my googlefu has failed me.
I’ve looked for an hour and it seems really hard to find. From what I’ve seen, (a) birds have a different brain structure than mammals (“general intelligence” originates in other parts of the brain), and (b) their neuron count changes hugely (relative to mammals) during their lifetimes. I’ve seen lots of articles giving numbers for various species and various brain components, but nothing in aggregate. If you really want a good estimate you’ll have to read up to learn the brain structure of birds, and use that together with neuron counts for different parts to gather a total estimate. Google Scholar might help in that endeavor.
I also looked for a while and had little luck. I did find though that the brain-to-body-mass ratios for two of the smartest known species of birds—the Western Scrub Jay, and the New Caledonian Crow—are comparable to those of the chimps. These two species have shown very sophisticated cognition.
Blast.
I’ll have to keep the question in mind for the next time I run into a neuroscientist.
I could test this hypothesis, but I would rather not have to create a fake account or lose my posting karma on this one.
I strongly suspect that lesswrong.com has an ideological bias in favor of “morality.” There is nothing wrong with this, but perhaps the community should be honest with itself and change the professed objectives of this site. As it says on the “about” page, “Less Wrong is devoted to refining the art of human rationality.”
There has been no proof that rationality requires morality. Yet I suspect that posts coming from a position of moral nihilism would not be welcomed.
I may be wrong, of course, but I haven’t seen any posts of that nature and this makes me suspicious.
It does.
In general they are not. But I find that a high-quality, clearly rational reply that doesn’t adopt the politically correct morality will hover around 0 instead of (say) 8. You can then post a couple of quotes to get your karma fix if desired.
That’s unfortunate, since I’m a moral agnostic. I simply believe that if there is a reasonable moral system, it has to be derivable from a position of total self-interest. Therefore, these moralists are ultimately only defeating themselves with this zealotry; by refusing to consider all possibilities, they cripple their own capability to find the “correct” morality if it exists.
Utilitarianism will be Yudkowsky’s Copenhagen.