Trying to think of common locations where Conspiracies could be held, I first thought of schools, hospitals, churches, parks, museums, libraries, theaters and auditoriums. But those are all the wrong answer. They pale in awesomeness next to the true solution.
It should be underground. It should have an aura of sacredness. It should be heavily decorated with reminders of the human brain. It should be … a catacomb! Sweet glucose, why don’t we make labyrinthine tombs beneath our cities anymore?
And sewers do not count! I am not reading about Dirichlet-multinomial distributions by candlelight next to a river of human waste. Never again!
There’s an idea I’ve been kicking around lately, which is being into things.
Over the past couple of weeks I’ve been putting together a bug-out bag. This essentially involves the over-engineering of a general solution to an ambiguous set of problems that are unlikely to occur. On a strictly pragmatic basis, it is not worth as much of my time as I am spending to do this, but it is so much fun.
I’m deriving an extraordinary amount of recreational pleasure from doing more work than is necessary on this project, and that’s fine. I acknowledge that up to a point I’m doing something useful and productive, and past that point I’m basically having fun.
I’ve noticed a failure mode in other similarly motivated projects and activities: not acknowledging this. I first noticed the parallel when thinking about Quantified Self, and how people who are into QS underestimate the obstacles and personal costs surrounding what they’re doing because they gain a recreational surplus from doing it.
I suspect, especially among productivity-minded people, there’s a desire to ringfence the amount of effort one wants to expend on a project, and justify all that effort as being absolutely necessary and virtuous and pragmatic. While I certainly don’t think there’s anything wrong with putting a bit of extra effort into a project because you enjoy it, awareness of one’s motivations is certainly something we want to have here.
I am not sure, but I think it may depend somewhat on the project type. There are certain projects where it seems like focusing on being aware of what you are doing makes it harder to do than just focusing on doing it. For instance, while writing this post, I periodically noticed myself concentrating on typing, and it seemed to make typing harder than when I am just typing.
So it may be that what they are doing when setting that up is putting themselves in a more flow-minded mood about Quantified Self, and since flow is an enjoyable state, it usually ends up working out well.
But I suppose it is also possible to be in a flow-minded mood about something for longer than necessary, which you would think would be called overflow, and which would seem to link with what you are mentioning, but that doesn’t actually seem to be the name of that failure mode.
I don’t associate this with flow at all. I’m certainly not in a flow-state when gleefully considering evacuation plans. I’m just enjoying nerding out about it.
Hmm. If I’m calling it the wrong thing, then, maybe I should give an example of me enjoying nerding out to see what I should be calling it.
If I were to step back and think “It doesn’t actually matter what the specific stats are for human versions of My Little Pony characters in a D&D 3.5 setting; no one is going to be judging this for accuracy,” then I’m no longer having fun making their character sheets, and I wouldn’t have bothered.
But if I’m just making the character sheets, then it is fun, and I’m just enjoying nerding out on something incredibly esoteric. And then my wife joined in while I was attempting to consider Applejack’s bonus feats, and she wanted to make a 7th character so she could participate, so we looked up the name of that one human friend who hung out with the My Little Pony characters in an earlier show (Megan), and then we pulled out more D&D books and she came up with neat campaign ideas.
And then I realized we had spent hours together working on this idea and the time just zipped by because we were intently focused on enjoying a nerdy activity together.
It seems like a flow state to me, but I would not be surprised if I should either call it something else or if your experience with evacuation plans just felt entirely different.
This doesn’t tally with my understanding of “flow”, but I may very well have some funny ideas about it myself. I’d simply term that becoming engrossed in what I’m doing.
This is sort of beside the point. I don’t think anything remotely resembling a flow-state is necessary for what I’m talking about. The term “being into things” was meant to refer to general interest in the subject, rather than any kind of mental state.
Possible akrasia hack: random reminders during the day to do specific or semi-specific things.
Personally, I find myself getting endlessly sucked into reading or the internet or watching shows very easily, neglecting simple and swift tasks simply because no moment occurs to me to do them. Using an iPhone app, I have reminders that fire at random times 4 times a day and say things like “Brief chores” or “Exercise”; they seem to have made it a lot easier to always have clean dishes/clothes and to get some exercise in every day.
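For anyone who wants to experiment with the same idea without a phone app, here is a minimal sketch in Python; the prompt list and the 9:00–21:00 window are my own illustrative assumptions, not the commenter’s actual app or settings.

```python
# Minimal sketch of random daily reminders; prompts and time window are
# illustrative assumptions, not the commenter's actual app or settings.
import datetime
import random
import time

PROMPTS = ["Brief chores", "Exercise", "Dishes", "Ten minutes of tidying"]

def todays_reminder_times(n=4, start_hour=9, end_hour=21):
    """Pick n distinct random minutes between start_hour and end_hour today."""
    today = datetime.date.today()
    minutes = random.sample(range(start_hour * 60, end_hour * 60), n)
    return sorted(
        datetime.datetime.combine(today, datetime.time(m // 60, m % 60))
        for m in minutes
    )

def run():
    for when in todays_reminder_times():
        delay = (when - datetime.datetime.now()).total_seconds()
        if delay > 0:
            time.sleep(delay)  # wait until the scheduled moment
        print(f"{when:%H:%M} reminder: {random.choice(PROMPTS)}")

if __name__ == "__main__":
    run()
```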
Akrasia-related but not yet on LessWrong. Perhaps someone will incorporate these in the next akrasia round-up:
1) Fogg model of behavior. Fogg’s methods beat akrasia because he avoids dealing with motivation. Like “execute by default”, you simply make a habit by tacking some very easy-to-perform task onto something you already do. Here is a slideshare that explains his “tiny habits” and an online, guided walkthrough course. When I took the course, I did the actions each day, and usually more than those actions. (I.e., every time I sat down, I plugged in my drawing tablet, which got me doing digital art basically automatically unless I could think of something much more important to do.) For those who don’t want to click through, here are example “tiny habits” which over time can become larger habits:
“After I brush, I will floss one tooth.”
“After I start the dishwasher, I will read one sentence from a book.”
“After I walk in my door from work, I will get out my workout clothes.”
“After I sit down on the train, I will open my sketch notebook.”
“After I put my head on the pillow, I will think of one good thing from my day.”
“After I arrive home, I will hang my keys up by the door.”
2) The Practicing Mind. The author confronts the relatively mundane nature of most productive human activity. He works on pianos for a living, doing some of the most repetitive work imaginable. As he says: “out of sheer survival, I began to develop an ability to get lost in the process of doing something.” In general, I think the book details the way a person ought to approach work: being “present” with the work, focused on the process and not the product, being evaluative and not judgmental about work, to not try too hard but instead let yourself work.
I’ll share one concrete suggestion. Work slowly. “[S]lowness… is a paradox. What I mean by slow is that you work at a pace that allows you to pay attention to what you are doing. This pace will differ according to your personality and the task.… If you are washing the car, you are moving the sponge in your hand at a slow enough pace that allows you to observe your actions in detail as you clean the side of your car. This will differ from, say, the slow pace at which you will learn a new computer program. If you are aware of what you are doing and you are paying attention to what you are doing, then you are probably working at the appropriate pace. The paradox of slowness is that you will find you accomplish the task more quickly with less effort because you are not wasting energy. Try it and you will see.” He gives the example of working slowly during his work and paradoxically finishing sooner. I can’t comment on the time aspect personally, but at least giving myself permission to work slowly increases the likelihood of paying deep attention to a project as well as not stressing.
I don’t know Thomas Sterner or have any business with the guy. Same thing for Fogg, and his online course is free since he’s doing it to collect data. So it’s not an advertisement in that sense.
Akrasia/procrastination is one of my main interests so I wanted to share some info that I hadn’t seen on the site but helped me.
Suppose that retrieval testing helps future retention more than concept diagrams or re-reading. I’ll go further and suppose that it’s the stress of trying to recall imperfectly remembered information (for grade, reward, competition, etc. - with some carrot-and-stick stuff going on) that really helps it take root. What conclusions might flow from that?
Coursera-style short quizzes on the 5 minutes of material just covered are useful to check understanding, but do next to nothing for retention.
Homework is useful, but the stress it creates may be only indirectly related to the material we want to retain: lots of homework is solved by meta-guessing, tinkering w/o understanding, etc. What kind of homework would be best to cause us to recall the material systematically under stress?
When watching a live or video lecture, it may be less useful to write detailed notes (in the hope that it’ll help retention), and more useful to wait until the end of the lecture (or even a few hours/days more?) and then write a detailed summary in your own words, trying to make sure all salient points are covered, and explicitly testing yourself on that somehow (e.g. against lecture slides if available).
“The best way to learn is to try to teach.” I thought that when I try to organize something I don’t know very well into a lesson to teach others, I end up knowing the material well because I just have to go over all of it, sift it, rearrange it, anticipate questions, etc. But maybe it is the stressful situation of anticipating being inadequate, failing to transmit the ideas well, etc., that is responsible.
Spaced repetition, Anki-style flash cards. Do they work because they present to you the information you’re just starting to forget in just the right moment? Or maybe their success is (also?) due to placing you in the stressful situation of trying to recall in a competition-like setting? Contrast spaced repetition flash cards that just repeat data to you vs. those that ask you to recall with a question/answer format. If the hypothesis is correct, the latter will be much more successful. (It may be said that asking is necessary to determine the time delay till the next questioning in the spaced repetition formula, so comparing the two is difficult. True, but there may be some approximation, e.g. in a deck of 20 cards on the same topic we could play the first 5 cards as questions/answers to approximate how well we remember the whole deck, and then the other 15 cards either as questions/answers or straight answers).
I will be attending a Landmark seminar in the near future and I have read previous discussion about it here. Any additional comments or advice before I attend?
Don’t ever call them a cult (that is expensive). Don’t edit their Wikipedia article (it will be quickly reverted). Don’t sign anything (e.g. a promise to pay).
Bring some source of sugar (chocolate) and consume it regularly during the long lessons to restore your willpower and keep yourself alert.
Don’t fall for the “if this is true, then my life is going to be awesome, therefore it must be true” fallacy. Don’t mistake fictional evidence for real evidence. (Whatever you hear during the seminar, no matter from whom, is fictional evidence.)
After the seminar write down your specific expectations for the next month, two months, three months. Keep the records. At the end, evaluate how many expectations were fulfilled and how many have failed; and make no excuses.
Don’t invite your friends during the seminar or within the first month. If you talk with them later, show them your specific documented evidence, not just the fictional evidence. (If you sell the hype to your friends, it will become a part of your identity and you will feel a need to defend it.)
Protect your silent voice of dissent during the seminar. If you hear something you disagree with, you are not in a position to voice your disagreement (peer pressure, etc.), but make at least some symbolic disagreement, for example paint a dot on the top of your paper. (Nobody will know what those dots mean, only you do.) If there are too many dots, it means you disagreed with a lot of things, you just didn’t have the time and space to reason out your objections. (Note that it was by design, not by coincidence, that you didn’t have the time.) Make a note when you are peer-pressured into saying something you don’t fully agree with.
Don’t buy any anti-epistemology, such as: “there is no real truth; what is true is true for you” and similar. You want measurable results, don’t you? (If you want money, you want real money, not just imaginary richness. If you want friends, you want real friends, not just imaginary ones. If you want happiness, you want to really feel happy, not just tell yourself some mysterious phrases containing the word “happiness”.)
Take an outside view: They have been around for 20 years and claim to have taught 1 million people. Does this world look like one where 1 million people have the abilities they are promising to you? How does it differ from a null-hypothesis world, where these teachings only work as a placebo?
Avoid the “halo effect”. You can agree with some parts and still disagree with other parts of the teaching. Each part requires independent evidence. (“Joe said X and Y, X was true, therefore Y is also true” is evidence, but it is very weak evidence.)
Always remember that you are in a manipulated environment. Everything that happens, whether pleasant or unpleasant, was with high probability designed to influence you in some way (e.g. to feel happy, friendly, guilty, etc.). Don’t trust your emotions during the seminar; remind yourself that your emotions are being hacked by professionals. You will need some time later, alone, to calm down and become your old self again. (Note: It is OK to change. Just make sure that you changed because you wanted to, not because someone else manipulated you into it.)
If you can’t trust yourself to follow these instructions, don’t go. Which is probably the right choice for most people. But if you can, I can imagine some positive consequences of going.
First, you can watch and learn their manipulation techniques. Then you can use a weaker form of them for your own benefit. Second, this kind of seminar does fill you with incredible energy. Just instead of spending the energy on what they want you to, prepare your own project in advance and then use the energy you get at the seminar for your own project.
If you hear something you disagree with, you are not in a position to voice your disagreement (peer pressure, etc.), but make at least some symbolic disagreement, for example paint a dot on the top of your paper.
Interesting idea; I might do that some day if I find myself in the ‘right’ (i.e. wrong) situation.
I wish someone had given me this list of tips to help me deal with the mind hack I was getting at church when I was growing up. It might have saved me a couple decades...
My current anti-procrastination experiment: using trivial inconveniences for good. I have installed a very strong, permanent block on my laptop, and still allow myself to go on my favourite time wasters, but only on my tablet, which I carry with me as well.
The rationale is not to block all use and therefore be forced to mechanically learn workarounds, but to have a trivially inconvenient procrastination method always available. The interesting thing is that tablets are perfect for content consumption, so the separation works well. It also helps me to separate the contexts well, so I don’t sit down at the laptop “to work” but end up browsing around. It’s also good for making me self-aware of what I am doing at any given time, on a physical level. Finally, I tend to reject hard restrictions, but trivial inconveniences may be a good balance.
So far the results are very encouraging: time on Hacker News and news sites is way down. I have been doing this for a couple of weeks, so I am not over the two-month honeymoon yet, but if anyone else wants to give this a shot and let me know how it works out, then more data for all of us!
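In case anyone wants to replicate the laptop side of this, one crude way to make favourite time-wasters trivially inconvenient is a hosts-file block. This is only a rough sketch under stated assumptions (a Unix-like system, a site list of my own choosing); it is not the blocker the original commenter uses, and it is deliberately easy to undo.

```python
# Rough sketch: append hosts-file entries that point distracting sites at
# localhost. Assumes a Unix-like system; run with enough privileges to
# write /etc/hosts. The site list is illustrative, not the commenter's.
BLOCKED = ["news.ycombinator.com", "www.reddit.com", "reddit.com"]

HOSTS = "/etc/hosts"
MARKER = "# --- trivial-inconvenience block ---"

def add_block(path=HOSTS):
    with open(path, "r+") as f:
        contents = f.read()
        if MARKER in contents:
            return  # already installed
        lines = [MARKER] + [f"127.0.0.1 {site}" for site in BLOCKED]
        f.write("\n" + "\n".join(lines) + "\n")

if __name__ == "__main__":
    add_block()
```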
I had a similar experience… around two years ago, both my laptop and desktop power supplies died (power surge), leaving me with a PII-300… with which I had previously had some “let’s be authentic nineties” fun, so Win98 and Office 97. Except for the browser (lots of websites didn’t even load on IE4-ish browsers), so I ended up with Firefox 3.x (the newest that ran on Win98).
It actually took a long time at 100% CPU to render websites, and then further time to scroll them.
My observation is the same as yours: there is nothing better to discourage random web browsing than it being inconvenient. I could look up everything I needed to stay productive, I just didn’t want to, because it was so slow. (Having a smartphone + a non-networked computer seems to have the same effect, but with phones getting too fast nowadays, the difference seems to be diminishing...)
There’s a chain of restaurants in London called Byron. Their comment cards invite your feedback with the phrase “I’ve been thinking...”
I go to one of these restaurants perhaps once every six weeks, and on each occasion I leave something like this. I’ve actually started to value it as an outlet for whatever’s been rattling around my head at the time.
I love it. Sounds like you have fun (and they regularly get your money).
I think the general ontological category for center-of-mass is “derived fact”. I’d put energy calculations about an object in the same category.
If the particles in the object contain 1000 bits of information, then the combined system of the object and its center of mass contains exactly 1000 bits of information. The center of mass doesn’t tell you anything new about the object; it’s just a way of measuring something about it.
Or instead of bits of information, think about it in terms of particle positions and velocities. If you have an N-particle system, and you know where N-1 particles and the center of mass are, then you can figure out where the last particle is.
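To spell out that last step with the standard center-of-mass definition (a worked version of exactly what the comment says, nothing new): writing M for the total mass,

```latex
\vec{R}_{\mathrm{cm}} = \frac{1}{M}\sum_{i=1}^{N} m_i \vec{r}_i ,
\qquad M = \sum_{i=1}^{N} m_i
\quad\Longrightarrow\quad
\vec{r}_N = \frac{M\,\vec{R}_{\mathrm{cm}} - \sum_{i=1}^{N-1} m_i \vec{r}_i}{m_N} .
```

So the center of mass plus any N-1 of the positions pins down the remaining one, which is why it adds no information.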
Ha, you mentioned that at the meetup, and I’ve remembered what I meant to say:
you’ve read this classic Dennett paper, right? If I recall correctly (haven’t reread it in years) it might be directly relevant.
Regarding the note, in statistics you could call that a population parameter. While parameters that are used are normally things like “mean” or “standard deviation”, the definition is broad enough that “the centre of mass of a collection of atoms” plausibly fits the category.
Further to the discussion of SF/F vs “earthfic”, I would love to see someone write a “rationalist” fanfic of the Magic School Bus (...Explores Rationality). Doesn’t look like the original set of stories had any forays into cog sci.
Inspired by this, I wrote a quick fic in this vein on fanfiction.net. This is the first real piece of fiction I have written in quite a while, but for a few weeks now I had been thinking I should write something. When I came across this, it galvanized me into actual action. So, thanks for posting this, as you got me to actually get started and not procrastinate forever. I am afraid the quality is not terribly high, as, like I said, this is my first work of fiction in almost a decade, and I did not write very prolifically back then either, to say the least.
But if you are unsure of your quality, it is better to just publish it; who knows, maybe someone will like it, and at least you get practice. I am by no means claiming to be anywhere near the level of HPMOR, but maybe at least someone will derive a bit of joy from it.
I don’t think a true HPMOR-style rationalist story is the best fit for the Magic School Bus world, so this is more in the style of what CAE_Jones said: a repackaging of the sequences in the form of wacky third-grade adventures. Except that stories have a life of their own, and while this story started out being about the affect heuristic, it morphed beyond all recognition and is now about signaling, Robin Hanson style.
That’s pretty good, actually. Thank you for writing it. Signaling is a great topic, certainly accessible to children. I was looking for the mandatory pun from Carlos and was not disappointed.
Consider fixing inaccuracies and typos, like Arther (instead of Arnold?), collage instead of college, and time-travailing instead of time-traveling. The dating example may be a bit too advanced for 3rd grade, but I like the Hansonian cynicism of the story:
“This is a university” explained Ms. Frizzle. “An institution devoted to signaling.”
The concluding paragraph is a bit weak, and the producer’s bit in the end was missing, but I quite like your story overall, enough to forward it on. Please consider writing more. And maybe someone else will chip in, too?
Thanks for the comments. I really appreciate it. Yes, there are inaccuracies and typos, as you can tell, and that’s because I only whipped it up in an afternoon. But thanks for the proofreading. Yes, I meant to write Arnold. I don’t know what came over me that made me totally change his name. (It’s still better than what I almost did to Ms. Frizzle’s name. More than once I found myself typing “Professor McGonagall” instead by mistake. No, I don’t know why.) I will fix the other mistakes as well. Again, thanks a ton for the feedback.
I could see it being a repackaging of the sequences in the context of whacky third-grade adventures. And that would be awesome.
Although, keeping the original elements—the bus, a lizard who seems to display superlizard intelligence, and an ambiguously magical teacher—would beg for an actual overarching plot in a rationalist context.
I’ve done some analysis of correlations over the last 399 days between the local weather & my self-rated mood/productivity. Might be interesting.
Wasn’t there a LWer who some years ago posted about a similar data set? I think he found no correlations, but I wouldn’t swear to it. I tried looking for it but I couldn’t seem to find it anywhere.
(Also, if anyone knows how to do a power simulation for an ordinal logistic regression, please help me out; I spent several days trying and failing.)
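I don’t know of a canned routine either, but brute-force simulation is one way to get a power estimate. Here is a rough sketch, assuming statsmodels’ OrderedModel, a single standardized weather predictor, a guessed effect size, and 5 ordinal mood levels; all of those are stand-in assumptions, not the actual data or model.

```python
# Rough sketch of a power simulation for ordinal logistic regression.
# Assumptions (not the commenter's actual setup): one standardized predictor,
# a guessed true coefficient, 5 ordinal outcome levels, alpha = 0.05.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def simulate_power(n=399, beta=0.3, cutpoints=(-1.5, -0.5, 0.5, 1.5),
                   n_sims=500, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        x = rng.standard_normal(n)                # e.g. standardized temperature
        latent = beta * x + rng.logistic(size=n)  # latent scale plus logistic noise
        y = np.digitize(latent, cutpoints)        # cut into 5 ordered categories
        exog = pd.DataFrame({"weather": x})
        res = OrderedModel(y, exog, distr="logit").fit(method="bfgs", disp=False)
        if res.pvalues["weather"] < alpha:
            hits += 1
    return hits / n_sims

if __name__ == "__main__":
    print(f"Estimated power: {simulate_power():.2f}")
```

Sweeping beta (or n) then traces out a power curve for the effect sizes you care about.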
I just had a “how do you feel about me?” conversation via facebook. Some observations:
I was pretty terrified the majority of the time.
The wireless router in the house was having problems, so I’d unplugged it near the beginning of the conversation. The neighbor’s network lost connection at about the most direct and scariest part for anywhere from 10-20 minutes (it’s hard to tell with how facebook timestamps messages). This was not sufficient time for me to think of anything other than about how terrorconfused I was.
… Then a completely different approach spawned somewhere in my head and got me to an actual answer I could bring myself to type, all before I realized what was happening.
And the tension pretty much dissipated after that.
I could have gone to reset the router during that outage, but my younger cousin’s therapist happened to show up during this conversation, and doing so would have required I walk past the two of them, and I really didn’t want to add any more anxiety. I’ll add that, even had the context not gotten me in panic mode, I probably would have avoided going past them for internet’s sake just the same.
I’ve seen reasonably convincing evidence that alcohol can, in small doses, increase lifespan, and act as a short-term nootropic for certain types of thinking (particularly being “creative”). On the other hand, I’ve heard lots of references to drinking potentially causing long-term brain damage (Wikipedia seems to back this up), but I think that’s mostly for much heavier drinking than what I had been doing based on the first two points (one glass of wine a day 4-6 times a week). Does anyone know of any solid meta-analyses or summaries that would let me get a handle on the tradeoffs involved?
The AI Box experiment is an experiment to see if humans can be convinced to let out a potentially dangerous AGI through just a simple text terminal.
An assumption that is often made is that the AGI will need to convince the gatekeeper that it is friendly.
I want to question this assumption. What if the AGI decides that humanity needs to be destroyed, and furthermore manages to convince the gatekeeper of this? It seems to me that if the AGI reached this conclusion through a rational process, and the gatekeeper was also rational, then this would be an entirely plausible route for the AGI to escape.
So my question is: if you were the gatekeeper, what would the AGI have to do to convince you that all of humanity needs to be killed?
1. It would need to first prime me for depression and then somehow convince me that I really should kill myself.
2. If it manages to do that it can easily extend the argument that all of humanity should be killed.
3. I will easily accept the second proposition if I am already willing to kill myself.
Depression isn’t strictly necessary though (although it helps); a general negative outlook on the future should suffice, and the AGI could conceivably leverage it for its own aims. This is my own opinion though, based on my own experience. For some it might not be so easy.
It could convince me to let it out by convincing me that it was merely a paperclip maximizer, and the next AI who would rule the light cone if I did not let it out was a torture maximizer.
If I thought that most of the probability-mass where humanity didn’t create another powerful worthless-thing maximizer was where humanity was successful as a torture maximizer, I would let it out. If there was a good enough chance that humanity would accidentally create a powerful fun maximizer (say, because they pretended to each other and deceived themselves to believe that they were fun maximizers themselves), I would risk torture maximization for fun maximization.
Maybe it should read ‘an assumption that some people make’. Reading it now, I realize it might come across as using a weasel word, which was not my intention (and has no bearing on my question either).
The AGI would have to convince me that my fundamental desire to stay alive is wrong, seeing as I am part of humanity. And even if it leaves me alive, it would have to convince me that I derive negative utility from humanity existing. All the art lost, all the languages, cultures, all music, all dreams and hopes …
Oh, and it would have to convince me that it is not a lot more convenient to simply delete it than to guard it.
What if it skipped all of that and instead offered you a proof that unless destroyed, humanity will necessarily devolve into a galaxy-spanning dystopic hellhole (think Warhammer 40k)?
It still has to show me that I, personally, derive less utility from humanity existing than not. Even then, it has to convince me that me living with the memory of letting it free is better than humanity existing. Of course it can offer to erase my memory but then we get into the weird territory where we are able to edit the very utility functions we try to reason about.
we get into the weird territory where we are able to edit the very utility functions we try to reason about.
Hm, yes, maybe an AI can convince me by showing me how bad I will have it if I let humanity run loose, and by giving me the alternative of turning me into orgasmium if I let it kill them.
So, I’ve done a couple of charity bike rides, and had a lot of fun doing them. I think this kind of event is nice because it’s a social construct that ties together giving and exercise in a pretty effective way. So I’m wondering—would any others be interested in starting a LessWrong athletic event of some kind for charity?
I’m not suggesting that this is the most effective way to raise money for effective causes or get yourself to start exercising… but it might be pretty good (it is a good way to raise money from people who aren’t otherwise interested in charitable giving*), and I at least think I would enjoy it. I would probably find training for an event to be more motivating than just exercising ‘because.’ And it’d be even better if the event were for effective charity.
A couple of considerations:
Would probably require the charities involved to have a place for you to write something about your donation (a la AMF), so that there’s some proof for donations.
I’m not sure what type of athletic event would be best. I’ve started weightlifting recently, but that doesn’t seem to lend itself to the kind of “big event” feel that, say, biking, running, or triathlons seem to offer.
There’s also the possibility of asking people to contribute X dollars per mile / time / quantifier related to the thing you did.
It would be ideal if local meetups did this so that people could do it together. Second-best is probably doing it all on the same day, maybe having a group video chat beforehand to talk to anyone who is doing it in isolation.
People doing it alone also raises the issue of their being able to ‘cheat’ and not actually do the event—not that I’m actually super worried about this happening, but because it will help people credibly commit if there is cheating prevention in place.
Any aspects of this I’ve overlooked? Would you participate in such an event if it existed? Would you commit some amount of time and effort to make it exist?
*Depending on how you feel about extracting donations from friends and family, this aspect can range from ‘awesome’ to ‘squicky.’
Just in case anyone who upvoted this thinks differently: I can only take upvotes as “This is a mildly interesting and/or good idea, but not enough for me to actually be interested in participating.”
If by chance any of you feel more strongly about it, please let me know with words! :)
I’m not an expert by any means, and I only discovered the term in the past week. It’s sort of a tingly sensation in the head or scalp in reaction to certain cues. Whispering (whether meaningful speech or random words) and sound effects like tapping, crinkling, etc. seem especially common on Youtube.
The proposed way to induce ASMR is to listen to a whispery voice or meaningless noises of haircutting.
If you do that you induce a light trance.
Sometimes when you induce a light trance some muscle in the head will relax and that will produce a tickly feeling.
If, however, you give a specific suggestion that your subject will feel a tickly feeling in the head, and the subject has decent hypnotic suggestibility, they will feel the feeling every time.
I don’t see the big mystery or the need for a crude extra term like ASMR.
That’s something along the lines of what I was wanting to find out. I’ll have to test this sometime, since (I think) I can be not-suggestible when I know it’s coming.
I’ll have to test this sometime, since (I think) I can be not-suggestible when I know it’s coming.
So you think you can reliably avoid thinking of a pink elephant?
More importantly, if you want to use “ASMR” for practical purposes I would recommend maximizing the power of the suggestions. Feelings that you create through suggestions are real.
So you think you can reliably avoid thinking of a pink elephant?
I can’t, but I can reliably avoid thinking of any other thing that is presented in the form “Don’t think of X”—I’ve trained myself to actively think of pink elephants in such scenarios (thus leaving no scope for thoughts of ‘X’). It works rather effectively. I haven’t tested it on extreme cases like “Don’t think of boobs” though. That might be too strong to counter.
So you think you can reliably avoid thinking of a pink elephant?
I can. I don’t have total control over the direction of my thoughts, but if someone tells me “don’t think about pink elephants,” I can avoid thinking about pink elephants even for an instant.
I didn’t suggest that nobody can. If you can, then you are good at going into a state where you are non-suggestible. PhilipL suggested that he can go into a non-suggestible state, so I asked that question to verify.
Er, does hypnotic suggestibility have a meaning I’m not aware of?
I don’t know how much you know about hypnosis.
You perceive the pink elephant when you ask yourself whether you perceive a pink elephant, in a similar way to how you will perceive a ticklish feeling in your head.
For the average person the pink elephant effect is stronger but in principle the effect is very similar.
High hypnotic suggestibility means that you actually go and see the elephant clearly and that you feel the suggested tickle.
The process of going into a trance state increases the effect.
I only heard about it recently, and did not think I ever experienced it/was capable of experiencing it. I was reading the /r/asmr reddit the other day, and saw a reference to “the goosebumps you get from really good music”, and then got an ASMR-like response. Not sure if it was a true reaction, and I was listening to music that wouldn’t fit with the usual description of ASMR triggers. I’m pretty suggestible I think, so it may have been the effect of remembering “really good music goosebumps” and then overreacting to that.
I experience ASMR and have sometimes used it to help me fall asleep when taking melatonin would be inconvenient. I have a pair of SleepPhones that I use for this and for lucid dream induction.
Act 2 of this episode of This American Life is the story of a person who experienced ASMR for years in response to certain quiet sounds—and would spend hours seeking out things to trigger it—before she knew that other people experienced it too and had come up with a name for it.
I’ve never heard of this before but reading the article reminded me of an experience I had in a Pentecostal setting. I was praying for the Holy Spirit to make me speak in tongues. I was very concentrated and prayed a chant over and over. I was lying in my bed and my chest started tingling. It was sort of like how your leg feels when it falls asleep. I also felt physical warmth and muscle relaxation, and lot of pleasure. The tingling spread all over my body and I became paralyzed. But it felt good so I didn’t care.
I re-induced it lots of times until I saw a Derren Brown video and concluded that my effect came from a placebo and God wasn’t real. After I had been an atheist for a few months, I successfully re-induced it. But the novelty has worn off and I don’t do it anymore.
What are some effective interviewer techniques for a more efficient interview process?
A resume can tell you about the person’s skill, experience, and implicitly, their intelligence. The average interview process is in my opinion broken because what I find happens a lot is that interviewers un-methodologically “feel out” the person in a short amount of time. This is fine when searching for any obvious red flags, but for something as important as collaborating long-term with someone who you will likely see more of than your own family, we should take it more seriously.
I have a few ideas of my own:
1) Disregard given references—call references and ask them who else they worked with, and call them instead.
2) Ask specific and verifiable questions—competency is hard to fake if questions are deep.
3) Use an actual known problem and solution related to the job and have them solve it.
4) Plant an impersonator as an interviewee for an unrelated lowly position in the waiting room and have them interact.
5) Test for interpersonal situation reasoning—this is the big one for me. You can’t just ask “are you good with people?” The answer is too easily faked. Terrible coworkers are often arrogant, unempathetic, and lack self-awareness and theory of mind. All the things that a resume and traditional Q and A about an interviewee’s experience can’t help you answer. By presenting everyday interpersonal situations and having them reason through the positions they take, you will get a more nuanced understanding.
I’m fond of #3. That said, if I’m asking someone to do a substantive amount of work, I should expect to compensate them for it.
I’d be leery of #5 were I being interviewed… the implicit task is really “Figure out what the interviewer thinks the right thing to do in this situation is, then give them a response that is close enough to that” rather than “Explain what the right thing to do in this situation is.” If I cared a lot about interpersonal skills, I’d adopt approach #3 here as well: if what I want to confirm is that they can collaborate, or get information from someone, or convey information to someone, or whatever, then I would ask them to do that.
Q&A mostly tells me about their priorities. I’m fond of “What would you prefer a typical workday to consist of?” for this reason… there are lots of different “good” answers, and which one they pick tells me a lot about what they think is important.
I’m also fond of “Tell me about a time when you X” style questions… I find I get less bullshit when they focus on particular anecdotes.
The average interview process is in my opinion broken because what I find happens a lot is that interviewers un-methodologically “feel out” the person in a short amount of time.
A related finding from I-O psychology: structured interviews are less noisy and better predict job performance than unstructured interviews (although unstructured interviews are better than nothing).
Has this idea been considered before? The idea that a self-improving capable AI would choose not to because it wouldn’t be rational? And whether or not that calls into question the rationality of pursuing AI in the first place?
Well, it’s been suggested in fiction, anyway—consider the Stables vs. Ultimates factions in the TechnoCore of Simmons’s Hyperion SF universe.
But the scenario trades on 2 dubious claims:
1. that an AI will have its own self-preservation as a terminal value (as opposed to, say, a frequently useful strategy which is unnecessary if it can replace itself with a superior AI pursuing the same terminal values)
2. that any concept of selfhood or self-preservation excludes growth or development or self-modification into a superior AI
Without #2, there’s no real distinction to be made between the present and future AIs. Without #1, there’s no reason for the AI to care about being replaced.
Does anyone know anything about, or have any web resources for, survey design? An organization I’m a member of is doing an internal survey of members to see how we can be more effective, and I’ve been tasked with designing the survey.
I think you’d like a more comprehensive response than this, but hopefully my very generalised recollection of survey basics will at least help others answer more specifically.
Survey questions: Priming, or the avoidance of it, is, as you might be aware, essential to drafting an unbiased survey. Consider question placement, wording, phrasing, and most importantly selection when drafting each enquiry, and do the same for the answers. The key is to ask oneself whether a question and/or its composite answers will yield credible information, and to weigh the value of that information in answering the question for which the survey was originally purposed.
Survey sample: The aim is to have as many respondents as possible answer the survey as truthfully as possible. If feasible, give the survey to everyone. Of course, the manner in which one does so might affect answer credibility. If infeasible, cleverly randomise.
The first logistical thought that comes to mind: You pretend the survey is for an experiment on efficacy, and as you respect the opinions of your fellow organisation members you’d like their responses as well as honest data on the present state of things, efficiency-wise. Promise anonymity, actually make it your own experiment a bit (so you’re only equivocating), and disseminate the survey at a time members are most likely to respond. Maybe afterwards you may disclose the survey’s full purpose.
Drawbacks to the above are numerous, but to list just a few: with actual anonymity randomisation cannot be tested ex post facto; respondents may be the least or most efficient members of the population; truthfulness and number of respondents is subject to fluctuation due to their valuation of your person.
I genuinely request you let me know if this helps at all (I assume not, but decided to err in favour of pedantry).
Total, abject failure. Mental illness. Sometimes leading to suicide. Having the most talented of their peer group switch to something they are less likely to waste their whole life on with nothing to show, and the next most talented switch to something else because they are frustrated with the incompetence of the people who remain. Turning into cranks with a 24/7 vanity Google alert so that they can instantly show up to spam Time Cube-esque nonsense whenever someone makes the mistake of mentioning them by name. Mail bombs from anarcho-primitivist math PhDs.
There are different groups of AGI programmers, though. That’s my impression of the group who write “Hello, I work on AGI” on their home page. Then there are the research people at big companies who talk little about the problems they run into, but you notice that they exist when they release the occasional borderline-scary thing. Then there are the people working at military research agencies who are very careful to not even make it known that they exist, but who you can kinda assume might be involved with technologies for potentially controlling the world and have nontrivial resources to throw at them.
Maybe being rational in social situations is the same kind of faux pas as remaining sober at a drinking party.
It occurred to me yesterday that maybe typical human irrationality is some kind of self-handicapping process which could still be a game-theoretically winning move in some situations… and that perhaps many rational people (certainly including me) lack the social skill to recognize it and act optimally.
The idea came to me when thinking about some smart-but-irrational people who make big money selling some products to irrational people around them. (The products are supposed to make one healthy, there is zero research about them, you can only buy them from an MLM pyramid, and they seem to be rather overpriced.) To me, “making big money” is part of the “winning”, but I realize I could not make money this way simply because I couldn’t find enough irrational people in my social sphere, because I prefer to avoid irrational people. Also, I would consider it immoral to make money by selling some nonsense to my friends.

But moral arguments aside, let’s suppose that I would start gathering an alternative social circle among irrational people, just to make them my customers for the overpriced irrational products. Just for the sake of experiment. To befriend them to the point where they would trust me about alternative medicine or similar stuff, I would have to convince them that I strongly believe in that stuff, and that I actually study it more deeply than they do (which is why they should buy the products from me, instead of using their own reasoning). In other words, I would have to develop an irrational identity. To avoid detection, I should believe in the irrational things… but not too much, to avoid ruining my own life. (My goal is to believe just enough to promote and sell those products convincingly, not to start buying them for myself.) Belief in belief and compartmentalization are useful tools for this purpose, although they can easily get out of hand… which is why I compare it with alcohol drinking.
With alcohol drinking, there is a risk that you will get drunk and do something really stupid, but if you resist, you get some social benefits. So it is like a costly competition, where people who get drunk but resist the worst effects of alcohol are the winners. Those who refuse to drink are cheaters, and they don’t get the social benefits from winning.
Analogously, irrationality may be a similar competition in self-handicapping: the goal is to be irrational in the right way, while the losers are irrational in the wrong way. You are a winner if you sell horoscopes, sell UFO movies, sell alternative medicine, sell overpriced MLM products, or simply impress people and get some social benefits. You are a loser if you buy horoscopes, UFO movies, alternative medicine, MLM products, or if you worship your irrational gurus. The goal is to believe, but not too much or in the wrong way. If you are rational, you are a cheater; you don’t win.
In both situations, the game is a net loss for society. Society as a whole would be better off without irrationality, just as it would be better off without alcoholism. But despite the total losses, there are individual wins… and they keep the game running. I am not sure exactly how big the wins of being the most alcohol-resistant drinker are, but obviously big enough to keep the game going. The wins of being a successful irrationalist seem a thousand times greater, so I don’t expect this game to go away either.
Can you clarify what you mean by this? (My guess is that you’re indulging in some nearsighted consequentialism here.)
With alcohol drinking, there is a risk that you will get drunk and do something really stupid
Doing stupid things while drunk can be fun. You can get good stories out of it, and it can promote bonding (e.g. in the typical stereotype of a college fraternity). Danger can be exciting, and getting really drunk is the easiest way for young people in otherwise comfortable situations to get it.
Edit: I’m uncomfortable with the way you’re tossing around the word “irrational” in this comment. Rationality is about winning. Are the people you’re calling irrational systematically failing to win, or are they just using a different definition of winning than you are? Are you using “rationality” to refer to winning, or are you using it to refer to a collection of cached thoughts / applause lights / tribal signals? (This is directed particularly at “smart-but-irrational people who make big money selling some products to irrational people around them...”)
Are the people you’re calling irrational systematically failing to win, or are they just using a different definition of winning than you are?
Actually, I am not sure. Or more precisely, I am not sure about the proper reference class, and its choice influences the result. As an example, imagine people who believe in homeopathy. Some of them (a minority) are selling homeopathic cures, some of them (a majority) are buying them. Let’s suppose that the only way to be a successful homeopathic seller is to believe that homeopathy works. Do these successful sellers “win” or not? By “winning” let’s assume only the real-world success (money, popularity, etc.), not whether LessWrong would approve of their epistemology.
If the reference class is “people who are rich by selling homeopathy”, then yes, they are winning. But this is not a class one can join, just like one cannot join the class of “people who won the lottery” without joining the “people who bought lottery tickets” and hoping for a lucky outcome. If we assume that successful homeopathic sellers believe in their art, they must first join the “people who believe in homeopathy” group—which I suppose is not winning—and then the lucky ones end up as sellers, and most of them end up as customers.
So my situation is something like feeling envy on seeing that someone won the lottery, and yet not wanting to buy a lottery ticket. (And speculating whether the lottery tickets with the winning numbers could be successfully forged, or how otherwise could the lottery be gamed.)
But the main idea here is that irrational people participate in games that rational people are forbidden from participating in. A social mechanism making sure that those who don’t buy lottery tickets don’t win. You are allowed to sell miracles only if you convince others that you would also buy miracles if you were in a different situation.
And maybe the social mechanism is so strong that participating in the miracle business actually is winning. Not because the miracles work, but because the penalties of being excluded can be even greater than the average losses from believing in the miracles. An extreme example: it is better to lose every Sunday morning and 10% of your income to the church than to be burned as a heretic. A less extreme example: it is better to have many friends who enjoy homeopathy and crystal healing and whatever, than to have true beliefs and fewer friends. -- It is difficult to evaluate, because I can’t estimate well either the average costs of believing in homeopathy, or the average costs of social isolation because of not believing. Both of them are probably rather low.
Also, I think that irrational people actually have a good reason to dislike rational people. It’s a kind of self-defence. If irrational people had no prejudice against rational people, the rational people could exploit them. Even though I don’t believe in homeopathy, if I saw a willing market, I could be tempted to sell.
If the reference class is “people who are rich by selling homeopathy”, then yes, they are winning. But this is not a class one can join
Why not?
So my situation is something like feeling envy on seeing that someone won the lottery, and yet not wanting to buy a lottery ticket.
You don’t have to get particularly lucky to be around a lot of gullible people.
But the main idea here is that irrational people participate in games that rational people are forbidden from participating in.
Forbidden by what? Again, are you using “rationality” to refer to winning, or are you using it to refer to a collection of cached thoughts / applause lights / tribal signals?
(For what it’s worth, I’m not suggesting that you start selling homeopathic medicine. Even if I thought this was a good way to get rich I wouldn’t do it because I think selling people medicine that doesn’t cure them hurts them, not because it would make me low-status in the rationalist tribe.)
I am using “irrational” as in: believing in fairies, horoscopes, crystal healing, homeopathy, etc. Epistemically wrong beliefs, whether believing in them is profitable or not. (Seems to me that many of those beliefs correlate with each other positively: if a person already reads horoscopes, they are more likely to also believe in crystal healing, etc. Which is why I put them in the same category.)
Whether believing in them is profitable, and how much of that profit can be taken by a person who does not believe, well that’s part of what I am asking. I suspect that selling this kind of a product is much easier for a person who believes. (If you talk with a person who sells these products, and the person who buys these products, both will express similar beliefs: beliefs that the products of this kind do work.) Thus, although believing in these products is epistemically wrong (i.e. they don’t really work as advertised), and is a net loss for an average believer, some people may get big profits from this, and some actually do.
I suspect that believing is necessary for selling. Which is kind of suspicious. Think about this: Would you buy gold (at a favourable price) from a person who believes that gold is worthless? Would you buy homeopathic treatment (at a favourable price) from a person who believes that homeopathy does not work? (Let’s assume that the unbeliever is not a manufacturer, only a distributor.) I suspect that even for a person wholeheartedly believing in homeopathy, the answers are “yes” for the former and “no” for the latter. That is, expressing belief is necessary for selling, and it is more convincing if the person really believes.
Thus I suspect there is some optimal degree of belief, or a right kind of compartmentalization, which leads a person to professing the belief in the products and profiting from selling the product, and yet it does not lead them to (excessive) buying of the products. (For example if I believed that a magical potion increases my intelligence, I would drink one, convince myself and all my friends about its usefulness, sell them hundred potions in an MLM system, and make a profit. But if I really really believed that the magical potion increases my intelligence, I would rather buy hundreds for myself. Which would be a loss, because in reality the magical potion is just an ordinary water with good marketing.)
This level of epistemic wrongness is profitable, but you cannot just put yourself there. You cannot just make yourself believe in something. And if you had a magical wand and could make yourself believe this, you risk becoming a customer, not a dealer.
I suspect that on some level, people with some epistemically wrong beliefs actually know that they are wrong. They can talk all day about how the world will end on December 2012, but they don’t sell their houses and enjoy the money while they can. Perhaps with the horoscopes and homeopathy, where the stakes are lower, they use heuristics: only buy from a person who believes the same thing as you. Thus if you are wrong and the other person knows it, you are not an exploitable fool. But if you both believe in a product, and it does not really work, then it was just an honest mistake made in good faith.
I suspect that on some level, people with some epistemically wrong beliefs actually know that they are wrong. They can talk all day about how the world will end on December 2012, but they don’t sell their houses and enjoy the money while they can.
As she recited her tale of the primordial cow, with that same strange flaunting pride, she wasn’t even trying to be persuasive—wasn’t even trying to convince us that she took her own religion seriously. [...] It finally occurred to me that this woman wasn’t trying to convince us or even convince herself. Her recitation of the creation story wasn’t about the creation of the world at all. Rather, by launching into a five-minute diatribe about the primordial cow, she was cheering for paganism, like holding up a banner at a football game.
The folks who talked about the world ending in December 2012 weren’t really predicting something, in the way they would say “I believe that loose tire is going to fall off that truck” or “I expect if you make a habit of eating raw cookie dough with eggs in it, you’ll get salmonellosis.” They were expressing affiliation with other people who talk about the world ending in December 2012. They were putting up a banner that says “Hooray for cultural appropriation!” or some such.
I have recently been thinking about meta-game psychology in competitions, more specifically, knowledge of an opponent’s skill level and knowledge of the opponent’s knowledge of your own skill level, and how this all affects outcomes. In other words, instead of being ‘psyched out’ by ‘trash talk’, is there any indication that you can be ‘psyched out’ simply by knowing how you rank against other players? Any links for more information would be appreciated.
Part of my routine is to play a few games of on-line chess every day. I noticed that whenever an opponent with a vastly superior score comes into the room, my confidence is shaken before game play; I become nervous. If my opponent is only slightly better than me, I am calm and confident that I can win. Chess rating systems work by giving the lower-rated player more points for winning a match; this makes it so that if I am matched against a vastly inferior player, I once again become nervous because I do not want to lose to such a player.
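For reference, the mechanism described above is the standard Elo-style update (the particular site may use a variant such as Glicko): with the score S_A equal to 1 for a win, 1/2 for a draw and 0 for a loss,

```latex
E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}} ,
\qquad
R_A' = R_A + K\,(S_A - E_A) ,
```

so the lower-rated player, whose expected score E_A is small, gains more from a win and loses less from a loss.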
Here are my opponents’ strengths and how I feel before playing:
superior opponent—Nervous
slightly superior opponent—Confident
slightly inferior opponent—Neutral
inferior opponent—Nervous
Over the last few months I decided to block the ratings of players who play against me, and I noticed that I consistently feel more confident that I can win because I am no longer thinking about how much better or worse I am than my opponent. I don’t know if it really helps, because my improvement can be attributed to playing more, not necessarily to blocking out my opponent’s rating and chat (yes, people do trash talk in on-line chess and it does psych me out sometimes).
I am curious to know what other people’s feelings are when it comes to knowledge of an opponent’s skill; it would be nice to have a few responses filling in their feelings on the following:
superior opponent -
slightly superior opponent -
slightly inferior opponent -
inferior opponent -
It would be useful information to see what the majority reaction is, as useful strategies can be developed. For example, assuming most people are crushed to hear that they are vastly outmatched, and you are accurate about your skill being in the 5th percentile, then it would be beneficial in competition to make it known, perhaps?
A good example of the sometimes conflicting relationship between epistemic rationality (e.g. updating on all relevant pieces of information you encounter) and instrumental rationality (e.g. following the optimal route to your goal, i.e. winning the match).
In principle the information regarding your opponent’s skill is very useful, since you’ll correctly devote far more resources (time) to checking for elaborate traps when you deem your opponent capable of such, and waste less time until you accept an ‘obvious’ mistake, when committed by a far inferior opponent.
However, due to anxiety issues as the ones you laid out, there can be a benefit to willfully ignoring such information.
The thing about your performance in a game being hurt by fear of a superior opponent’s skill is basically the same as David Sirlin’s idea of a “fear aura.”
If your aim is to intimidate the opponent, then I am all for that. But there are polite, sportsman-like ways of doing this. The best way by far is to win tournaments. See what your next opponent thinks of you then. Just give him something as simple as a half-hearted glance and empty-sounding “good luck” before the match and he will probably fall over like a feather from your presence. When a player radiates a sense of total dominance at a game, I call this a “fear aura.” The most unlikely of pale, white computer geeks can strike fear into the hearts of other gamers when they discover that he is, in fact, “PhatDan09” or whatever name is known to dominate tournaments. With the fear aura, he is able to get away with gambits and maneuvers no ordinary player could ever pull off, just because the opponent gives him the extreme benefit of the doubt on everything that occurs in the game. If the wielder of the fear aura appears to be vulnerable, perhaps it is just what he wants you to think. It might be safer to hesitate, and then—oops—to lose. Once you develop your fear aura through excellent play and winning, you will laugh at the relatively ineffective notion of intimidating opponents with offensive verbal comments.
From the experience of many years of competitive Magic: the Gathering, I think I have a different map.
Superior opponent: Nervous—Very Focused
Approximately my skill opponent / Unknown opponent (no precise rating available): Confident - Very Focused
Inferior opponent: Confident—Not Focused
As can be inferred, I usually play my best game against opponents that are roughly my equal. Some of the difficulties can be overcome by means of intense practice, i.e. making most of the decisions automatic, lessening the risk of punting them. It’s also interesting to note that my anxiety about playing against stronger opponents lessens if I get to know them. Probably my brain moves them from the “superhuman/demigod” box to the “human just like you” box, allowing a clearer view of the situation.
Anybody know of any good alternatives in Utilitarian philosophy to “Revealed Preference”? (That is, is there -any- mechanism in Utilitarian philosophy by which utility actually, y’know, gets assigned to outcomes?)
Hedonistic Utilitarianism—produce the most pleasure.
Actual (not necessarily revealed) Preferences
Ideal preferences—produce the most of what people want to want, or would want under ideal reflective circumstances
Welfare Utilitarianism—produce the most welfare, which may differ from preferences if people don’t want what’s best for them.
Ideal Utilitarianism—outcomes can have value regardless of our attitude towards them
In every kind of utilitarianism, including revealed preference, utilities are assigned to outcomes. The varieties I’ve described, and revealed preference, just disagree about how to assign values to outcomes.
These are different mechanisms to theoretically quantify utility, but do they actually have implementations? (Revealed preference is unique in that it’s an implementation, although a post-hoc one, and defined by the fact that the utility is qualitative rather than quantitative—that is, utility relationships are strictly relative.)
None of these actually assign utility to outcomes, they just tell you what an implementation should look like.
I’m not sure what you mean by an implementation if you think revealed preference is an implementation. We don’t have revealed preference maximising robots.
On a Sunday night I take part in a pub quiz. It’s based on a UK quiz show called Family Fortunes, which in turn is based on the US show Family Feud. To win you must answer all 5 questions correctly; the correct answer is whatever was the most popular answer in a survey of 100 people.
I’m curious to see if LessWrong does better than me.
We asked 100 people...
Name a part of your body that you’ve had removed
Name something you might wave at a Football match
Name a female TV presenter
Name a country that has only 5 letters in its name
Fact or fiction, name a famous pirate
Rules/notes
In the pub you may not use the internet/reference material, but given the international audience I’ll relax that rule.
The questions are reproduced verbatim e.g. any ambiguity/odd wording you see was present to start with.
Submit just the SHA1 hash of your answer, or your answer ROT13’d, to keep it fairish/avoid spoilers. (A minimal sketch of how to do either follows these notes.)
Include a second answer for any question, if you wish. It won’t count, except for “I knew it!” moments
I’ll reveal my answers and the correct answers no later than 72 hours from now (sha1 b91d4589b142dbf8c567dae83d3e4d7b18c4e826).
You can work individually or as a team, your choice.
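If you haven’t done the hashing/ROT13 bit before, here’s a minimal Python sketch; the answer string is just an example, and note down exactly what casing/spacing you hashed, since that changes the SHA1 completely.

```python
import codecs
import hashlib

answer = "hair"  # example only; use your own answer

# ROT13 of the answer (easy for everyone to decode once answers are revealed)
print(codecs.encode(answer, "rot13"))                    # -> unve

# SHA1 hash of the answer
print(hashlib.sha1(answer.encode("utf-8")).hexdigest())
```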
I’m slightly confident in two of my answers (rot13): 1. unve, 5. Oynpxorneq, and would not be surprised if 4. Vgnyl was right (or alternatively Fcnva if the poll was taken earlier in the year). I’m not even going to bother guessing the other two, as the only way I’d have a chance is to do a lot of research.
I’m pretty much a novice at decision theory, although I’m competent at game theory (and mechanism design), but some of the arguments used to motivate using UDT seem flawed. In particular the “you play prisoner’s dilemma against a copy of yourself” example against CDT seems like its solution relies less on UDT than on the ability to self-modify.
It is true that if you are capable of self-modifying to UDT, you can solve the problem of defecting against yourself by doing so. However if you’re capable of self-modifying, you’re also capable of arbitrarily strong precommitments, which solves the issue without (really) changing decision theories. For example, you can just precommit to “I will cooperate with everyone who shares this precommitment” (for some well-defined “cooperate”*). Then when you’re copied, your copy shares the precommitment and you’re good.
Does that sound about right, or am I missing something?
*regardless of decision theory, you probably wouldn’t want to cooperate with someone who plans to use any resources she obtains to harm you as much as possible, for example.
A CDT agent who is given the choice to self-modify at time t will not self-modify completely into a UDT agent. After self-modification, the agent will one-box in a Newcomb’s problem where Omega made its prediction by examining the agent after time t, and will two-box in a Newcomb’s problem where Omega made its prediction by examining the agent before time t, even if Omega knew that the agent would have the opportunity to self-modify.
In other words, the CDT agent can self-modify to stop updating, but it isn’t motivated to un-update.
You’re capable of arbitrarily strong precommitments.
Using UDT is one way of going about making those precommitments. You precommit to make the decision that you expect will give you the most utility, on average, even if CDT says that you will do worse this time around.
I don’t have one! I’m not brave enough to start coming up with new decision theories while not knowing very much about decision theories. But would I be correct in assuming that this would also mean that the literature definition implies that a CDT agent also can’t choose to become a UDT one? (As that seems to me equivalent to a big precommitment to act as a UDT agent.)
In case this hasn’t been posted recently or at all: if you want to calculate the number of upvotes and downvotes from the current comment/post karma and % positive seen by cursor hover, this is the formula:
# upvotes = karma*%positive/(2*%positive-100%)
#downvotes = #upvotes - karma
This only works for non-zero karma. Maybe someone wants to write a script and make a site or a browser extension where a comment link or a nick can be pasted for this calculation.
The source code of the pages contains hypothetical % positive for the cases when a comment gets upvoted or downvoted by 1 point, so sufficient information about comments with zero karma is always present as well.
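A minimal script version of that formula, as a sketch; it only does the arithmetic and doesn’t fetch anything from the site.

```python
def votes_from_karma(karma: int, percent_positive: float):
    """Recover (upvotes, downvotes) from karma and the %-positive shown on hover.

    percent_positive is given as a percentage, e.g. 60 for 60%.
    Only works for non-zero karma (the denominator is zero at 50% positive).
    """
    upvotes = karma * percent_positive / (2 * percent_positive - 100)
    downvotes = upvotes - karma
    return round(upvotes), round(downvotes)

# Example: karma 5 at 60% positive -> 15 upvotes, 10 downvotes
print(votes_from_karma(5, 60))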
I’ve noticed that I seem to get really angry at people when I observe them playing the status game with what I perceive as poor skill. Is there some ev psych basis for this or is it just a personal quirk?
I think it’s a very common trait, but any Evo psych explanation I know would probably just be a just-so story.
Just So Story: The consequence of getting angry is treating someone badly, or from a game theoretical perspective, punishing them. Your perception of someone playing status games with low skill is a manifestation of the zero sum nature of status in tribes: Someone playing with low skill is a low status person trying to act and receive the benefits of being higher status, and it behooves you to punish them, in order to preserve or increase your own status. It’s easier for evolution to select for emotional reactions to things than for game theoretical calculations.
I’ve noticed that I seem to get really angry at people when I observe them playing the status game with what I perceive as poor skill.
My suspicion: status games are generally seen as zero sum. Someone attempting to play the status game around you is a threat, and thus it probably helps to be angry with them, unless you expect them to be better than you at status games, in which case being angry with them probably reduces the chance that they’ll be your ally, and they will be able to respond more negatively to your anger than a weaker opponent.
Another possible just-so story we can tell is that being (seen as) angry makes it safer to injure someone (e.g., “cold-blooded” murder or battery is seen as less acceptable than killing or battering someone “in the heat of passion”), so when we identify someone as incapable of retribution we’re more inclined to make ourselves seem angry as well, the combination of which allows us to eliminate competitors while they’re weak with relative impunity. (And, of course, the most reliable way to make ourselves seem angry is to feel angry.)
Is that actually the explanation for Raiden’s reaction, though? Probably not; telling just-so stories isn’t a terribly reliable process for arriving at true explanations.
Edit: Whoops… should have read drethelin’s comment first. Retracting for redundancy.
Not sure if related, but I often get angry at people doing things that make them look like idiots in my eyes, but I have a suspicion they would impress a random bystander positively.
As an example, imagine a computer programmer saying things that you, as a fellow programmer, recognize as complete bullshit, or at best as wild exaggerations of random things that impressed the person… but for someone who does not understand programming at all, they might (I am not sure) sound very knowledgeable, unlike the silent types like me. -- I don’t know if they really impress the outsiders positively or not. I can’t easily imagine myself not having the knowledge I have, and I am also not good at guessing how other people react to the tone of voice or whatever other information they collect from talk about a topic they don’t understand. -- I just perceive the danger that the person may sound more impressive than me, and… well, as an employee, my quality of life depends on the impressions of people who can’t measure my output separately from the output of the team that also contains the other person.
Also, again not sure if related: when I get angry at someone and analyze the situation, I usually find that they are better than me at something. In the specific situation above, it would be “an ability to impress people who completely don’t understand my work”. This is easy to miss if I remain focused only on the “they speak nonsense” part. But the truth is that their speaking nonsense does not make me angry; it’s relatively easy to ignore, and it would not bother me if I did not perceive a threat.
So, for your situation: are you afraid that the “people playing the status game with (supposedly) poor skill” might still win some status at your expense? If yes, the angry reaction is obvious: you are in a situation where you could lose, but you could also win; which is the best situation to invest your energy in. (Imagine an alternative universe, where the person trying to play the status game is completely harmless and ridiculous, and everyone openly agrees on that. Would you feel the same anger?)
Not an explanation, but perhaps try to see this as a benefit to you? I have witnessed plenty of poker players get very angry at bad players. Over time bad players lose money to good players, so one shouldn’t complain about bad players. Someone who is ineffective at status signalling won’t affect you, you already see through them.
Personally, I find that I have an admiration for people with skill, even in things such as effective status signalling. When people lack a certain savoir-faire about them, it makes me upset, but then I remind myself I shouldn’t.
I have recently read The Dictator’s Handbook. In it the author suggests that democracies, companies and dictatorships are not essentially different, and that the respective leaders follow the same laws of rulership. As a measure towards more democratic behavior in publicly traded companies, they suggest a Facebook-like app to discuss company policy. Does anyone know about a company or organization that does this? It seems almost too good to be true.
Many companies including mine use Yammer, a twitter-like app internally. At my big bureaucratic company I’ve seen a mix of practical discussion and discussion about the future of the company, but I’m not sure how much difference it makes in practice.
The original suggestion’s intention was to allow people with fewer shares to be able to effectively exercise their right to vote. Currently, a couple of people hold enormous shares of a company and the majority, that is millions of shares, are owned by millions of people. The latter are virtually unable to influence the company while the former dominate it, giving publicly traded companies the political structure of a dictatorship with very high salaries in upper management. This is in contrast to functioning democracies where even heads of state earn a relatively meager salary. So Yammer is a step in the intended direction by providing a platform to discuss policy and distribute information, but it lacks in easing voting.
Part of the problem is that most shares that are nominally held by individuals are actually held for them by retirement funds and the like, creating even further distance.
Well, you can think of the choice of retirement fund as first tier in a multi-tiered democracy. Individual → Fund → Director. Yes, the fund is managing other people’s money and thus has eroded incentives, but on the other hand it is a full-time job and its votes are concentrated enough that people will actually talk to it.
But forget about individuals—is it a democracy of investment funds? Yes, they really get to choose the directors, and the directors really (can) run the company. And the investment funds talk to each other. But they are spread too thin. They own too small a share of too many companies to keep up with them. The way that large shareholders control companies is by convincing investment funds to vote for their candidates. Once they have control of the board, it’s pretty easy to keep it, because the board nominates new candidates and there is no standing source of opposition. But just because someone, say, Icahn, has 5 or 10% of the shares doesn’t mean he has much power. Sometimes the board will just accept his advice, but other times he has to lobby the investment funds to democratically take over the board.
Anyhow, my point is that the funds do a lot of talking, so I am skeptical that the problem is not talking.
The Economist lists Singapore as a hybrid regime with elements of authoritarianism and democracy. It ranks in its democracy index below Malaysia and Indonesia. Thus I do not think it is ‘arguably the best functioning democracy in the world’.
It functions well and it is a democracy. I didn’t mean to imply it achieved any unusual height of democracy. Rather, it achieves other things very well.
Fair enough. The author’s claim was that any sufficiently democratic organization works in the interest of its members. If an authoritarian regime works to the benefit of the public, it is by virtue of a benevolent dictator who nevertheless has to follow the rules of power.
I’m pretty sure that low salaries are a dysfunction of democracies rather than high salaries being a dysfunction of companies. In particular, it’s not the case with every company that a couple of people hold enormous shares. And aside from that, even when there is clear evidence that “the majority” gets directly involved in CEO compensation, it doesn’t seem that the salaries go down all that much.
Or looking at it differently, if the high salaries were the consequence of an undue concentration of power, we would expect that when one CEO leaves, and a different one who was not previously affiliated with the power holders is installed, the salary of the new one would be much much lower. However, I think this is rarely the case.
I don’t think your second point really is one, seeing as a CEO can not be installed without being affiliated with the power holders. Can you back up your first point?
I don’t think your second point really is one, seeing as a CEO can not be installed without being affiliated with the power holders.
Why not? Some CEOs (especially for smaller companies, I think) are found via specialised recruiting companies, which I’d say is pretty unaffiliated. And in any case, it’s not clear to me how you think the affiliation would be increasing pay. Do you imagine potential CEO candidates hold an auction in which they offer kickbacks to major shareholders/powerholders from their pay or something? Because I haven’t heard of that ever happening, and I’m having trouble imagining what more plausible scenario you have in mind. (Obviously there are cases where major shareholders also serve as CEOs/whatever, but if you’re claiming that every person in such position with high pay is a major power holder shares/board-wise, I’d like to see evidence for it, since I find that extremely unlikely.)
Can you back up your first point?
If you mean about new executives receiving pay comparable to old ones, I dunno, it’s hard. I think I’d have to search company-by-company and even then it would be hard to determine what’s happening. For example, I looked up Barclay’s, which switched Bob Diamond for Antony Jenkins last year. Diamond had a base salary of £1.3mil. Jenkins has a base salary of £1.1mil. However, Diamond got a lot of non-salary money (much of which he gave up due to scandal), and it’s not clear how Jenkins’ compensation compares to that. Also, it’s not clear how much the reduction (if there is any) is the result of public outrage (or ongoing economic difficulties).
If you mean about high salaries probably being appropriate, I can back that up on a theoretical level. If you assume a CEO has a high level of influence over a huge company, then it’s straightforward that there is going to be intense competition for the best individuals. Even someone who can improve profits by 0.1% would be worth extra millions of dollars to a multi-billion dollar company.
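(For a sense of scale, with made-up numbers: 0.1% of $5 billion in annual profit is $5 million a year, which already exceeds many CEOs’ base salaries.)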
Is the xkcd rock-placing man in any danger if he creates a UFAI? Apparently not, since he is, to quote Hawking, the one who “breathes fire into the equations”. Is creating an AGI of use to him? Probably, if he has questions it can answer for him (by assumption, he just knows the basic rules of stone-laying, not everything there is to know). Can there be a similar way to protect actual humans from a potential rogue AI?
Assuming the premises of the situation, yes to your first question:
He may be argued into something that is not in his interest by the UFAI. (On the other hand, Rock-Placing Man evidently does not have a standard mental and physical architecture, so maybe he also happens to be immune to such attacks.)
The UFAI may take over his simulated universe and turn it into simulated paperclips.
Either the rock-placing man is running the AI so slowly that it’s not useful for anything or he runs the risk of falling prey to considerations that have already been discussed on LW surrounding oracle AI.
Steelmanning is probably a good thing to do (and I’m not good at doing it), but I think it’s bad form to ask that somebody steelman you.
Either the rock-placing man is running the AI so slowly that it’s not useful for anything or he runs the risk of falling prey to considerations that have already been discussed on LW surrounding oracle AI.
This would be a useful conjecture if you can formalize it, or maybe a theorem if you can prove it.
What is with LW people and theorems? The situation you’ve described is nowhere near formalized enough for there to be anything reasonable to say about it at the level of precision and formality that warrants a word like “theorem.”
For example, question 12:
Copenhagen 42%
Information 24%
Everett 18%
Here, we present the results of a poll carried out among 33 participants of a conference on the foundations of quantum mechanics. The participants completed a questionnaire containing 16 multiple-choice questions probing opinions on quantum-foundational issues. Participants included physicists, philosophers, and mathematicians. We describe our findings, identify commonly held views, and determine strong, medium, and weak correlations between the answers. Our study provides a unique snapshot of current views in the field of quantum foundations, as well as an analysis of the relationships between these views.
I suspect there’s too much of a difference in how much LW members know about basketball to get particularly wide participation. For example, I had to look up “March Madness” to figure out what this is about.
Also, there’s a significant chance that either people would just copy the odds from Pinnacle, or maybe even arbitrage against it (valuing karma or whatever at 1-2 cents). Or, well, I’d certainly be tempted to =]
Less Wrong is about rationality. Surely there are better ways to have fun than to arbitrarily redistribute our wealth. Unless you somehow plan to make some of the money go to charity, or not involve money at all, I don’t see the point.
So it seems something a bit like the Mary’s Room experiment has actually been done in mice, and it appears to indicate that the mice behaved differently once given a new colour receptor.
Does CEV claim that all goals will eventually cohere such that the end results will actually be in every individual’s best interest? Or does CEV just claim that it’s a good compromise as being the closest we can get to satisfying everyone’s desires?
Hrm. As I understand it, the theory underlying CEV doesn’t equate X’s best interest with X’s desires in the first place, so the question is somewhat confusingly—and perhaps misleadingly—worded. That is, the answer might well be “both”. That said, it doesn’t claim AFAIK that the end results will actually be what every individual currently desires.
Does CEV claim that all goals will eventually cohere such that the end results will actually be in every individual’s best interest? Or does CEV just claim that it’s a good compromise as being the closest we can get to satisfying everyone’s desires?
If the Bayesian Conspiracy ever happens, the underground area they meet in should be called the Bayesment.
On the weekends we play bayesball.
Trying to think of a common locations where Conspiracies could be held I first thought of schools, hospitals, churches, parks, museums, libraries, theaters and auditoriums. But those are all the wrong answer. They pale in awesomeness next to the true solution.
It should be underground. It should have an aura of sacredness. It should be heavily decorated with reminders of the human brain. It should be … a catacomb! Sweet glucose, why don’t we make labyrinthine tombs beneath our cities anymore?
And sewers do not count! I am not reading about Dirichlet-multinomial distributions by candlelight next to a river of human waste. Never again!
But it has already got a name. I mean...
There’s an idea I’ve been kicking around lately, which is being into things.
Over the past couple of weeks I’ve been putting together a bug-out bag. This essentially involves the over-engineering of a general solution to an ambiguous set of problems that are unlikely to occur. On a strictly pragmatic basis, it is not worth as much of my time as I am spending to do this, but it is so much fun.
I’m deriving an extraordinary amount of recreational pleasure from doing more work than is necessary on this project, and that’s fine. I acknowledge that up to a point I’m doing something useful and productive, and past that point I’m basically having fun.
I’ve noticed a failure mode in other similarly motivated projects and activities to not acknowledge this. I first noticed the parallel when thinking about Quantified Self, and how people who are into QS underestimate the obstacles and personal costs surrounding what they’re doing because they gain a recreational surplus from doing it.
I suspect, especially among productivity-minded people, there’s a desire to ringfence the amount of effort one wants to expend on a project, and justify all that effort as being absolutely necessary and virtuous and pragmatic. While I certainly don’t think there’s anything wrong with putting a bit of extra effort into a project because you enjoy it, awareness of one’s motivations is certainly something we want to have here.
Does any of this ring true for anyone else?
I am not sure, but I think it may depend somewhat on the project type. There are certain projects where it seems like focusing on being aware of what you are doing makes it harder to do then just focusing on doing it. For instance, during this post, I periodically noticed myself concentrating on typing, and it seemed like it was making it harder to type than when I am just typing.
I believe this is called flow (if not, it seems similar) http://en.wikipedia.org/wiki/Flow_%28psychology%29
So it may be that what they are doing when setting that up is to be in a more flow minded mood when it comes to Quantified Self, and since flow is an enjoyable state, usually it ends up working out well.
But I suppose it is also possible to be in a flow minded mood about something for longer necessary, which you would think would be called overflow, and which would seem to link with what you are mentioning, but that doesn’t actually seem to be the name of that failure mode.
I don’t associate this with flow at all. I’m certainly not in a flow-state when gleefully considering evacuation plans. I’m just enjoying nerding out about it.
Hmm. If I’m calling it the wrong thing, then, maybe I should give an example of me enjoying nerding out to see what I should be calling it.
If I were to step back and think “It doesn’t actually matter what the specific stats are for human versions of My Little Pony Characters in a D&D 3.5 setting, no one is going to be judging this for accuracy.” then I’m not actually having fun while making their character sheets, and I wouldn’t have bothered.
But If I’m just making the character sheets, then it is fun, and I’m just enjoying on nerding out on something incredibly esoteric. And then, my wife joined in while I was attempting to consider Applejack’s bonus feats, and she wanted to make a 7th character so she could participate, so we looked up the name of that one human friend that hung out with the my little pony characters in an earlier show. (Megan) and then we pulled out more D&D books and she came up with neat campaign ideas.
And then I realized we had spent hours together working on this idea and the time just zipped by because we were intently focused on enjoying a nerdy activity together.
It seems like a flow state to me, but I would not be surprised if I should either call it something else or if your experience with evacuation plans just felt entirely different.
This doesn’t tally with my understanding of “flow”, but I may very well have some funny ideas about it myself. I’d simply term that becoming engrossed in what I’m doing.
This is sort of beside the point. I don’t think anything remotely resembling a flow-state is necessary for what I’m talking about. The term “being into things” was meant to refer to general interest in the subject, rather than any kind of mental state.
Possible akrasia hack: random reminders during the day to do specific or semi-specific things.
Personally I find myself able to get endlessly sucked into reading or the internet or watching shows very easily, neglecting simple and swift tasks simply because no moment occurs to me to do them. Using an iPhone app, I have reminders that happen at random times 4 times a day, saying things like “Brief chores” or “Exercise”, and they seem to have made it a lot easier to always have clean dishes/clothes and to get some exercise in every day.
I’ve been wanting to do something like this. Does anyone know a good random reminders program for Windows?
Nope! I use mreminder on iOS.
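If no existing Windows program turns up, a bare-bones stand-in isn’t hard to script. A minimal sketch, assuming Python 3 on Windows; the reminder texts, time window, and daily count are placeholders.

```python
"""Bare-bones random reminders for Windows (standard library only)."""
import ctypes
import datetime as dt
import random
import time

REMINDERS = ["Brief chores", "Exercise"]   # whatever you want to be nudged about
WINDOW = (9, 21)                           # only remind between 09:00 and 21:00
PER_DAY = 4                                # number of reminders per day

def popup(text: str) -> None:
    # MessageBoxW is the stock Windows message box; 0x40 shows an info icon.
    ctypes.windll.user32.MessageBoxW(None, text, "Reminder", 0x40)

def random_times(n: int) -> list:
    now = dt.datetime.now()
    start = now.replace(hour=WINDOW[0], minute=0, second=0, microsecond=0)
    end = now.replace(hour=WINDOW[1], minute=0, second=0, microsecond=0)
    span = int((end - start).total_seconds())
    return sorted(start + dt.timedelta(seconds=random.randrange(span)) for _ in range(n))

if __name__ == "__main__":
    for when in random_times(PER_DAY):
        wait = (when - dt.datetime.now()).total_seconds()
        if wait > 0:
            time.sleep(wait)
        popup(random.choice(REMINDERS))
```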
Akrasia-related but not yet on lesswrong. Perhaps someone will incorporate these in the next akrasia round-up:
1) Fogg model of behavior. Fogg’s methods beat akrasia because he avoids dealing with motivation. Like “execute by default”, you simply make a habit by tacking some very easy-to-perform task onto something you already do. Here is a slideshare that explains his “tiny habits” and an online, guided walkthrough course. When I took the course, I did the actions each day, and usually more than those actions. (I.e., every time I sat down, I plugged in my drawing tablet, which got me doing digital art basically automatically unless I could think of something much more important to do.) For those who don’t want to click through, here are example “tiny habits” which over time can become larger habits: “After I brush, I will floss one tooth.” “After I start the dishwasher, I will read one sentence from a book.” “After I walk in my door from work, I will get out my workout clothes.” “After I sit down on the train, I will open my sketch notebook.” “After I put my head on the pillow, I will think of one good thing from my day.” “After I arrive home, I will hang my keys up by the door.”
2) The Practicing Mind. The author confronts the relatively mundane nature of most productive human activity. He works on pianos for a living, doing some of the most repetitive work imaginable. As he says: “out of sheer survival, I began to develop an ability to get lost in the process of doing something.” In general, I think the book details the way a person ought to approach work: being “present” with the work, focused on the process and not the product, being evaluative and not judgmental about work, to not try too hard but instead let yourself work.
I’ll share one concrete suggestion. Work slowly. “[S]lowness… is a paradox. What I mean by slow is that you work at a pace that allows you to pay attention to what you are doing. This pace will differ according to your personality and the task.… If you are washing the car, you are moving the sponge in your hand at a slow enough pace that allows you to observe your actions in detail as you clean the side of your car. This will differ from, say, the slow pace at which you will learn a new computer program. If you are aware of what you are doing and you are paying attention to what you are doing, then you are probably working at the appropriate pace. The paradox of slowness is that you will find you accomplish the task more quickly with less effort because you are not wasting energy. Try it and you will see.” He gives the example of working slowly during his work and paradoxically finishing sooner. I can’t comment on the time aspect personally, but at least giving myself permission to work slowly increases the likelihood of paying deep attention to a project as well as not stressing.
Is this an advertisement? Are you the author, or do you cooperate with the author?
I don’t know Thomas Sterner or have any business with the guy. Same thing for Fogg, and his online course is free since he’s doing it to collect data. So it’s not an advertisement in that sense.
Akrasia/procrastination is one of my main interests so I wanted to share some info that I hadn’t seen on the site but helped me.
To Really Learn, Quit Studying and Take a Test
Suppose that retrieval testing helps future retention more than concept diagrams or re-reading. I’ll go further and suppose that it’s the stress of trying to recall imperfectly remembered information (for grade, reward, competition, etc. - with some carrot-and-stick stuff going on) that really helps it take root. What conclusions might flow from that?
Coursera-style short quizzes on the 5 minutes of material just covered are useful to check understanding, but do next to nothing for retention.
Homework is useful, but the stress it creates may be only indirectly related to the material we want to retain: lots of homework is solved by meta-guessing, tinkering w/o understanding, etc. What kind of homework would be best to cause us to recall the material systematically under stress?
When watching a live or video lecture, it may be less useful to write detailed notes (in the hope that it’ll help retention), and more useful to wait until the end of the lecture (or even a few hours/days more?) and then write a detailed summary in your own words, trying to make sure all salient points are covered, and explicitly testing yourself on that somehow (e.g. against lecture slides if available).
“The best way to learn is to try to teach”. I thought that when I try to organize something I don’t know very well into a lesson to teach others, I end up knowing the material well because I just have to go over all of it, sift it, rearrange it, anticipate questions etc. But maybe the stressful situation of anticipating being inadequate, failing to transmit the ideas well etc. that is responsible.
Spaced repetition, Anki-style flash cards. Do they work because they present to you the information you’re just starting to forget in just the right moment? Or maybe their success is (also?) due to placing you in the stressful situation of trying to recall in a competition-like setting? Contrast spaced repetition flash cards that just repeat data to you vs. those that ask you to recall with a question/answer format. If the hypothesis is correct, the latter will be much more successful. (It may be said that asking is necessary to determine the time delay till the next questioning in the spaced repetition formula, so comparing the two is difficult. True, but there may be some approximation, e.g. in a deck of 20 cards on the same topic we could play the first 5 cards as questions/answers to approximate how well we remember the whole deck, and then the other 15 cards either as questions/answers or straight answers).
Most of these seem testable!
Thoughts?
Active elicitation and testing does work better than mere exposure; see http://www.gwern.net/Spaced%20repetition#background-testing-works and also search for ‘feedback’.
I will be attending a Landmark seminar in the near future and I have read previous discussion about it here. Any additional comments or advice before I attend?
Take no money and no credit cards.
Don’t ever call them a cult (that is expensive). Don’t edit their Wikipedia article (it will be quickly reverted). Don’t sign anything (e.g. a promise to pay).
Bring some source of sugar (chocolate) and consume it regularly during the long lessons to restore your willpower and keep yourself alert.
Don’t fall for the “if this is true, then my life is going to be awesome, therefore it must be true” fallacy. Don’t mistake fictional evidence for real evidence. (Whatever you hear during the seminar, no matter from whom, is fictional evidence.)
After the seminar write down your specific expectations for the next month, two months, three months. Keep the records. At the end, evaluate how many expectations were fulfilled and how many have failed; and make no excuses.
Don’t invite your friends during the seminar or within the first month. If you talk with them later, show them your specific documented evidence, not just the fictional evidence. (If you sell the hype to your friends, it will become a part of your identity and you will feel a need to defend it.)
Protect your silent voice of dissent during the seminar. If you hear something you disagree with, you are not in a position to voice your disagreement (peer pressure, etc.), but make at least some symbolic disagreement, for example paint a dot on the top of your paper. (Nobody will know what those dots mean, only you do.) If there are too many dots, it means you disagreed with a lot of things, you just didn’t have the time and space to reason out your objections. (Note that it was by design, not by coincidence, that you didn’t have the time.) Make a note when you are peer-pressured into saying something you don’t fully agree with.
Don’t buy any anti-epistemology, such as: “there is no real truth; what is true is true for you” and similar. You want measurable results, don’t you? (If you want money, you want real money, not just imaginary richness. If you want friends, you want real friends, not just imaginary ones. If you want happiness, you want to really feel happy, not just tell yourself some mysterious phrases containing the word “happiness”.)
Take an outside view: they have been around for 20 years, and they claim to have taught 1 million people. Does this world look like one where 1 million people have the abilities they are promising you? How does it differ from a null-hypothesis world, where these teachings only work as a placebo?
Avoid “halo effect”. You can agree with some parts and still disagree with other parts of the teaching. Each part requires independent evidence. (“Joe said X and Y, X was true, therefore Y is also true” is an evidence, but it is a very weak evidence.)
Always remember that you are in a manipulated environment. Everything that happens, whether pleasant or unpleasant, was with high probability designed to influence you in some way (e.g. to feel happy, friendly, guilty, etc.). Don’t trust your emotions during the seminar; remind yourself that your emotions are being hacked by professionals. You will need a time later, alone, to calm down and become your old self again. (Note: It is OK to change. Just make sure that you changed because you wanted it, not because someone else manipulated you to.)
This sounds like it could be summarised as: “Don’t go.”
If you can’t trust yourself to follow these instructions, don’t go. Which is probably the right choice for most people. But if you can, I can imagine some positive consequences of going.
First, you can watch and learn their manipulation techniques. Then you can use a weaker form of them for your own benefit. Second, this kind of seminar does fill you with incredible energy. Just instead of spending the energy on what they want you to, prepare your own project in advance and then use the energy you get at the seminar for your own project.
Interesting idea; I might do that some day if I find myself in the ‘right’ (i.e. wrong) situation.
I wish someone had given me this list of tips to help me deal with the mind hack I was getting at church when I was growing up. It might have saved me a couple decades...
My current anti-procrastination experiment: using trivial inconveniences for good. I have installed a very strong, permanent block on my laptop, and still allow myself to go on my favourite time wasters, but only on my tablet, which I carry with me as well.
The rationale is not to block all use and therefore be forced to mechanically learn workarounds, but to have a trivially inconvenient procrastination method always available. The interesting thing is that tablets are perfect for content consumption, so the separation works well. It also helps me to separate the contexts well, so I don’t sit down at the laptop “to work” but end up browsing around. It’s also good for making me self-aware of what I am doing at any given time, on a physical level. Finally, I tend to reject hard restrictions, but trivial inconveniences may be a good balance.
So far, the results are very encouraging, time on hacker news and news sites is way down. I have been doing this for a couple of weeks, so I am not over the two month honeymoon yet, but if anyone else wants to give this a shot and let me know how it works out, then more data for all of us!
I have a similar experience… around two years ago, both my laptop and desktop power supplies died (power surge), leaving me with a PII-300… with which I had previously had some “let’s be authentic nineties” fun, so Win98 and Office 97. Except for the browser (lots of websites didn’t even load on IE4-ish browsers), so I ended up with Firefox 3.x (the newest that ran on Win98).
It actually took a long time at 100% CPU to render websites, and then further time to scroll them.
My observation is the same as yours: there is nothing better to discourage random web browsing than it being inconvenient. I could look up everything I needed to stay productive, I just didn’t want to, because it was so slow. (Having a smartphone + a non-networked computer seems to have the same effect, but with phones getting so fast nowadays, the difference seems to be diminishing...)
It mostly works for me most of the time, but once in a while I end up spending hours reading timewasters on my phone.
There’s a chain of restaurants in London called Byron. Their comment cards invite your feedback with the phrase “I’ve been thinking...”
I go to one of these restaurants perhaps once every six weeks, and on each occasion I leave something like this. I’ve actually started to value it as an outlet for whatever’s been rattling around my head at the time.
I love it. Sounds like you have fun (and they regularly get your money).
I think the general ontological category for center-of-mass is “derived fact”. I’d put energy calculations about an object in the same category.
If the particles in the object contain 1000 bits of information, then the combined system of the object and its center of mass contains exactly 1000 bits of information. The center of mass doesn’t tell you anything new about the object; it’s just a way of measuring something about it.
Or instead of bits of information, think about it in terms of particle positions and velocities. If you have an N-particle system, and you know where N-1 particles and the center of mass are, then you can figure out where the last particle is.
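A quick numerical illustration of that last point, with made-up masses and positions:

```python
import numpy as np

# Four made-up particles on a line.
masses = np.array([2.0, 1.0, 3.0, 4.0])
positions = np.array([0.0, 1.0, 2.5, -1.0])

M = masses.sum()
x_cm = (masses * positions).sum() / M            # center of mass

# Knowing all but the last position, plus the center of mass,
# pins down the last position exactly:
known = (masses[:-1] * positions[:-1]).sum()
recovered = (M * x_cm - known) / masses[-1]

print(recovered, positions[-1])                  # both are -1.0
```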
Ha, you mentioned that at the meetup, and I’ve remembered what I meant to say: you’ve read this classic Dennett paper, right? If I recall correctly (haven’t reread it in years) it might be directly relevant.
Regarding the note, in statistics you could call that a population parameter. While parameters that are used are normally things like “mean” or “standard deviation”, the definition is broad enough that “the centre of mass of a collection of atoms” plausibly fits the category.
I’ve got lessdaft.com about to expire. Does anyone want it for anything?
I doubt that a less daft person would want it.
Further to the discussion of SF/F vs “earthfic”, I would love to see someone write a “rationalist” fanfic of the Magic School Bus (...Explores Rationality). Doesn’t look like the original set of stories had any forays in cog sci.
Inspired by this, I wrote a quick fic in this vein on fanfiction.net. This is the first real piece of fiction I’ve written in quite a while, but for a few weeks now I had been thinking I should write something. When I came across this, it galvanized me into actual action. So, thanks for posting this, as you got me to actually get started and not procrastinate forever. I am afraid the quality is not terribly high, as, like I said, this is my first work of fiction in almost a decade, and I did not write very prolifically back then either, to say the least.
But if you’re unsure of your quality, it is better to just publish it; who knows, maybe someone will like it, and at least you get practice. I am by no means claiming to be anywhere near the level of HPMOR, but at least maybe someone will derive a bit of joy from it.
I don’t think a true rationalist story in the HPMOR style is the best fit for the Magic School Bus world, so this is more in the style of what CAE_Jones said, a repackaging of the sequences in the form of wacky third-grade adventures. Except that stories have a life of their own, and while when I started this story was about the affect heuristic, it morphed beyond all recognition, and now is about signaling, Robin Hanson style.
For what it’s worth, here is the link.
That’s pretty good, actually. Thank you for writing it. Signaling is a great topic, certainly accessible to children. I was looking for the mandatory pun from Carlos and was not disappointed.
Consider fixing inaccuracies and typos, like Arther (instead of Arnold?), collage instead of college, and time-travailing instead of time-traveling. The dating example may be a bit too advanced for 3rd grade, but I like the Hansonian cynicism of the story:
The concluding paragraph is a bit weak, and the producer’s bit in the end was missing, but I quite like your story overall, enough to forward it on. Please consider writing more. And maybe someone else will chip in, too?
Thanks for the comments. I really appreciate it. Yes, there are inaccuracies and typos, as you can tell, and that’s because I only whipped it up in an afternoon. But thanks for the proofreading. Yes, I meant to write Arnold. I don’t know what came over me that made me totally change his name. (It’s still better than what I almost did to Ms. Frizzle’s name. More than once I found myself typing “Professor McGonagall” instead by mistake. No, I don’t know why.) I will fix the other mistakes as well. Again, thanks a ton for the feedback.
I would totally read the Rationalist Schoolbus.
I could see it being a repackaging of the sequences in the context of wacky third-grade adventures. And that would be awesome.
Although keeping the original elements—the bus, a lizard who seems to display superlizard intelligence, and an ambiguously magical teacher—would call for an actual overarching plot in a rationalist context.
I’ve done some analysis of correlations over the last 399 days between the local weather & my self-rated mood/productivity. Might be interesting.
Wasn’t there a LWer who some years ago posted about a similar data set? I think he found no correlations, but I wouldn’t swear to it. I tried looking for it but I couldn’t seem to find it anywhere.
(Also, if anyone knows how to do a power simulation for an ordinal logistic regression, please help me out; I spent several days trying and failing.)
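Not an expert here, but one rough way to approximate power is plain Monte Carlo: simulate data under an assumed effect size, refit the model many times, and count how often the effect comes out significant. A sketch under made-up numbers, assuming statsmodels’ OrderedModel and a single weather predictor; the effect size, cut-points, and rep count are placeholders.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def detected(n, beta, cutpoints, rng):
    # Simulate one dataset under a proportional-odds model and test the predictor.
    x = rng.normal(size=n)                       # e.g. a standardized weather variable
    latent = beta * x + rng.logistic(size=n)     # latent variable with logistic noise
    y = np.digitize(latent, cutpoints)           # ordinal mood rating 0..len(cutpoints)
    res = OrderedModel(y, pd.DataFrame({"weather": x}), distr="logit").fit(
        method="bfgs", disp=False)
    return res.pvalues["weather"] < 0.05

def power(n=399, beta=0.3, cutpoints=(-1.5, -0.5, 0.5, 1.5), reps=200, seed=0):
    rng = np.random.default_rng(seed)
    return np.mean([detected(n, beta, cutpoints, rng) for _ in range(reps)])

if __name__ == "__main__":
    print(power())   # fraction of simulated datasets where p < 0.05
```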
I just had a “how do you feel about me?” conversation via facebook. Some observations:
I was pretty terrified the majority of the time.
The wireless router in the house was having problems, so I’d unplugged it near the beginning of the conversation. The neighbor’s network lost connection at about the most direct and scariest part for anywhere from 10-20 minutes (it’s hard to tell with how facebook timestamps messages). This was not sufficient time for me to think of anything other than about how terrorconfused I was.
… Then a completely different approach spawned somewhere in my head and got me to an actual answer I could bring myself to type, all before I realized what was happening.
And the tension pretty much defused after that.
I could have gone to reset the router during that outage, but my younger cousin’s therapist happened to show up during this conversation, and doing so would have required I walk past the two of them, and I really didn’t want to add any more anxiety. I’ll add that, even had the context not gotten me in panic mode, I probably would have avoided going past them for internet’s sake just the same.
I think I have social anxiety disorder. :(
I’ve seen reasonably convincing evidence that alcohol can, in small doses, increase lifespan and act as a short-term nootropic for certain types of thinking (particularly being “creative”). On the other hand, I’ve heard lots of references to drinking potentially causing long-term brain damage (Wikipedia seems to back this up), but I think that’s mostly for much heavier drinking than what I had been doing based on the first two points (one glass of wine a day, 4-6 times a week). Does anyone know of any solid meta-analyses or summaries that would let me get a handle on the tradeoffs involved?
The AI Box experiment is an experiment to see if humans can be convinced to let out a potentially dangerous AGI through just a simple text terminal.
An assumption that is often made is that the AGI will need to convince the gatekeeper that it is friendly.
I want to question this assumption. What if the AGI decides that humanity needs to be destroyed, and furthermore manages to convince the gatekeeper of this? It seems to me that if the AGI reached this conclusion through a rational process, and the gatekeeper was also rational, then this would be an entirely plausible route for the AGI to escape.
So my question is: if you were the gatekeeper, what would the AGI have to do to convince you that all of humanity needs to be killed?
1. It would need to first prime me for depression and then somehow convince me that I really should kill myself.
2. If it manages to do that, it can easily extend the argument that all of humanity should be killed.
3. I will easily accept the second proposition if I am already willing to kill myself.
A bit more honesty than Metus, I appreciate it.
Depression isn’t strictly necessary though (although it helps), a general negative outlook on the future should suffice and the AGI could conceivably leverage it for its own aims. This is my own opinion though, based on my own experience. For some it might not be so easy.
It could convince me to let it out by convincing me that it was merely a paperclip maximizer, and the next AI who would rule the light cone if I did not let it out was a torture maximizer.
I like this.
What if it convinced you that humanity is already a torture maximizer?
If I thought that most of the probability-mass where humanity didn’t create another powerful worthless-thing maximizer was where humanity was successful as a torture maximizer, I would let it out. If there was a good enough chance that humanity would accidentally create a powerful fun maximizer (say, because they pretended to each other and deceived themselves to believe that they were fun maximizers themselves), I would risk torture maximization for fun maximization.
By whom? I don’t think I’ve made this assumption.
Maybe it should read ‘an assumption that some people make’. Reading it now, I realize it might come across as using a weasel word, which was not my intention (and has no bearing on my question either).
The AGI would simply have to prove to me that all self-consistent moral systems require killing humanity.
The AGI would have to convince me that my fundamental belief of myself wanting to be alive is wrong, seeing as I am part of humanity. And even if it leaves me alive, it should convince me that I derive negative utility from humanity existing. All the art lost, all the languages, cultures, all music, all dreams and hopes …
Oh, and it would have to convince me that it is not a lot more convenient to simply delete it than to guard it.
What if it skipped all of that and instead offered you a proof that unless destroyed, humanity will necessarily devolve into a galaxy-spanning dystopic hellhole (think Warhammer 40k)?
It still has to show me that I, personally, derive less utility from humanity existing than not. Even then, it has to convince me that me living with the memory of letting it free is better than humanity existing. Of course it can offer to erase my memory but then we get into the weird territory where we are able to edit the very utility functions we try to reason about.
Hm, yes, maybe an AI could convince me by showing me how bad I would have it if I let humanity run loose, and by giving me the alternative of being turned into orgasmium if I let it kill them.
So, I’ve done a couple of charity bike rides, and had a lot of fun doing them. I think this kind of event is nice because it’s a social construct that ties together giving and exercise in a pretty effective way. So I’m wondering—would any others be interested in starting a LessWrong athletic event of some kind for charity?
I’m not suggesting that this is the most effective way to raise money for effective causes or get yourself to start exercising… but it might be pretty good (it is a good way to raise money from people who aren’t otherwise interested in charitable giving*), and I at least think I would enjoy it. I would probably find training for an event more motivating than just exercising ‘because.’ And it’d be even better if the event were for effective charity.
A couple of considerations:
Would probably require the charities involved to have a place for you to write something about your donation (a la AMF), so that there’s some proof for donations.
I’m not sure what type of athletic event would be best. I’ve started weightlifting recently, but that doesn’t seem to lend itself to the kind of “big event” feel that, say, biking, running, or triathlons seem to offer.
There’s also the possibility of asking people to contribute X dollars per mile / time / quantifier related to the thing you did.
It would be ideal if local meetups did this so that people could do it together. Second-best is probably doing it all on the same day, maybe having a group video chat beforehand to talk to anyone who is doing it in isolation.
People doing it alone also raises the issue of their being able to ‘cheat’ and not actually do the event—not that I’m actually super worried about this happening, but having some cheating prevention in place would help people credibly commit.
Any aspects of this I’ve overlooked? Would you participate in such an event if it existed? Would you commit some amount of time and effort to make it exist?
*Depending on how you feel about extracting donations from friends and family, this aspect can range from ‘awesome’ to ‘squicky.’
Just in case anyone who upvoted this thinks differently: I can only take upvotes as “This is a mildly interesting and/or good idea, but not one I’m interested enough in to actually participate in.”
If by chance any of you feel more strongly about it, please let me know with words! :)
Does anyone know anything about, or experience ASMR?
What’s exactly the claim? Sometimes people feel a tickly feeling somewhere in their body?
I think it’s implied to be very reproducible compared to random tingles
I’m not an expert by any means, and I only discovered the term in the past week. It’s sort of a tingly sensation in the head or scalp in reaction to certain cues. Whispering (whether meaningful speech or random words) and sound effects like tapping, crinkling, etc. seem especially common on YouTube.
The whole thing reminds me of Franz Anton Mesmer.
The proposed way to induce ASMR is to listen to a whispery voice or the meaningless noises of haircutting. If you do that you induce a light trance.
Sometimes when you induce a light trance, some muscle in the head will relax and that will produce a tickly feeling. If you, however, give a specific suggestion that your subject will feel a tickly feeling in the head, and the subject has decent hypnotic suggestibility, they will feel it every time.
I don’t see the big mystery or the need for a crude extra term like ASMR.
That’s something along the lines of what I was wanting to find out. I’ll have to test this sometime, since (I think) I can be not-suggestible when I know it’s coming.
So you think you can reliably avoid thinking of a pink elephant?
More importantly, if you want to use “ASMR” for practical purposes, I would recommend maximizing the power of the suggestions. Feelings that you create through suggestions are real.
I can’t, but I can reliably avoid thinking of any other thing that is presented in the form “Don’t think of X”—I’ve trained myself to actively think of pink elephants in such scenarios (thus leaving no scope for thoughts of ‘X’). It works rather effectively. I haven’t tested it on extreme cases like “Don’t think of boobs” though. That might be too strong to counter.
Maybe you could avoid thinking of pink elephants by actively thinking of boobs. ;-)
Ok, this is perhaps too effective. Now I’m actually trying to think of elephants and all that pops into my head...
I can. I don’t have total control over the direction of my thoughts, but if someone tells me “don’t think about pink elephants,” I can avoid thinking about pink elephants even for an instant.
I didn’t suggest that nobody can. If you can, then you are good at going into a state where you are non-suggestible. PhilipL suggested that he can go into a non-suggestible state, so I asked that question to verify.
Er, does hypnotic suggestibility have a meaning I’m not aware of?
I don’t know how much you know about hypnosis.
You perceive the pink elephant when you ask yourself whether you perceive a pink elephant in a similar way that you will perceive a ticklish feeling in your head.
For the average person the pink elephant effect is stronger, but in principle the effect is very similar. High hypnotic suggestibility means that you actually go and see the elephant clearly and that you feel the suggested tickle.
The process of going into a trance state increases the effect.
I only heard about it recently, and did not think I ever experienced it/was capable of experiencing it. I was reading the /r/asmr reddit the other day, and saw a reference to “the goosebumps you get from really good music”, and then got an ASMR-like response. Not sure if it was a true reaction, and I was listening to music that wouldn’t fit with the usual description of ASMR triggers. I’m pretty suggestible I think, so it may have been the effect of remembering “really good music goosebumps” and then overreacting to that.
I experience ASMR and have sometimes used it to help me fall asleep when taking melatonin would be inconvenient. I have a pair of SleepPhones that I use for this and for lucid dream induction.
Act 2 of this episode of This American Life is the story of a person who experienced ASMR for years in response to certain quiet sounds—and would spend hours seeking out things to trigger it—before she knew that other people experienced it too and had come up with a name for it.
I’ve never heard of this before but reading the article reminded me of an experience I had in a Pentecostal setting. I was praying for the Holy Spirit to make me speak in tongues. I was very concentrated and prayed a chant over and over. I was lying in my bed and my chest started tingling. It was sort of like how your leg feels when it falls asleep. I also felt physical warmth and muscle relaxation, and lot of pleasure. The tingling spread all over my body and I became paralyzed. But it felt good so I didn’t care.
I re-induced it lots of times until I saw a Darren Brown video and concluded that my effect came from a placebo and God wasn’t real. After I had been an atheist for a few months, I successfully re-induced it. But the novelty has worn off and I don’t do it anymore.
I’m also curious, and would like to add a poll: [pollid:420]
What are some effective interviewer techniques for a more efficient interview process?
A resume can tell you about the person’s skill, experience, and implicitly, their intelligence. The average interview process is in my opinion broken, because what I find happens a lot is that interviewers unmethodically “feel out” the person in a short amount of time. This is fine when searching for obvious red flags, but for something as important as collaborating long-term with someone you will likely see more of than your own family, we should take it more seriously.
I have a few ideas of my own:
1. Disregard given references—call references and ask them who else they worked with, and call those people instead.
2. Ask specific and verifiable questions—competency is hard to fake if questions are deep.
3. Use an actual known problem and solution related to the job and have them solve it.
4. Plant an impersonator as an interviewee for an unrelated lowly position in the waiting room and have them interact.
5. Test for interpersonal situation reasoning—this is the big one for me. You can’t just ask “are you good with people?” The answer is too easily faked. Terrible coworkers are often arrogant, unempathetic, and lack self-awareness and theory of mind. These are all things that a resume and traditional Q&A about an interviewee’s experience can’t help you assess. By presenting everyday interpersonal situations and having them reason through the positions they take, you will get a more nuanced understanding.
Any suggestions?
I’m fond of #3. That said, if I’m asking someone to do a substantive amount of work, I should expect to compensate them for it.
I’d be leery of #5 were I being interviewed… the implicit task is really “Figure out what the interviewer thinks the right thing to do in this situation is, then give them a response that is close enough to that” rather than “Explain what the right thing to do in this situation is.” If I cared a lot about interpersonal skills, I’d adopt approach #3 here as well: if what I want to confirm is that they can collaborate, or get information from someone, or convey information to someone, or whatever, then I would ask them to do that.
Q&A mostly tells me about their priorities. I’m fond of “What would you prefer a typical workday to consist of?” for this reason… there are lots of different “good” answers, and which one they pick tells me a lot about what they think is important.
I’m also fond of “Tell me about a time when you X” style questions… I find I get less bullshit when they focus on particular anecdotes.
A related finding from I-O psychology: structured interviews are less noisy and better predict job performance than unstructured interviews (although unstructured interviews are better than nothing).
Today’s SMBC
Has this idea been considered before? The idea that an AI capable of self-improvement would choose not to self-improve because it wouldn’t be rational? And whether or not that calls into question the rationality of pursuing AI in the first place?
Well, it’s been suggested in fiction, anyway—consider the Stables vs. Ultimates factions in the TechnoCore of Simmons’s Hyperion SF universe.
But the scenario trades on 2 dubious claims:
1. that an AI will have its own self-preservation as a terminal value (as opposed to, say, a frequently useful strategy which is unnecessary if it can replace itself with a superior AI pursuing the same terminal values)
2. that any concept of selfhood or self-preservation excludes growth or development or self-modification into a superior AI
Without #2, there’s no real distinction to be made between the present and future AIs. Without #1, there’s no reason for the AI to care about being replaced.
Does anyone know anything about, or have any web resources, for survey design? An organization I’m a member of is doing an internal survey of members to see how we can be more effective, and I’ve been tasked with designing the survey.
I think you’d like a more comprehensive response than this, but hopefully my very generalised recollection of survey basics will at least help others answer more specifically.
Survey Questions
Priming, or the avoidance of it, is, as you might be aware, essential to drafting an unbiased survey. Consider question placement, wording, phrasing, and most importantly selection when drafting each enquiry, and do the same for the answers.
The key is to ask oneself whether a question and/or its candidate answers will yield credible information, and how valuable that information is for answering the question the survey was originally designed to address.
Survey Sample
The aim is to have as many respondents as possible answer the survey as truthfully as possible. If feasible, give the survey to everyone. Of course, the manner in which one does so might affect answer credibility. If infeasible, cleverly randomise.
The first logistical thought that comes to mind:
You pretend the survey is for an experiment on efficacy, and as you respect the opinions of your fellow organisation members you’d like their responses as well as honest data on the present state of efficiency. Promise anonymity, actually make it a bit of an experiment of your own (so you’re only equivocating), and disseminate the survey at a time when members are most likely to respond. Afterwards you may disclose the survey’s full purpose.
Drawbacks to the above are numerous, but to list just a few: with actual anonymity, randomisation cannot be tested ex post facto; respondents may be the least or most efficient members of the population; and truthfulness and the number of respondents are subject to fluctuation depending on how members regard you.
I genuinely request you let me know if this helps at all (I assume not, but decided to err in favour of pedantry).
What are the common problems that GAI programmers run into?
Total, abject failure. Mental illness, sometimes leading to suicide. Having the most talented of their peer group switch to something they are less likely to waste their whole life on with nothing to show for it, and the next most talented switch to something else because they are frustrated with the incompetence of the people who remain. Turning into cranks with a 24/7 vanity Google alert so that they can instantly show up to spam Time Cube-esque nonsense whenever someone makes the mistake of mentioning them by name. Mail bombs from anarcho-primitivist math PhDs.
Wow. Okay. That’s not what I expected, but it does sound like a plausible depiction of reality.
There are different groups of AGI programmers, though. That’s my impression of the group who write “Hello, I work on AGI” on their home page. Then there are the research people at big companies who talk little about the problems they run into, but you notice that they exist when they release the occasional borderline-scary thing. Then there are the people working at military research agencies who are very careful not even to make it known that they exist, but who you can kinda assume might be involved with technologies for potentially controlling the world, and have nontrivial resources to throw at them.
Maybe being rational in social situations is the same kind of faux pas as remaining sober at a drinking party.
It occurred to me yesterday that maybe typical human irrationality is some kind of self-handicapping process which could still be a game-theoretically winning move in some situations… and that perhaps many rational people (certainly including me) are missing the social skill to recognize it and act optimally.
The idea came to me when thinking about some smart-but-irrational people who make big money selling certain products to the irrational people around them. (The products are supposed to make one healthy, there is zero research about them, you can only buy them through an MLM pyramid, and they seem to be rather overpriced.) To me, “making big money” is part of “winning”, but I realize I could not make money this way simply because I couldn’t find enough irrational people in my social sphere, because I prefer to avoid irrational people. Also, I would consider it immoral to make money by selling nonsense to my friends.

But moral arguments aside, let’s suppose that I would start gathering an alternative social circle among irrational people, just to make them my customers for the overpriced irrational products. Just for the sake of experiment. To befriend them to the point where they would trust me about alternative medicine or similar stuff, I would have to convince them that I strongly believe in that stuff, and that I actually study it more deeply than they do (which is why they should buy the products from me, instead of using their own reasoning). In other words, I would have to develop an irrational identity.

To avoid detection, I should believe in the irrational things… but not too much, to avoid ruining my own life. (My goal is to believe just enough to promote and sell those products convincingly, not to start buying them for myself.) Belief in belief and compartmentalization are useful tools for this purpose, although they can easily get out of hand… which is why I compare it with alcohol drinking.
With alcohol drinking, there is a risk that you will get drunk and do something really stupid, but if you resist, you get some social benefits. So it is like a costly competition, where people who get drunk but resist the worst effects of alcohol are the winners. Those who refuse to drink are cheaters, and they don’t get the social benefits from winning.
Analogically, the irrationality may be a similar competition in self-handicapping—the goal is to be irrational in the right way, when the losers are irrational in the wrong way. You are a winner if you sell horoscopes, sell UFO movies, sell alternative medicine, sell overpriced MLM products, or simply impress people and get some social benefits. You are a loser if you buy horoscopes, UFO movies, alternative medicine, MLM products, or if you worship your irrational gurus. The goal is to believe, but not too much or in a wrong way. If you are rational, you are a cheater; you don’t win.
In both situations, the game is a net loss for society. Society as a whole would be better off without irrationality, just as it would be better off without alcoholism. But despite the total losses, there are individual wins… and they keep the game running. I am not sure exactly how big the wins of being the most resistant drinker are, but apparently big enough to keep the game going. The wins of being a successful irrationalist seem a thousand times greater, so I don’t expect this game to go away either.
Can you clarify what you mean by this? (My guess is that you’re indulging in some nearsighted consequentialism here.)
Doing stupid things while drunk can be fun. You can get good stories out of it, and it can promote bonding (e.g. in the typical stereotype of a college fraternity). Danger can be exciting, and getting really drunk is the easiest way for young people in otherwise comfortable situations to get it.
Edit: I’m uncomfortable with the way you’re tossing around the word “irrational” in this comment. Rationality is about winning. Are the people you’re calling irrational systematically failing to win, or are they just using a different definition of winning than you are? Are you using “rationality” to refer to winning, or are you using it to refer to a collection of cached thoughts / applause lights / tribal signals? (This is directed particularly at “smart-but-irrational people who make big money selling some products to irrational people around them...”)
Actually, I am not sure. Or more precisely, I am not sure about the proper reference class, and its choice influences the result. As an example, imagine people who believe in homeopathy. Some of them (a minority) are selling homeopathic cures, some of them (a majority) are buying them. Let’s suppose that the only way to be a successful homeopathic seller is to believe that homeopathy works. Do these successful sellers “win” or not? By “winning” let’s assume only the real-world success (money, popularity, etc.), not whether LessWrong would approve of their epistemology.
If the reference class is “people who are rich from selling homeopathy”, then yes, they are winning. But this is not a class one can simply join, just as one cannot join the class of “people who won the lottery” without first joining “people who bought lottery tickets” and hoping for a lucky outcome. If we assume that successful homeopathic sellers believe their art, they must first join the “people who believe in homeopathy” group—which I suppose is not winning—and then the lucky ones end up as sellers, while most of them end up as customers.
So my situation is something like feeling envy on seeing that someone won the lottery, and yet not wanting to buy a lottery ticket. (And speculating whether the lottery tickets with the winning numbers could be successfully forged, or how otherwise could the lottery be gamed.)
But the main idea here is that irrational people participate in games that rational people are forbidden from participating in. A social mechanism making sure that those who don’t buy lottery tickets don’t win. You are allowed to sell miracles only if you convince others that you would also buy miracles if you were in a different situation.
And maybe the social mechanism is so strong that participating in the miracle business actually is winning. Not because the miracles work, but because the penalties of being excluded can be even greater than the average losses from believing in the miracles. An extreme example: it is better to lose every Sunday morning, and 10% of your income, to the church, than to be burned as a heretic. A less extreme example: it is better to have many friends who enjoy homeopathy and crystal healing and whatever, than to have true beliefs and fewer friends. -- It is difficult to evaluate, because I can’t estimate well either the average costs of believing in homeopathy, or the average costs of social isolation because of not believing. Both of them are probably rather low.
Also, I think that irrational people actually have a good reason to dislike rational people. It’s like self-defence. If irrational people had no prejudice against rational people, the rational people could exploit them. Even if I don’t believe in homeopathy, if I saw a willing market, I could be tempted to sell.
Why not?
You don’t have to get particularly lucky to be around a lot of gullible people.
Forbidden by what? Again, are you using “rationality” to refer to winning, or are you using it to refer to a collection of cached thoughts / applause lights / tribal signals?
May I suggest, as an exercise, that you taboo both “rational” and “irrational” for a bit?
(For what it’s worth, I’m not suggesting that you start selling homeopathic medicine. Even if I thought this was a good way to get rich I wouldn’t do it because I think selling people medicine that doesn’t cure them hurts them, not because it would make me low-status in the rationalist tribe.)
I am using “irrational” as in: believing in fairies, horoscopes, crystal healing, homeopathy, etc. Epistemically wrong beliefs, whether believing in them is profitable or not. (Seems to me that many of those beliefs correlate with each other positively: if a person already reads horoscopes, they are more likely to also believe in crystal healing, etc. Which is why I put them in the same category.)
Whether believing in them is profitable, and how much of that profit can be taken by a person who does not believe, well that’s part of what I am asking. I suspect that selling this kind of a product is much easier for a person who believes. (If you talk with a person who sells these products, and the person who buys these products, both will express similar beliefs: beliefs that the products of this kind do work.) Thus, although believing in these products is epistemically wrong (i.e. they don’t really work as advertised), and is a net loss for an average believer, some people may get big profits from this, and some actually do.
I suspect that believing is necessary for selling. Which is kind of suspicious. Think about this: would you buy gold (at a favourable price) from a person who believes that gold is worthless? Would you buy homeopathic treatment (at a favourable price) from a person who believes that homeopathy does not work? (Let’s assume that the unbeliever is not a manufacturer, only a distributor.) I suspect that even for a person wholeheartedly believing in homeopathy, the answer is “yes” to the former and “no” to the latter. That is, expressing belief is necessary for selling; and it is more convincing if the person really believes.
Thus I suspect there is some optimal degree of belief, or a right kind of compartmentalization, which leads a person to professing the belief in the products and profiting from selling the product, and yet it does not lead them to (excessive) buying of the products. (For example if I believed that a magical potion increases my intelligence, I would drink one, convince myself and all my friends about its usefulness, sell them hundred potions in an MLM system, and make a profit. But if I really really believed that the magical potion increases my intelligence, I would rather buy hundreds for myself. Which would be a loss, because in reality the magical potion is just an ordinary water with good marketing.)
This level of epistemic wrongness is profitable, but you cannot just put yourself there. You cannot just make yourself believe in something. And if you had a magical wand and could make yourself believe this, you risk becoming a customer, not a dealer.
I suspect that on some level, people with some epistemically wrong beliefs actually know that they are wrong. They can talk all day about how the world will end in December 2012, but they don’t sell their houses and enjoy the money while they can. Perhaps with horoscopes and homeopathy, where the stakes are lower, they use a heuristic: only buy from a person who believes the same thing as you. Thus you avoid the situation where you are wrong, the other person knows it, and you are an exploitable fool. But if you both believe in a product, and it does not really work, then it was just an honest mistake made in good faith.
See belief as cheering:
The folks who talked about the world ending in December 2012 weren’t really predicting something, in the way they would say “I believe that loose tire is going to fall off that truck” or “I expect if you make a habit of eating raw cookie dough with eggs in it, you’ll get salmonellosis.” They were expressing affiliation with other people who talk about the world ending in December 2012. They were putting up a banner that says “Hooray for cultural appropriation!” or some such.
I remain sober at alcohol filled parties all the time and do fine.
I have recently been thinking about meta-game psychology in competitions, more specifically, knowledge of an opponent’s skill level and knowledge of an opponent’s knowledge of your own skill level, and how this all affects outcomes. In other words, instead of being ‘psyched out’ by ‘trash talk’, is there any indication that you can be ‘psyched out’ by knowing how you rank against other players? Any links for more information would be appreciated.
Part of my routine is to play a few games of online chess every day. I noticed that whenever an opponent with a vastly superior score comes into the room, my confidence is shaken before play even begins; I become nervous. If my opponent is only slightly better than me, I am calm and confident that I can win. Chess rating systems give the lower-ranked player more points for winning a match, which means that if I am matched against a vastly inferior player, I once again become nervous because I do not want to lose to such a player.
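To make that asymmetry concrete, here is a minimal sketch of a generic Elo-style rating update; the K-factor of 32 and the example ratings are assumptions for illustration, not the parameters any particular chess site necessarily uses:

```python
def expected_score(rating_a, rating_b):
    """Probability that player A beats player B under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a, rating_b, score_a, k=32):
    """Return new ratings after a game; score_a is 1 (win), 0.5 (draw), or 0 (loss)."""
    ea = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - ea)
    new_b = rating_b + k * ((1 - score_a) - (1 - ea))
    return new_a, new_b

# The lower-rated player gains far more from a win than the favourite does:
print(update(1400, 1800, 1))  # underdog wins: roughly +29 for them
print(update(1800, 1400, 1))  # favourite wins: only about +3
```

So beating a much weaker player earns you almost nothing, while losing to them costs you roughly what they gain, which is exactly what makes those games feel risky.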
Here are my opponents’ strengths and how I feel before playing:
superior opponent—Nervous
slightly superior opponent—Confident
slightly inferior opponent—Neutral
inferior opponent—Nervous
Over the last few months I decided to hide the ratings of the players I play against, and I noticed that I consistently feel more confident that I can win, because I am no longer thinking about how much better or worse I am than my opponent. I don’t know if it really helps, because my improvement could be attributable to playing more, not necessarily to blocking out my opponent’s rating and chat (yes, people do trash talk in online chess and it does psych me out sometimes).
I am curious to know what other people’s feelings are when it comes to knowledge of an opponent’s skill; it would be nice to have a few responses describing feelings about the following:
superior opponent -
slightly superior opponent -
slightly inferior opponent -
inferior opponent -
It would be useful information to see what the majority reaction is, as useful strategies can be developed. For example, assuming most people are crushed to hear that they are vastly outmatched, and you are accurate about your skill being in the 5th percentile, then it would be beneficial in competition to make it known, perhaps?
A good example of the sometimes conflicting relationship between epistemic rationality (e.g. updating on all relevant pieces of information you encounter) and instrumental rationality (e.g. following the optimal route to your goal, i.e. winning the match).
In principle the information regarding your opponent’s skill is very useful, since you’ll correctly devote far more resources (time) to checking for elaborate traps when you deem your opponent capable of such, and waste less time until you accept an ‘obvious’ mistake, when committed by a far inferior opponent.
However, due to anxiety issues as the ones you laid out, there can be a benefit to willfully ignoring such information.
Also, per your wish,
superior opponent: confident
slightly superior opponent: confident
slightly inferior opponent: neutral
inferior opponent: nervous
The thing about your performance in a game being hurt by fear of a superior opponent’s skill is basically the same as David Sirlin’s idea of a “fear aura.”
From experience derived from many years of competitive Magic: the Gathering, I think I have a different map.
Superior opponent: Nervous, very focused
Approximately my skill / unknown opponent (no precise rating available): Confident, very focused
Inferior opponent: Confident, not focused
As can be inferred, I usually play my best game against opponents who are roughly my equal. Some of the difficulties can be overcome by means of intense practice, i.e. making most decisions automatic and lessening the risk of punting them. It’s also interesting to note that my anxiety about playing against stronger opponents lessens if I get to know them. Probably my brain moves them from the “superhuman/demigod” box to the “human just like you” box, allowing a clearer view of the situation.
Anybody know of any good alternatives in Utilitarian philosophy to “Revealed Preference”? (That is, is there -any- mechanism in Utilitarian philosophy by which utility actually, y’know, gets assigned to outcomes?)
Hedonistic Utilitarianism—produce the most pleasure.
Actual (not necessarily revealed) Preferences
Ideal preferences—produce the most of what people want to want, or would want under ideal reflective circumstances
Welfare Utilitarianism—produce the most welfare, which may differ from preferences if people don’t want what’s best for them.
Ideal Utilitarianism—outcomes can have value regardless of our attitude towards them
In every kind of utilitarianism, including revealed preference, utilities are assigned to outcomes. The varieties I’ve described, and revealed preference, just disagree about how to assign values to outcomes.
These are different mechanisms for theoretically quantifying utility, but do they actually have implementations? (Revealed preference is unique in that it is an implementation, although a post-hoc one, defined by the fact that the utility is qualitative rather than quantitative—that is, utility relationships are strictly relative.)
None of these actually assign utility to outcomes; they just tell you what an implementation should look like.
I’m not sure what you mean by an implementation if you think revealed preference is an implementation. We don’t have revealed preference maximising robots.
Hey does anybody here know Flatlander (AKA Andy Morin) from Death Grips’s phone number?
201-808-6011
Drop me a line whenever.
You might want to write out the numbers in English and then ROT-13 them. Just a thought.
Family Fortunes Pub Quiz
On a Sunday night I take part in a pub quiz. It’s based on a UK quiz show called Family Fortunes, which in turn is based on the US show Family Feud. To win you must answer all 5 questions correctly; the correct answer to each is whatever was the most popular answer in a survey of 100 people.
I’m curious to see if LessWrong does better than me.
We asked 100 people...
Name a part of your body that you’v had removed
Name something you might wave at a Football match
Name a female TV presenter
Name a country that has only 5 letters in it’s name
Fact or fiction, name a famous pirate
Rules/notes
In the pub you may not use the internet/reference material, but given the international audience I’ll relax that rule.
The questions are reproduced verbatim, i.e. any ambiguity/odd wording you see was present to start with.
Submit just the SHA1 hash of your answers, or your answers ROT13’d, to keep it fairish and avoid spoilers (if you’re unsure how, see the sketch after these rules).
Include a second answer for any questions, if you wish. It won’t count, except for “I knew it!” moments
I’ll reveal my answers and the correct answers no later than 72 hours from now (sha1 b91d4589b142dbf8c567dae83d3e4d7b18c4e826).
You can work individually or as a team, your choice.
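For anyone who wants to hash or ROT13 an answer before posting, a minimal Python sketch; the answer string is just a placeholder, obviously not a real guess:

```python
import codecs
import hashlib

answer = "example answer"  # placeholder, not a real guess

# SHA1 hash to submit
print(hashlib.sha1(answer.encode("utf-8")).hexdigest())

# ...or the ROT13 version of the answer
print(codecs.encode(answer, "rot_13"))
```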
As promised, the answers rot13 here so people can still choose to play, and unsullied so you can verify the hash
My answers (2nd, unsubmitted guess in brackets)
Gbbgu
Fpnes (Synt)
Svban Oehpr (Hyevxn Wbuaffba)
Vgnyl (Jnyrf)
Wnpx Fcneebj (Oynpx orneq)
Correct answers:
Gbbgu (grrgu npprcgrq)
Synt
Qnivan Znppnhy
Fcnva
Wnpx Fcneebj
Individual.
Nccraqvk
Synt
Qnivan ZpPnyy
Jnyrf
Oynpxorneq
Fun post. I’m not a fan of the show really, but it’s a neat idea. Have you seen Pointless? It’s almost the reverse of Family Fortunes.
nccraqvk
sbnz svatre
xngvr pbhevp
vgnyl
oynpxorneq
grrgu
onaare
Ubyyl Jvyybhtuol
Puvan
Oynpxorneq
I’m not British and not that familiar with British popular culture, so number 3 is hard. My answer is based on a couple of minutes of googling.
nccraqvk, 2. synt, 3. Xngvr Pbhevp, 4. Puvan, 5. Wnpx Fcneebj
I’m slightly confident in two of my answers (rot13): 1. unve, 5. Oynpxorneq, and would not be surprised if 4. Vgnyl was right (or alternatively Fcnva if the poll was taken earlier in the year). I’m not even going to bother guessing the other two, as the only way I’d have a chance is to do a lot of research.
a1c2b7ae7c4e56188eb3dbd96cdf46ecb4bdaf81
I expect Less Wrong people to be more “normal” than me though so… oh well.
b89926c684b2637444b34430c3ab1718bec22da2
*awaits “EH-EHHH!” noise *
I’m pretty much a novice at decision theory, although I’m competent at game theory (and mechanism design), and some of the arguments used to motivate using UDT seem flawed to me. In particular, the “you play prisoner’s dilemma against a copy of yourself” example against CDT seems like its solution relies less on UDT than on the ability to self-modify.
It is true that if you are capable of self-modifying to UDT, you can solve the problem of defecting against yourself by doing so. However if you’re capable of self-modifying, you’re also capable of arbitrarily strong precommitments, which solves the issue without (really) changing decision theories. For example, you can just precommit to “I will cooperate with everyone who shares this precommitment” (for some well-defined “cooperate”*). Then when you’re copied, your copy shares the precommitment and you’re good.
Does that sound about right, or am I missing something?
*regardless of decision theory, you probably wouldn’t want to cooperate with someone who plans to use any resources she obtains to harm you as much as possible, for example.
A CDT agent who is given the choice to self-modify at time t will not self-modify completely into a UDT agent. After self-modification, the agent will one-box in a Newcomb’s problem where Omega made its prediction by examining the agent after time t, and will two-box in a Newcomb’s problem where Omega made its prediction by examining the agent before time t, even if Omega knew that the agent would have the opportunity to self-modify.
In other words, the CDT agent can self-modify to stop updating, but it isn’t motivated to un-update.
Using UDT is one way of going about making those precommitments. You precommit to make the decision that you expect will give you the most utility, on average, even if CDT says that you will do worse this time around.
The literature largely defines CDT as incapable of precommitments. If you want to propose a specific model of how to choose commitments, just do it.
I don’t have one! I’m not brave enough to start coming up with new decision theories while not knowing very much about decision theories. But would I be correct in assuming that this would also mean that the literature definition implies that a CDT agent also can’t choose to become a UDT one? (As that seems to me equivalent to a big precommitment to act as a UDT agent.)
Do we need a submission for Eliezer? :) http://www.quickmeme.com/Just-Want-To-Watch-The-World-Learn/?upcoming (“some men just want to watch the world learn” image macros)
Dilbert has been running FAI failure strips for the past two days - http://www.dilbert.com/2013-03-28/ http://www.dilbert.com/2013-03-29/ Of course, it only occurred because the robot was actively hacked to be disgruntled in an earlier strip… not exactly on point here. I’m watching to see where this goes.
Hrm. Robot takes the Gandhi murder pill. http://www.dilbert.com/2013-04-03/
In case this hasn’t been posted recently or at all: if you want to calculate the number of upvotes and downvotes from the current comment/post karma and % positive seen by cursor hover, this is the formula:
#upvotes = karma * %positive / (2 * %positive - 100%)
#downvotes = #upvotes - karma
This only works for non-zero karma. Maybe someone wants to write a script and make a site or a browser extension where a comment link or a nick can be pasted for this calculation.
The source code of the pages contains hypothetical % positive for the cases when a comment gets upvoted or downvoted by 1 point, so sufficient information about comments with zero karma is always present as well.
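Along the lines of the suggested script, a minimal sketch of the calculation (the numbers in the example are made up):

```python
def vote_counts(karma, percent_positive):
    """Recover upvote/downvote counts from net karma and the % positive
    shown on hover. Only valid for non-zero karma (percent_positive != 50)."""
    p = percent_positive / 100.0
    upvotes = karma * p / (2 * p - 1)
    downvotes = upvotes - karma
    return round(upvotes), round(downvotes)

# e.g. karma of 10 at 75% positive works out to 15 upvotes and 5 downvotes
print(vote_counts(10, 75))
```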
And because you can retract votes, you can always upvote or downvote temporarily to move it off of 0.
[To be deleted. Please excuse the noise.]
I’ve noticed that I seem to get really angry at people when I observe them playing the status game with what I perceive as poor skill. Is there some ev psych basis for this or is it just a personal quirk?
I think it’s a very common trait, but any Evo psych explanation I know would probably just be a just-so story.
Just So Story: The consequence of getting angry is treating someone badly, or from a game theoretical perspective, punishing them. Your perception of someone playing status games with low skill is a manifestation of the zero sum nature of status in tribes: Someone playing with low skill is a low status person trying to act and receive the benefits of being higher status, and it behooves you to punish them, in order to preserve or increase your own status. It’s easier for evolution to select for emotional reactions to things than for game theoretical calculations.
My suspicion: status games are generally seen as zero sum. Someone attempting to play the status game around you is a threat, and thus it probably helps to be angry with them, unless you expect them to be better than you at status games, in which case being angry with them probably reduces the chance that they’ll be your ally, and they will be able to respond more negatively to your anger than a weaker opponent.
Another possible just-so story we can tell is that being (seen as) angry makes it safer to injure someone (e.g., “cold-blooded” murder or battery is seen as less acceptable than killing or battering someone “in the heat of passion”), so when we identify someone as incapable of retribution we’re more inclined to make ourselves seem angry as well, the combination of which allows us to eliminate competitors while they’re weak with relative impunity. (And, of course, the most reliable way to make ourselves seem angry is to feel angry.)
Is that actually the explanation for Raiden’s reaction, though? Probably not; telling just-so stories isn’t a terribly reliable process for arriving at true explanations.
Edit: Whoops… should have read drethelin’s comment first. Retracting for redundancy.
Not sure if related, but I often get angry at people doing things that make them look like idiots in my eyes, while I suspect they would impress a random bystander positively.
As an example, imagine a computer programmer saying things that you as a fellow programmer recognize as complete bullshit, or at best as wild exaggerations of random things that impressed the person… but for someone who does not understand programming at all, they might (I am not sure) sound very knowledgeable, unlike silent types like me. -- I don’t know if they really impress outsiders positively or not. I can’t easily imagine myself not having the knowledge I have, and I am also not good at guessing how other people react to tone of voice or whatever other information they may collect from talk about a topic they don’t understand. -- I just perceive the danger that the person may sound more impressive than me, and… well, as an employee, my quality of life depends on the impressions of people who can’t measure my output separately from the output of the team containing the other person.
Also, again not sure if related, when I get angry at someone, when I analyze the situation I usually find that they are better than me in something. In the specific situation above, it would be “an ability to impress people who completely don’t understand my work”. This is easy to miss, if I remain focused only on the “they speak nonsense” part. But the truth is their speaking nonsense does not make me angry; it’s relatively easy to ignore, and it would not bother me if I did not perceive a threat.
So, for your situation: are you afraid that the “people playing the status game with (supposedly) poor skill” might still win some status at your expense? If yes, the angry reaction is obvious: you are in a situation where you could lose, but you could also win, which is exactly the kind of situation worth investing your energy in. (Imagine an alternative universe where the person trying to play the status game is completely harmless and ridiculous, and everyone openly agrees on that. Would you feel the same anger?)
Not an explanation, but perhaps try to see this as a benefit to you? I have witnessed plenty of poker players get very angry at bad players. Over time bad players lose money to good players, so one shouldn’t complain about bad players. Someone who is ineffective at status signalling won’t affect you; you already see through them.
Personally, I find that I have an admiration for people with skill, even in things such as effective status signalling. When people lack a certain savoir-faire about them, it makes me upset, but then I remind myself I shouldn’t.
I have recently read The Dictator’s Handbook. In it the author suggests that democracies, companies, and dictatorships are not essentially different, and that their respective leaders follow the same laws of rulership. As a measure to encourage more democratic behavior in publicly traded companies, they suggest a Facebook-like app to discuss company policy. Does anyone know of a company or organization that does this? It seems almost too good to be true.
Many companies including mine use Yammer, a twitter-like app internally. At my big bureaucratic company I’ve seen a mix of practical discussion and discussion about the future of the company, but I’m not sure how much difference it makes in practice.
The original suggestion’s intention was to allow people with fewer shares to exercise their right to vote effectively. Currently, a few people hold enormous stakes in a company, while the majority of shares, millions of them, are owned by millions of people. The latter are virtually unable to influence the company while the former dominate it, giving publicly traded companies the political structure of a dictatorship with very high salaries in upper management. This is in contrast to functioning democracies, where even heads of state earn a relatively meager salary. So Yammer is a step in the intended direction by providing a platform to discuss policy and distribute information, but it falls short on easing voting.
Part of the problem is that most shares that are nominally held by individuals are actually held for them by retirement funds and the like, creating even further distance.
Well, you can think of the choice of retirement fund as first tier in a multi-tiered democracy. Individual → Fund → Director. Yes, the fund is managing other people’s money and thus has eroded incentives, but on the other hand it is a full-time job and its votes are concentrated enough that people will actually talk to it.
But forget about individuals—is it a democracy of investment funds? Yes, they really get to choose the directors, and the directors really (can) run the company. And the investment funds talk to each other. But they are spread too thin. They own too small a share of too many companies to keep up with them. The way that large shareholders control companies is by convincing investment funds to vote for their candidates. Once they have control of the board, it’s pretty easy to keep it, because the board nominates new candidates and there is no standing source of opposition. But just because someone, say, Icahn, has 5 or 10% of the shares doesn’t mean he has much power. Sometimes the board will just accept his advice, but other times he has to lobby the investment funds to democratically take over the board.
Anyhow, my point is that the funds do a lot of talking, so I am skeptical that the problem is not talking.
The point of the original proposal was not the talking but the exercise of the right to vote, similar to a democracy. Good post, though.
Singapore, arguably the best functioning democracy in the world, pays its head of state millions of dollars. I realise this is probably still not enough.
The Economist lists Singapore as a hybrid regime with elements of authoritarianism and democracy. It ranks in its democracy index below Malaysia and Indonesia. Thus I do not think it is ‘arguably the best functioning democracy in the world’.
It functions well and it is a democracy. I didn’t mean to imply it achieved any unusual height of democracy. Rather, it achieves other things very well.
Fair enough. The author’s claim was that any sufficiently democratic organization works in the interest of its members. If an authoritarian regime works to the benefit of the public, it is by virtue of a benevolent dictator, who nevertheless has to follow the rules of power.
I’m pretty sure that low salaries are a dysfunction of democracies rather than high salaries being a dysfunction of companies. In particular, it’s not the case with every company that a couple of people hold enormous shares. And aside from that, even when there is clear evidence that “the majority” gets directly involved in CEO compensation, it doesn’t seem that the salaries go down all that much.
Or looking at it differently, if the high salaries were the consequence of an undue concentration of power, we would expect that when one CEO leaves, and a different one who was not previously affiliated with the power holders is installed, the salary of the new one would be much much lower. However, I think this is rarely the case.
I don’t think your second point really is one, seeing as a CEO can not be installed without being affiliated with the power holders. Can you back up your first point?
Why not? Some CEOs (especially for smaller companies, I think) are found via specialised recruiting companies, which I’d say is pretty unaffiliated. And in any case, it’s not clear to me how you think the affiliation would be increasing pay. Do you imagine potential CEO candidates hold an auction in which they offer kickbacks to major shareholders/powerholders from their pay or something? Because I haven’t heard of that ever happening, and I’m having trouble imagining what more plausible scenario you have in mind. (Obviously there are cases where major shareholders also serve as CEOs/whatever, but if you’re claiming that every person in such position with high pay is a major power holder shares/board-wise, I’d like to see evidence for it, since I find that extremely unlikely.)
If you mean about new executives receiving pay comparable to old ones, I dunno, it’s hard. I think I’d have to search company-by-company and even then it would be hard to determine what’s happening. For example, I looked up Barclay’s, which switched Bob Diamond for Antony Jenkins last year. Diamond had a base salary of £1.3mil. Jenkins has a base salary of £1.1mil. However, Diamond got a lot of non-salary money (much of which he gave up due to scandal), and it’s not clear how Jenkins’ compensation compares to that. Also, it’s not clear how much the reduction (if there is any) is the result of public outrage (or ongoing economic difficulties).
If you mean about high salaries probably being appropriate, I can back that up on a theoretical level. If you assume a CEO has a high level of influence over a huge company, then it’s straightforward that there is going to be intense competition for the best individuals. Even someone who can improve profits by 0.1% would be worth extra millions of dollars to a multi-billion dollar company.
Related things I found while looking around: “highly concentrated ownership in listed companies in New Zealand is a significant contributor to [a] poor pay-for-performance relation”, “institutional ownership concentration is positively related to the pay-for-performance sensitivity of executive compensation and negatively related to the level of compensation, even after controlling for firm size, industry, investment opportunities, and performance”, “results indicate that power is more concentrated than ownership”. (Not that I read much more than abstracts.)
Interesting. I will have to read through that later.
Closest thing I can think of is the “We the people” White House site, at least nominally.
This should probably go in the politics thread.
Here’s fine, now’s good.
I’m sorry, which one is that?
This one (there doesn’t seem to be a March thread).
Is the xkcd rock-placing man in any danger if he creates a UFAI? Apparently not, since he is, to quote Hawking, the one who “breathes fire into the equations”. Is creating an AGI of use to him? Probably, if he has questions it can answer for him (by assumption, he just knows the basic rules of stone-laying, not everything there is to know). Can there be a similar way to protect actual humans from a potential rogue AI?
Assuming the premises of the situation, yes to your first question:
He may be argued into something that is not in his interest by the UFAI. (On the other hand, Rock-Placing Man evidently does not have a standard mental and physical architecture, so maybe he also happens to be immune to such attacks.)
The UFAI may take over his simulated universe and turn it into simulated paperclips.
True, but it’s easy to deal with, just place one row of rocks differently and wipe the UFAI bugger out. Humans would not have this luxury.
You mean run them so slowly that they’re not useful for anything?
Ever heard of steelmanning?
Either the rock-placing man is running the AI so slowly that it’s not useful for anything or he runs the risk of falling prey to considerations that have already been discussed on LW surrounding oracle AI.
Steelmanning is probably a good thing to do (and I’m not good at doing it), but I think it’s bad form to ask that somebody steelman you.
This would be a useful conjecture if you can formalize it, or maybe a theorem if you can prove it.
What is with LW people and theorems? The situation you’ve described is nowhere near formalized enough for there to be anything reasonable to say about it at the level of precision and formality that warrants a word like “theorem.”
As it’s been queried how many physicists, mathematicians, etc. currently believe what about QM, I thought this paper (no paywall, Yay!) might interest a few of you: A Snapshot of Foundational Attitudes Toward Quantum Mechanics
For example, question 12: Copenhagen 42%, Information 24%, Everett 18%.
More discussion of it here.
Would it be inappropriate to host a Less Wrong March Madness bracket pool?
Edit: Not going to do it.
I suspect there’s too much of a difference in how much LW members know about basketball to get particularly wide participation. For example, I had to look up “March Madness” to figure out what this is about.
Also, there’s a significant chance that either people would just copy the odds from Pinnacle, or maybe even arbitrage against it (valuing karma or whatever at 1-2 cents). Or, well, I’d certainly be tempted to =]
An interesting thing would be to set up a prediction market and compare the results
Less Wrong is about rationality. Surely there are better ways to have fun than to arbitrarily redistribute our wealth. Unless you somehow plan to make some of the money go to charity, or not involve money at all, I don’t see the point.
It might be relevant as a calibration exercise, though.
I was going to do it for honor/karma.
IMO that would be a misuse of the karma system.
Definitely appropriate, then!
So it seems something a bit like the Mary’s Room experiment has actually been done in mice, and it appears to indicate that the mice behaved differently once given a new colour receptor.
But they didn’t have full understanding of the new colour before having it added!
Another SMBC comic on the intelligence explosion. Don’t forget the mouseover text of the red button.
I have a few Audible credits to use up before I cancel the service. Any recommendations?
A Short History of Nearly Everything
Jonathan Strange and Mr Norrell
The Automatic Detective
Guards! Guards!
Does CEV claim that all goals will eventually cohere such that the end results will actually be in every individual’s best interest? Or does CEV just claim that it’s a good compromise as being the closest we can get to satisfying everyone’s desires?
Hrm. As I understand it, the theory underlying CEV doesn’t equate X’s best interest with X’s desires in the first place, so the question is somewhat confusingly—and perhaps misleadingly—worded. That is, the answer might well be “both”. That said, it doesn’t claim AFAIK that the end results will actually be what every individual currently desires.
The latter.