Open Thread, April 1-15, 2013
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Remember, today is “Base Rate Neglect Day,” also known as April Fool’s.
I propose “Confusion Awareness Day”.
Oh, Wikipedia gave me a feel today
(snicker)
(click)
…
Poignant.
Received word a few days ago that (unofficially, pending several unresolved questions) my GJP performance is on track to make me eligible for “super forecaster” status (last year these were picked from the top 2%).
ETA, May 9th: received the official invitation.
I’m glad to report that I am one of those who make this achievement possible by occupying the other 98%. Indeed I believe I am supporting the high ranking of a good 50% of the forecasters.
More seriously, congratulations. :)
Congratulations! I also received it (thanks not the least to your posts). I wonder how many other LWers participate and who else (if anybody) got their invitations.
I participate, and was invited after the first season to be a super-forecaster in the second. It is kind of a lot of work, and I have been very busy, so I quit doing anything about it pretty early on, yet have mysteriously been invited to participate again in the third season.
We may find out a little about that; super-forecasters will form teams, so it’s somewhat likely some of us will end up on the same team.
Congrats to the others too, anyway!
I participate (http://www.gwern.net/Prediction%20markets#iarpa-the-good-judgment-project); and haven’t been invited. (While I stopped trying in season 2, my season 1 scores were merely great & not stellar enough to make it plausible that I could have made it.)
For kicks, and reminded by all my recent searching to dig up long-forgotten launch and shutdown dates for Google properties, I’ve compiled a partial list of times I’ve posted searches & results on LW:
http://lesswrong.com/lw/h4e/differential_reproduction_for_men_and_women/8ovg
http://lesswrong.com/lw/h2h/i_hate_preparing_food_my_solution/8o19
http://lesswrong.com/lw/h1t/link_the_power_of_fiction_for_moral_instruction/8nkc
http://lesswrong.com/lw/gnk/link_scott_and_scurvy_a_reminder_of_the_messiness/8gfc
http://lesswrong.com/lw/g75/psa_please_list_your_references_dont_just_link/87ds
http://lesswrong.com/lw/f8x/rationality_quotes_november_2012/7tdc
http://lesswrong.com/lw/3dq/medieval_ballistics_and_experiment/7i2k
http://lesswrong.com/lw/e26/who_wants_to_start_an_important_startup/7adg
http://lesswrong.com/lw/if/your_strength_as_a_rationalist/768t
http://lesswrong.com/lw/dzb/draft_get_lucky/760r
http://lesswrong.com/lw/dx7/link_holistic_learning_ebook/74yj
http://lesswrong.com/lw/c3g/seq_rerun_quantum_nonrealism/6h7e
http://lesswrong.com/lw/bws/stupid_questions_open_thread_round_2/6g25?context=1#6g25
http://lesswrong.com/lw/bg0/cryonics_without_freezers_resurrection/68ps
http://lesswrong.com/lw/5qm/living_forever_is_hard_or_the_gompertz_curve/63ak
http://lesswrong.com/lw/7rh/cognitive_style_tends_to_predict_religious/6sfx?context=1#6sfx
http://lesswrong.com/lw/7hi/free_research_help_editing_and_article_downloads/6xcu
http://lesswrong.com/lw/aw6/global_warming_is_a_better_test_of_irrationality/61me?context=1#61me
http://lesswrong.com/lw/td/magical_categories/52uq?context=1#52uq
http://lesswrong.com/lw/ub/competent_elites/nkg?context=1#nkg
http://lesswrong.com/lw/5il/siai_an_examination/4s09
http://lesswrong.com/lw/7gy/case_study_reading_edges_financial_filings/
http://lesswrong.com/lw/m3/politics_and_awful_art/3moz?context=1#3moz
http://lesswrong.com/lw/3am/honours_dissertation/35lk?context=1#35lk
http://lesswrong.com/lw/5r/crowley_on_religious_experience/34wj?context=1#34wj
http://lesswrong.com/lw/2t0/rationality_quotes_october_2010/2qxe?context=1#2qxe
http://lesswrong.com/lw/2kl/open_thread_august_2010_part_2/2fsb
http://lesswrong.com/lw/2ab/harry_potter_and_the_methods_of_rationality/2cmp
http://lesswrong.com/lw/1lx/reference_class_of_the_unclassreferenceable/1fg3
http://lesswrong.com/lw/1lt/case_study_melatonin/1f9w
http://lesswrong.com/lw/1j7/the_amanda_knox_test_how_an_hour_on_the_internet/1dfm
Can’t help but get the impression that even people here aren’t very good at Googling. Maybe they should be taking Google’s little search classes; knowing how to search seems like the sort of skill that would pay off constantly over a lifetime.
It appears to me that in half of these examples people hadn’t tried to google at all. It doesn’t seem particularly likely to me that the class would develop such a habit. Not that I have a better idea.
My belief is that the more familiar and skilled you are with a tool, the more willing you are to reach for it. Someone who has been programming for decades will be far more willing to write a short one-off program to solve a problem than someone who is unfamiliar and unsure about programs (even if they suspect that they could get a canned script copied from StackExchange running in a few minutes). So the unwillingness to try googling at all is at least partially a lack of googling skill and familiarity.
Since April 2013:
http://lesswrong.com/lw/hbp/using_evolution_for_marriage_or_sex/8xn2
http://lesswrong.com/lw/hb6/links_passing_through_apiviglinkcom/8v0g
http://lesswrong.com/lw/4n8/rationality_quotes_march_2011/8uoo
http://lesswrong.com/lw/h7r/open_thread_april_1530_2013/8tca
http://lesswrong.com/lw/jb9/some_notes_on_existential_risk_from_nuclear_war/a6k2
http://lesswrong.com/lw/j24/to_like_or_not_to_like/a1pj
http://lesswrong.com/lw/ing/open_thread_september_1622_2013/9ren
http://lesswrong.com/lw/ikg/yet_more_stupid_questions/9qgc
http://lesswrong.com/lw/i7t/rationality_quotes_august_2013/9ocd
http://lesswrong.com/lw/ic0/where_ive_changed_my_mind_on_my_approach_to/9m2o
http://lesswrong.com/lw/hva/open_thread_july_115_2013/9by7
http://lesswrong.com/lw/hsd/start_under_the_streetlight_then_push_into_the/98ur
http://lesswrong.com/lw/hgm/open_thread_may_1731_2013/932n
http://lesswrong.com/r/discussion/lw/jc8/harry_potter_and_the_methods_of_rationality/a7fo
This is epic.
Since December 2013:
http://lesswrong.com/lw/kid/this_is_why_we_cant_have_social_science/b40h
http://lesswrong.com/lw/kge/question_lesswrong_web_traffic_data/b2wq
http://lesswrong.com/lw/kfb/open_thread_30_june_2014_6_july_2014/b1xd
http://lesswrong.com/lw/k6r/open_thread_may_5_11_2014/avtl
http://lesswrong.com/lw/jys/what_colleges_look_for_in_extracurricular/aqn6
http://lesswrong.com/lw/jr8/open_thread_february_25_march_3/an4q
Iain (sometimes M.) Banks is dying of terminal gall bladder cancer.
Of more interest is the discussion thread on Hacker News regarding cryonics. There’s a lot of cached responses and misinformation going around on both sides.
Great point I saw in the discussion:
It’s really, really saddening that he of all people has been an outspoken deathist and now it’s depriving him of any chance whatsoever. (Well, except for hypothetical ultra-remote reconstruction by FAI or something.)
Where has he been an outspoken deathist?
In the Culture novels, he has all humans just sorta choosing to die after a millennium of life, despite there being absolutely no reason for humans to die since available resources are unlimited, almost all other problems solved, aging irrelevant, and clear upgrade paths available (like growing into a Mind).
It’s not entirely clear-cut. He has had characters from outside the Culture describing it as a ‘fashion’ and a sign of the Culture’s decadence. And the characters we do see ending their lives are generally doing it for reasons of psychological trauma.
Either way, thinking a thousand years in the Culture is enough doesn’t mean he thinks 70 years on Earth is enough. Has he ever made a direct comment about cryonics? I can’t find any. So it’s still possible he would be open to it given up-to-date information.
Stories would tend to focus on characters who are interested or involved in traumatically interesting events, so not sure how much one could infer from that.
A thousand years instead of 70 is just deathism with a slightly different n.
Eh, I kinda agree with you in a sense, but I’d say there’s still a qualitative difference if one has successfully moved away from the deathist assumption that the current status quo for life-span durations is also roughly the optimal life-span duration.
Then some form of deathism may be the truth anyway.
On the other hand, I can’t remember Banks ever suggesting that organics in the Culture would want to die after a thousand years, only that if they wanted to die they would be able to. I don’t think the latter is incompatible with anti-deathism—is Lazarus Long a deathist, after all?
EDIT: On the gripping hand, there’s also a substantial bit of business in the Culture about subliming.
Instead of arguing on in this vein, I know that he’s made comments in the past about how he believes death is a natural part of life. I just can’t find the right interview now that “Iain Banks death” and variants are nearly-meaningless search terms.
If you want to search the past, go to google, search, click “Search tools,” “Any time,” “Custom range...” and fill in the “To” field with a date, such as “2008.”
I don’t recall seeing any people who are supposed to be older than a thousand years without mechanics like cryostorage/scanning; if you present a world in which pretty much everyone does want to die after a trivial time period, you’re presenting a deathist world and you may well hold deathist views.
About not subliming, specifically.
Such a character appears in the latest Culture novel, “The Hydrogen Sonata”. But he is stated to be extremely unusual.
IIRC, most inchoate Minds sublime during construction, but I could be wrong about that.
“A Few Notes on the Culture”:
I’m on the fence as to whether or not this really constitutes full-blown deathism or just a belief that sentient beings should be permitted to cause their own death.
I suspect that any cultural norm inconsistent with treating the death of important life forms as an event to be eradicated from the world is at least an enabler to “deathism” as defined locally.
There seems to be some appeal to nature floating around in it, at the very least.
Sure, death is natural. So is Ophiocordyceps, but that doesn’t mean I want parasitic mind-altering fungi in my life.
Hopefully he’ll get around to doing some sort of “where I wanted to take the Culture series; ideas which I may have spun into novels” brain dump, before he dies.
I’ve been writing blog articles on the potential of educational games, which may be of interest to some people here:
Why I’m considering a career in educational games
Videogames will revolutionize school (not necessarily the way you think)
I’d be curious to hear any comments.
I realise it’s a constructed example, but a videogame that would be even remotely accurate in modelling the causes of the fall of the Roman Empire strikes me as unrealistically ambitious. I would at any rate start out with Easter Island, which at least is a relatively small and closed system.
Another point is that, if you gave the player the same levers that the actual emperors had, it’s not completely clear that the fall could be prevented; but I suppose you could give points on doing better than historically.
Do we need a realistic simulation at all? I was thinking about how educational games could devolve into, instead of “guessing the teacher’s password”, “guessing the model of the game”… but is this a bad thing?
Sure, games about physics should be able to present a reasonably accurate model so that if you understand their model, you end up knowing something about physics… but with history:
- actually, what’s the goal of studying history?
- if the goal is to do well on tests, we already have a nice model for that, under the name of Anki. Of course, this doesn’t make things really fun, but still.
- if we want to make students remember what happened and approximately why (that is, “should be able to write an essay about it”), we can make up an arbitrary, dumb and scripted thing, not even close to a real model, but exhibiting some mechanics that cover the actual reasons. (e.g. if one of the causes would have been “not enough well-trained soldiers”, then make “Level 8 Advanced Phalanx” the thing to build if you want to survive the next wave of attacks.)
- if we’d like to see students discover general ideas throughout history, maybe build a game with the same mechanics across multiple levels? (and they also don’t need to be really accurate or realistic.)
- and finally, if we want to train historians who could come up with new theories, or replacement emperors to be sent back in time to fix Rome… well, for that we would need a much better model indeed. Which we are unlikely to end up with. But do we need this level in most of the cases?
TL;DR: by creating games with wildly unrealistic but textbook-accurate mechanics we are unlikely to train good emperors, but at least students would understand textbook material much better than the current “study, exam, forget” approach allows.
If what they learned about “evolution” comes from Pokemon, then yes.
When did Pokemon become an educational game about evolution?
Pokemon is an example of what an educational game which doesn’t care about realism could look like. People should be expected to learn the game, not the reality, and that will especially be the case when the game diverges from reality to make it more fun/interesting/memorable. If you decide that the most interesting way to get people to play an interactive version of Charles Darwin collecting specimens is to make him be a trainer that battles those specimens, then it’s likely they will remember best the battles, because those are the most interesting part.
One of the research projects I got to see up close was an educational game about the Chesapeake; if I remember correctly, children got to play as a fish that swam around and ate other fish (and all were species that actually lived in the Chesapeake). If you ate enough other fish, you changed species upwards; if you got eaten, you changed species downwards. In the testing they did afterwards, they discovered that many of the children had incorporated that into their model of how the Chesapeake worked: if a trout eats enough, it becomes a shark.
I’d like to hear more about that Chesapeake result.
I’m seeing if I can find a copy of their thesis. I’ll share it if I manage to.
The GAMER thesis is here. (Also looking for an official copy.)
The ILL thesis is here.
It’s true that you don’t need a model that lets you form new theories of the downfall of the Empire; but my point is that even the accepted textbook causes would be very hard to model in a way that combines fun, challenge, and even the faintest hint of realism. Take the theory that Rome was brought down partly by climate change; what’s the Emperor supposed to do about it? Impose a carbon tax on goats? Or the theory that it was plagues what did it. Again, what’s the lever that the player can pull here?

Or civil wars; what exactly is the player going to do to maintain the loyalty of generals in far-off provinces? At least in this case we begin to approach something you can model in a game. For example, you can have a dynastic system and make family members more loyal; then you have a tradeoff between the more limited recruiting pool of your family, which presumably has fewer military geniuses, versus the larger but less loyal pool of the general population. (I observe in passing that Crusader Kings II does have a loyalty-modelling subsystem of this sort, and it works quite well for its purposes. Actually I would propose that as a history-teaching game you could do a lot worse than CKII. Kaj, you may want to look into it.)

Again, suppose the issue was the decline of the smallholder class as a result of the vast slaveholding plantations; to even engage with this you need a whole system for modelling politics, so that you can model the resistance to reform among the upper classes who both benefit by slavery and run most of your empire. Actually this sounds like it could make a good game, but easy to code it ain’t.
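A toy sketch, in Python, of that family-versus-populace recruiting tradeoff; every number and name below is made up for illustration, not taken from CKII or any real design:

```python
import random

# Toy model of the recruiting tradeoff described above: family members
# are more loyal, but the family pool is small, so its best candidate
# is usually less skilled than the best of the general population.

def best_candidate(pool_size, loyalty):
    """Draw pool_size candidates with normally distributed skill and
    return the best one's (skill, loyalty)."""
    skill = max(random.gauss(50, 10) for _ in range(pool_size))
    return skill, loyalty

random.seed(0)
family = best_candidate(pool_size=8, loyalty=0.9)      # few relatives, high loyalty
populace = best_candidate(pool_size=800, loyalty=0.6)  # many candidates, lower loyalty

for name, (skill, loyalty) in (("family", family), ("populace", populace)):
    # Expected value discounts a general's skill by the chance he stays loyal.
    print(f"{name}: skill {skill:.1f}, loyalty {loyalty:.1f}, "
          f"expected value {skill * loyalty:.1f}")
```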
It gets even more complicated when these causes interact. A large part of the reason for the decline of smallholders and the rise of vast manors using serfs (slavery was in decline during that period) was the fact that farmers had to turn to the lords for protection from barbarians and roving bandits. The reason there were a lot of marauding bandits is that the armies were too busy fighting over who the next emperor was going to be to do their job of protecting the populace.
Dynasty Warriors and Romance of the Three Kingdoms, while heavily stylized and quite frequently diverging from actual history, nevertheless do a pretty good job of conveying the basics of the time period and region.
A big part of education today is memorization. Perhaps it is wrong, but it is going to stay here for a while anyway. And at least partially it is necessary; how else would one learn, e.g., the vocabulary of a foreign language?
So while it is great to invent games that teach principles instead of memorization, let’s not forget that there is a ton of low-hanging fruit in making the memorization more pleasant. If we could just take all the memorization of elementary and high school, and turn it into one big cool game, it would probably make the world a much better place. How much resources (especially human resources) do we spend today on forcing the kids to learn things they try to avoid learning? Instead we could just give them a computer game, and leave teachers only with the task of explaining things. Everyone could get today’s high school level education without most of the frustration.
Recently I started using Anki for memorization, and it seems to work great. But I still need some minimum willpower to start it every day. For me that is easy, because with my small amount of data, I usually get 10-20 questions a day. But if I tried to use it in real time for high-school knowledge, that would be much more. Also, today I know exactly why I am learning, but for a small child it is an externally imposed duty, with uncertain rewards in a very far future. So some additional rewards would be nice.
It could be interesting to make a school where in the morning the students would play some gamified Anki system, and in the afternoon they would work in groups or discuss topics with teachers.
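To make the “gamified Anki” idea concrete, here is a minimal sketch of the classic SM-2 scheduling rule that Anki’s scheduler is loosely derived from; a real system would wrap rewards and progression around something like this:

```python
def sm2_update(quality, reps, interval, ef):
    """One review of a card under the classic SM-2 algorithm, which
    Anki's scheduler is loosely based on. quality: 0-5 self-rating."""
    if quality < 3:              # failed recall: start the card over
        return 0, 1, ef
    if reps == 0:
        interval = 1             # first successful review: 1 day
    elif reps == 1:
        interval = 6             # second: 6 days
    else:
        interval = round(interval * ef)
    # Easy answers raise the ease factor, hard ones lower it (floor of 1.3).
    ef = max(1.3, ef + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return reps + 1, interval, ef

# Example: a new card answered well three times in a row.
reps, interval, ef = 0, 0, 2.5
for quality in (5, 4, 5):
    reps, interval, ef = sm2_update(quality, reps, interval, ef)
    print(f"next review in {interval} day(s), ease factor {ef:.2f}")
```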
Sure, that’s big too. I just didn’t talk about it as much because everyone else seems to be talking about it already.
These games already exist for many things, good enough that watching Let’s Plays of them is probably more efficient than most deliberately educational videos. It’s just finding them, and realizing what they are, that’s tricky.
Games relevant to this discussion include Rome: Total War and Kerbal Space Program. Look them up.
I think I would really enjoy watching Civ 5 LPs that simultaneously discuss world history.
That would be super cool.
Yes, these are good examples.
Have you played the Portal games? They include lots of the things you mention… they introduce how to use the portal gun, for example, not by explaining stuff but by giving you a simplified version first… then the full feature set… and then there are all the other things with different physical properties. I can definitely imagine some Portal Advanced game where you’d actually have to use equations to calculate trajectories.
Nevertheless… I’d really like to be persuaded otherwise, but the ability to read Very Confusing Stuff, without any working model, and make sense of it can’t really be avoided after a while. We can’t really build a game out of every scientific paper, due to the amount of time required to write a game vs. a page of text… (even though I’d love to play games instead of reading papers. And it sounds definitely doable with CS papers. What about a conference accepting games as submissions?)
I’ve played the first Portal game for a bit, and I liked it, but haven’t finished it because puzzle games aren’t that strongly my thing. I wonder whether not liking them much is a benefit or a disadvantage for an edugame designer. :-)
True enough. But I don’t think that very much of education consists of trying to teach this skill in the first place (though one could certainly argue that it should be taught more), and having a solid background in other stuff should make it easier when you do get to that point.
What I found fascinating about Portal is the effort they made in testing the game on players. There is a play-mode with developer commentary (though perhaps it’s only available after the first play-through) in which they comment on all the details they changed to make sure that the players learned the relevant concepts, that they didn’t forget them, and that they have enough hints to solve the puzzle (for example, it’s difficult to make a player look up). It’d be awesome if educational material (not necessarily just edugames) or even whole courses were designed and tested that well.
Thanks, I saw the developer commentary option but didn’t try it out. Now that you’ve told me what it consists of, I’ll have to check it out.
One point is that while memorizing specific causes of the fall of the Roman Empire may not be especially useful, acquiring the self-discipline necessary to do this without a game to motivate you might be very useful.
Perhaps, but if the task doesn’t also feel interesting and worthwhile by itself, then we’re effectively teaching kids that much of learning is dull, pointless and tedious, detached from anything that would have any real-world significance, and something that you only do because the people in power force you to. That’s one of the most harmful attitudes that anyone can pick up. Let’s associate learning with something fun and interesting first, and then channel that interest into the ability to motivate yourself even without a game later on.
There are many people having thoughts along these lines, I think. Before forging ahead too much on your own it would be worth poking around to see what’s already being done (e.g. Valve started some kind of initiative to get Portal played in schools as part of physics classes or something).
I mentioned that I was attending a Landmark seminar. Here is my review of their free introductory class that hopefully adds to the conversation for those who want to know:
Coaches - They are the people who lead the class, and I found them to be genuine in their belief in the benefits of taking the courses. These coaches were unpaid volunteers. I found their motives for coaching to be self-improvement and, to some degree, altruism. In short, it helped them, and they really want to share it.
Material - The intro course consists more of informative ideas than exercises. These ideas are also trademarked phrases, which makes them gimmicky and gives them more importance than the ideas really warrant. We were not told these ideas were evidence-based. Lots of information on how to improve one’s life was thrown around, but no research or empirical evidence was given. Not once were the words “cognitive science” or “rationality” used. I speculate that the value the course gives its students is not from the informative ideas, but probably from the exercises and the motivation one gets from being actively pushed by the coaches to pursue goals.
Final thoughts - If you are rationality-minded then this is not for you. I am no worse for going, and I do not believe that anyone who is rationality-minded and attends will be worse off either; however, I do believe that attending is most likely damaging to the rationality of a person who is naive about rationality to begin with. I have never attended CFAR, but just from browsing their website I can tell that Landmark is very far from what CFAR does. I think people in general would benefit more from attending CFAR than Landmark.
I am generally still very bad at steelmanning, but I think I am now capable of a very specific form of it. Namely, when people say something that sounds like a universal statement (“foos are bar”) I have learned to at least occasionally give them the benefit of the doubt instead of assuming that they literally mean “all foos are bar” and subsequently feeling smug when I point out a single counterexample to their statement. I have seen several people do this on LW lately and I am happy to report that I am now more annoyed at the strawmanners than the strawmanned in this scenario.
It sounds to me like it has a lot in common with the noncentral fallacy. There’s a general tendency to think of groups in terms of their central members and not their noncentral ones. This both makes sneaking in connotations by noncentral labels possible, and makes “all central foos are bar” feel like the same thing as “all foos are bar”.
Even more so with “No foo is a bar”. Those statements are most probably either very common definitions, like “no mammal is a bird”, and therefore not very informative, or else improbable, like “no man can live more than X minutes without oxygen, ever”.
In the latter case, even if X is huge, we can assume that maybe it can be done under some (as yet unseen) circumstances.
In other words, don’t be too hasty with universal negations!
Why can’t people say “some foos are bar” or “foos tend to be bar”? My default interpretation of “foos are bar” is “all foos are bar”. I tend to classify confident assertions that “foos are bar” with clear counterexamples as blustering. We already know from Philip Tetlock’s work that hedgehogs who make predictions based on simple models tend to be more confident, more widely quoted in the media, and more wrong than foxes who make equivocal, better-calibrated predictions based on more complicated models.
I think there may actually be a bit of a group coordination problem here—hedgehogs gain status from appearing confident and getting quoted in the media, but they’re spreading low-quality info. So it’s a case of personal gain at the expense of group loss. I’m inclined to call people out for hedgehog-style behavior as a way of dealing with this coordination problem. (In case it’s not obvious, I frequently see hedgehog-style predictions from LW-affiliated people and find them annoying and unconvincing.)
I mean, of course they can, but sometimes they won’t. People aren’t careful with their language and it’s uncharitable to assume that people mean what you think their words should have meant instead of what they most likely actually meant.
I also think you have a different prototypical case in mind than I do. I’m thinking the kind of nitpicking where someone says something like “fire is hot” and someone responds “nuh-uh, there’s a special type of fire you can make that is actually cool to the touch” or something like that.
Fair.
People can’t/don’t say that “some foos are bar” or “foos tend to be bar” because it is often less accurate than “all foos are bar” or better yet “foos are bar”. This is because truth is fuzzy, not binary or digital. For example, “some humans have two arms” gives you very little information. Do 10 out of seven billion humans have two arms? 6.99999 billion out of 7 billion? Maybe half of humans?
By contrast the statement, “humans each have two arms” or even “all humans have two arms” is mostly true, probably better than 99% true, despite the existence of rare counter examples. You can make useful plans based on the knowledge that “all humans have two arms”.
If we see truth as binary, and allow a mostly true statement to be invalidated by a single rare counterexample, we have lost a lot of real information. If I know that 100% of humans have two arms, I have a more complete and accurate, though imperfect, view of the world than if I know only that “some humans have two arms”.
Best of all, of course, is if I know that 99.9834% +/- 0.0026% of humans have two arms. However, absent such precise information, the statement “humans have two arms” is a pretty accurate and useful representation of reality.
Looks like Scott Adams has given Metamed a mention. (lotta m’s there...)
I find it particularly interesting because a while back he himself was a great example of a patient independently discovering, against official advice, that their rare, debilitating illness could be cured—specifically, that of losing his voice due to a neurological condition. He doesn’t mention it in the blog post though.
(At least, I think this is a better example to use than the woman who found out how to regenerate her pinky.)
[Aside] I’m not sure how I feel about Scott Adams in general. I enjoyed his work a lot when I was younger, but he seems very prone to being contrarian for its own sake and over-estimating his competence in unrelated domains.
I was a big Dilbert fan in my mid-teens and bought all his books. In one of them (The Dilbert Future, I think), he has this self-confessedly serious chapter about questioning received assumptions and thinking creatively. As an example, he suggests an alternate explanation for gravity, which he claims is empirically indistinguishable from the standard theory (prima facie, at least). His bold new theory: everything in the universe is just getting bigger all the time. So when we jump in the air the Earth and our bodies get bigger so they come back into contact. Seriously. Even as a fourteen-year-old, it took me only a few minutes to think of about five reasons this could not be true.
I read that book in the late 90s, and I’ve read very little by Scott Adams since then. In recent years, I’ve heard a few people cite him as a generally smart and thoughtful guy, and I have a very hard time reconciling that description with the author of that monumentally stupid chapter.
It’s conceivable that he focuses down on things that are important to him, and is quite content to do more or less humorous BS the rest of the time.
IIRC, in that chapter, he also discussed how quantum mechanics (specifically the double slit experiment) meant that information could travel backwards in time...
I don’t remember that specifically, but it would be one of the less crazy things he says. There are sound theoretical motivations for a retro-causal account of quantum mechanics, although a successful retro-causal model of the theory is yet to be constructed (John Cramer’s transactional interpretation comes close).
However, I do remember Adams endorsing something like The Secret in the chapter, where you can change the world to your benefit merely by wanting it enough. I don’t entirely recall if he sees this as a consequence of quantum retro-causality, but I think he does, and if that’s the case then yeah, the quantum stuff is batshit too.
Yes, he does. It’s not necessarily “wanting it enough,” though; he specifically instructs that you have to pick a sentence that describes what you want, such as “I want to get rich in the stock market”—specific, but not too specific—and write it, by hand, in a notebook designated for this purpose, at least 10 times each night. He claims that by doing this, he did in fact make a lot of money in the stock market, and became the most popular cartoonist in the world by a metric he specified (some index, I don’t remember which).
Not really connected to the quantum stuff, and possibly not as crazy. I think he mentions some possibility that all it actually does is force you to focus on your goals, which subconsciously makes you more responsive to opportunities, or something.
Confession: I was taken in by that section too for a while … a long while. In fact, when Eliezer’s quantum physics series started, my initial reaction was, “oh, I wonder how he’s going to handle the backwards-in-time stuff!”
I agree in a lot of respects. But if you can cure such a major disorder when professionals, who are supposed to know this stuff, think it’s impossible, and do it by your own research … well, you have credibility on that issue.
I’m a little surprised he didn’t try Alexander Technique, an efficient movement method which was developed by F.M. Alexander to cure his serious problems with speaking—problems which sound a good bit like vocal dystonia.
The problem may be that F.M. Alexander was an actor, and his technique has remained best known in the theater arts community.
In other news, Too Loud, Too Bright, Too Fast, Too Tight is about people whose range of sensory comfort is mismatched to what’s generally expected. It’s a problem that doesn’t just happen to people on the autistic spectrum.
There’s some help for it—what was in the book was putting people in a non-stressful environment and gradually introducing difficult stimuli—but working with this problem is cleverly concealed under occupational therapy, where no one is likely to find it.
I’m working on an analysis of Google services/products shutdown, inspired by http://www.guardian.co.uk/technology/2013/mar/22/google-keep-services-closed
The idea is to collate as many shuttered Google services/products as possible, and still live services/products, with their start and end dates. I’m also collecting a few covariates: number of Google hits, type (program/service/physical object/other), and maybe Alexa rank of the home page & whether source code was released.
This turns out to be much more difficult than it looks, because many shutdowns are not prominently advertised, and many start dates are lost to our ongoing digital dark age (for example, when did the famous & popular Google Translate open? After an hour applying my excellent research skills, plus the help of #lesswrong and no less than 5 people on Google+, the best we can say is that it opened some time between 02 and 08 March 2001). Regardless, I’m up to 274 entries.
The idea is to graph the data, look for trends, and do a survival analysis with the covariates to extrapolate how much longer random Google things have to live.
Does anyone have suggestions as to additional predictive variables which could be found with a reasonable amount of effort for >274 Google things?
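For anyone curious what such an analysis might look like mechanically, here is a minimal sketch using Python’s lifelines library; the rows and covariates are invented toy data standing in for the real list:

```python
# Minimal sketch of a survival analysis like the one described above.
# All rows and covariate values here are made up for illustration.
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

df = pd.DataFrame({
    "days":       [300, 800, 1400, 2000, 2700, 3100, 3600, 4200],  # observed lifetime
    "dead":       [1,   1,   1,    0,    1,    0,    1,    0],     # 0 = still alive (censored)
    "log_hits":   [4.0, 4.5, 5.2,  6.8,  5.9,  7.5,  5.0,  7.0],   # log10 of Google hits
    "is_program": [1,   0,   1,    0,    1,    0,    0,    1],     # product-type dummy
})

# Kaplan-Meier curve: P(service still alive) as a function of its age.
kmf = KaplanMeierFitter()
kmf.fit(df["days"], event_observed=df["dead"])
print(kmf.survival_function_)

# Cox regression: how each covariate shifts the hazard of being shut down.
cph = CoxPHFitter()
cph.fit(df, duration_col="days", event_col="dead")
cph.print_summary()
```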
Do you have registered user numbers for those services where this is meaningful?
Also, I assume you’ve seen this and this, but just in case you hadn’t . . .
Hm, no. I figured that I would be able to get such numbers for a handful of services at best, and it wasn’t worth the effort. (I mean, Google didn’t even release user count for Reader as far as I know. So I’d be able to get random user counts for like YouTube and Gmail and that’s it. If even those.)
Yeah, I’ve seen those.
Might the possibility of registering be an interesting variable? Or the possibility of paying money for the service?
Confession: I’m interested in this type of study because the second article I referenced mentioned that Google Voice seemed on a doomed trajectory, and I use Google Voice all the time.
Update: I’ve finished, and I’m afraid it doesn’t look good for Voice.
I appreciate it—I suspected as much based on the Slate article. Even before reading it, I was constantly surprised that Google hadn’t announced Voice would cost money. And Voice is hardly central to Google’s apparent mission, the way you correctly note Calendar seems central.
I’m trying to estimate how much longer it will last—i.e., when I should start looking for a different service. Given the low likelihood of five-year survival, and that the product is about five years old, I should probably get a move on.
Ah, that’s a good suggestion: ‘is someone paying Google for this?’ This subsumes the advertising covariate I was musing about (I didn’t bring it up because looking at a bunch of the dead services, I didn’t know how I could possibly check whether advertising was involved). I’ll add that.
And yes, I would be worried about Voice’s long-term prospects, but my impression is that it won’t be going away within the next 5 years, say. They’ve sunsetted the Blackberry app, and neglected the service, but that still leaves the main service and 3 other apps/services according to my current list.
You might want to distinguish between free, freemium (like Google Drive, with a free intro tier), and paid.
At this point I’m up to 123, so I’m not really keen on going back and recoding all of them; it’s a lot of work, and the profit variable is already a decent predictor.
It was recently brought to my attention that Eliezer Yudkowsky regards the monetary theories of Scott Sumner (short overview) to be (what we might call) a “correct contrarian cluster”, or an island of sanity when most experts (though apparently a decreasing number) believe the opposite.
I would be interested in knowing why. To me, Sumner’s views are a combination of:
a) Goodhart’s folly (“Historically, an economic metric [that nobody cared about until he started talking about] has been correlated with economic goodness; if we only targeted this metric with policy, we would get that goodness. Here are some plausible mechanisms why …”—my paraphrase, of course)
b) Belief that “hoarded” money is pure waste with no upside. (For how long? A day? A month?)
If you are likewise surprised by Eliezer’s high regard for these theories, please join me in encouraging him to explain his reasoning.
To address your (a) comment, some countries have implemented close approximations to NGDP level targeting after the 2008 crisis, and have done well. They include most obviously Iceland (despite a severe financial crisis), and some less obvious instances like Australia, Poland and Israel. One could point to the UK as the clearest counterexample, but just about everyone agrees that they have severe structural problems, which NGDPLT is not intended to address. And even then, monetary easing has allowed the conservative government to implement fiscal austerity without crashing the economy—a crash was widely expected and there was a lot of public concern (compare the situation in the US wrt the “fiscal cliff” and “sequestration” scares; here too, the Fed offset the negative fiscal effect by printing money).
As for (b), nobody argues that money hoarding is a bad thing per se. But it needs to be offset, because practically all prices in the economy are expressed in terms of money, and the price system cannot take the impact without severe side effects and misallocations. Inflation targeting is a very rough way of doing this, but it’s just not good enough (see George Selgin’s book Less than Zero for an argument to this effect). ISTM that this is not well understood in the mainstream (“NK”) macro literature, where supply shocks are confusingly modeled as “markup shocks”. I have seen cutting-edge papers pointing out that these make inflation targeting unsound (sorry for not having a ref here).
None of the examples have targeted NGDP, which is what Sumner needs to be true to have supporting evidence. Rather, they had policies which, despite not specifically intending to, were followed by rising NGDP. The purported similarity to NGDPLT is typically justified on the grounds that the policy caused something related to happen, but there is a very big difference between that and directly targeting NGDP. Hence these examples can’t demonstrate that targeting a metric (that, again, no one even cared about until Sumner started blogging about it) will have the causal power that is claimed for it.
I disagree; I have yet to see any anti-hoarders mention anything positive whatsoever about hoarding; they take it as a given that eliminating it is good. Landsburg says it better than I can: the very people promoting anti-hoarding policies lack any framework in which you can compare the benefits of hoarding to the hoarders against its costs, and thus know whether it’s beneficial on net. The best answer he gets is essentially, “well, it’s obvious that there’s a shortfall that needs to be rectified”—in other words, it’s just assumed.
To find an example of anyone saying anything positive about hoarding, you have to go to fringe Austrian economists, like in this article.
But until you’ve quantified (or at least acknowledged the existence of) the benefits of hoarding, you can’t know if these supposed misallocations are worse than the benefits given by the hoarding. You can’t even know if they are misallocations, properly understood.
For once you accept that there’s a benefit to hoarding, then the changes in prices induced by it are actually vital market signals, just like any price. Which would mean that you can’t eliminate the price change without also destroying information that the market uses to improve resource use. I mean, oil shocks cause widespread price changes, but any attempt to stop these price changes is going to worsen the misallocation problem.
Toy example to illustrate the benefits, and important signal sent by, hoarding: let’s say we have a class of typical investors, with no special non-public knowledge about specific companies. So when they invest, they invest in the economy as a whole. (Let’s say they won’t even consider using this part of their money for consumption.) But! 70% of the economy’s investment venues are unsustainable and are actually destroying value in a way not currently obvious. In that case, it would be much better for these potential investors to hoard, rather than further advance this malinvestment. Sure, they’ll starve the good 30% of projects of funds, but they’ll also pull back on the bad 70%.
So I have yet to see any actual recognition of the benefits of hoarding among this group, which puts them in a ridiculous position. If holding money is bad, then the optimal situation is for any money received to be instantly spent on something else (whether consumption or investment). But this requires that you know what you’re going to spend the money on before you earn it—which just takes us back to barter! Thus we see the benefit of hoarding/holding money: retaining the option value when you lack certainty about what you will spend it on. It thus signals consumers’ uncertainty that they will be able to enter sustainable patterns of trade, and cannot be costlessly squashed (just as another school of economics once thought of interest—that it could be zeroed without negative consequence).
I think my examples do constitute supporting evidence of some kind. Yes, it would be good to have examples of countries specifically targeting NGDP, to prevent spurious correlations or Lucas critique problems. But even so, Iceland and to a lesser extent, Poland—and, to be fair, the UK—specifically accepted a rise in inflation in order to sustain demand—it wasn’t a simple case of exogenously strong RGDP growth. (I think this might also apply to Australia, actually. Their institutional framework would certainly allow for that.) This makes the evidence quite credible, although it’s not perfect by any means.
Also, Sumner was not at all the first economist to care about NGDP as a possible target. He is a prominent popularizer, but James Meade and Bennett McCallum had proposed it first.
Your example of the “benefits of hoarding” doesn’t address the very specific problems with hoarding the unit of account for all prices in the economy, when prices are hard to adjust. Yes, money has a real option value, so money hoarding might signal some kind of uncertainty. However, you have not made the case that this “signaling” has any positive effects, especially when the operation of the price system is clearly impaired. By analogy, if peanuts were the unit of account and medium of exchange, then widespread hoarding of peanuts might signal uncertainty about the next harvest. But it would still cause a recession, and it wouldn’t actually cause the relative price of peanuts to rise (or rise much at any rate), which is what might incent additional supply.
Moreover, in practice, an uncertain agent can attain most (if not all) of the benefit of hoarding money by holding some other kind of asset, such as low-risk bonds, gold or whatever the case may be. It’s not at all clear that hoarding money specifically provides any additional benefit, or that such incremental benefits could be sustained without inflicting greater costs on other agents.
Yes!
The comment is from a Hacker News thread about Bitcoin hitting $100. It would be cool to have him also expand more on Bitcoin itself, which he seems to regard as destructive but not necessarily doomed to fail. Here he entertains the idea of combining NGDP level targeting (which I don’t understand) with the best parts of Bitcoin. This all sounds very interesting.
I downloaded a Bitcoin client a couple weeks ago and was going to buy a few bitcoins, but the inconvenience of having to get a Mt. Gox account or something made me keep putting that off. Whoops. Hopefully this’ll teach me to be less of a procrastinator.
You think that’s bad? I considered buying a ~hundred bitcoins after the last crash, when they were going for less than $1, but could never be bothered. :-)
Bitcoin is a form of currency that’s supposed to be used. Now that so many people jump on the speculation train, and the rest mostly hold onto their bitcoins in the hope the price keeps rising, the practical viability of bitcoins can be called into question. I saw a stick of RAM for sale for the equivalent of over a thousand dollars. For a currency to rise so fast is terribly disruptive, and the rise (lack of supply in relation to demand) itself creates a vicious circle, since the faster it rises the more many bitcoin holders are tempted to further keep their bitcoins out of circulation.
If the price is calculated from only the very small portion that’s up for sale (with the rest being held, often for speculative purposes), compared to a lot of prospective buyers like you who want to join the gold rush—what do you think could easily happen once some people start cashing in? The price drops; seeing the price dropping, more people want to cash in; and suddenly there’s a very large portion up for sale, in conjunction with a loss of interested buyers (most don’t buy into a falling market, tautologically).
My advice, which I may even follow myself: buy at 100, sell at 150, never look back.
Well… what I expected to happen when I downloaded the client was for the value of bitcoins to stay about the same (as it had done in the last couple months of 2012) or to rise by 10%-ish per week (as it had done in the first couple months of 2013). If I had bought some when they were at 50-ish, I would definitely be selling most of them now. And right now I don’t feel like buying something that costs 20% more than it did literally yesterday.
Forgive me for stating the obvious: this sounds like the sunk cost fallacy. There’s a cost in that you did not buy coins when they were cheaper, and though this does affect how you feel about the issue, it shouldn’t (instrumental-rationally) affect your choices.
I did buy coins when they were at ~40$, and I was then regretting that I hadn’t bought more when two weeks earlier they were at 10$. When they were at 70$ I chose to buy some more—and I regretted not buying more when they were at 40$. But both my buy at 40$ and my buy at 70$ were good ones.
Now bitcoins are at around 141$ to 143$. Whether to buy or not buy at this point should depend on an estimation of whether the price is going to go up or down from here—and your estimation of how soon and how far the price of bitcoin is going to rise or crash from this point onwards. There’s always a risk and a chance.
I was going to reply “Actually, I meant it in the sense that given that their price has changed so quickly, I don’t trust their price to not fall by 40% while I’m sleeping”, but I’m afraid that that would just be a rationalization. (I might buy some if their price doesn’t change so much in the next couple days.)
Okay, then let me just warn people in general that transferring money from a bank to the usual bitcoin exchanges (mtgox, bitstamp, etc) may by itself take a couple days—they don’t tend to accept some of the faster methods like paypal.
That’s what prevented me from owning bitcoins yesterday afternoon while their price halved.
This delay has both accidentally helped and hindered me in the past—it helped when it prevented me from buying in at 30 before the first crash in 2011; it hindered me now, when I couldn’t buy in at 90 as I had wanted before the price rose to 140.
My bitcoin transactions during the last couple months have perhaps gotten me a 5000$ gain (or so) on the whole. It’s sad to think that I could have gained four times as much if I had sold one day earlier than I did; still I came out of this round benefitting, as I did back in 2011 (back then I had perhaps gotten a 2000 or 3000$ gain).
Now, I’m debating with myself whether to reinvest the money I got on bitcoin, or if the price is going to drop further… (it’s currently around 70$ in the exchanges which are still open like bitstamp.net)
You see that cute near-vertical drop around 15:00 UTC yesterday? That was while I was on the bus on my way home, when I had about 0.42 bitcoins on bitcoin-24.com (I’m not crazy enough to play with more money than that at the moment). #$%&. By this morning, I had somehow managed to get back all of the value through sheer luck by repeatedly selling and buying at the right times. Now bitcoin-24.com is down, and I don’t know whether that happened before or after my offer to sell most of the bitcoins I had left at €80/BTC was accepted. (From this graph I guess I was lucky.)
Sounds like maybe it wasn’t just timing. :-|
So do you have a to-do list that you wrote down “buy a few bitcoins” in? If not, maybe you didn’t actually procrastinate; maybe you just forgot.
I don’t.
More like, first the former, then the latter. ;-)
The free will page is obnoxious. There have been several times in recent months when I have needed to link to a description of the relationship between choice, determinism and prediction but the wiki still goes out of its way to obfuscate that knowledge.
That’s a nice thought. But it turns out that many lesswrong participants don’t try to solve it on their own. They just stay confused.
There have been some other discussions of the subject here (and countless elsewhere). Can someone suggest the best reference available that I could link to?
The Free will (solution) page?
You know you spend too much time on LW when someone mentioning paperclips within earshot startles you.
How many people will agree with a statement depends on what typeface it’s written in.
From this day forward all speculation and armchair theorizing on LessWrong should be written in Comic Sans.
For some reason, my mind is picturing that sentence written in Comic Sans. (Similar things often happen to me with auditory imagery, e.g. when I read a sentence about a city I sometimes imagine it spoken in that city’s accent, but this is the first time I recall this happening with visual imagery.)
Shouldn’t it? Isn’t epistemic hygiene correlated with font choice in known cases? I mean, if someone posts something in Comic Sans …
I’d expect that to be mostly screened off by e.g. grammar and wording, though. (If I had read that passage about asteroids as existential risk written in Comic Sans, I would probably have assumed that the person who chose the font wasn’t the same person who wrote the passage.)
Eyeballing this, the effect size is tiny. Looking at their own measurements, it is statistically significant, but barely.
ADDED: Hmm… I missed the second page. Over there is more explanation of the analysis. In particular:
Point taken. This is large enough that it might be useful. However, I don’t think it is a large enough bias to be important for rationalists.
Depends. It would certainly be interesting to know for, say, the LW default CSS. I think I’ll A/B test this Baskerville claim on gwern.net at some point.
EDIT: in progress: http://www.gwern.net/a-b-testing#fonts
My A/B test has finished: http://www.gwern.net/a-b-testing#fonts
Baskerville wasn’t the top font in the end, but the differences between the fonts were all trivial even with an ungodly large sample size of n=142,983 (split over 4 fonts). I dunno if the NYT result is valid, but if there’s any effect, I’m not seeing it in terms of how long people spend reading my website’s pages.
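For anyone wanting to sanity-check this kind of result themselves, a minimal sketch of the significance test, using simulated reading times rather than the real logs (the font list and all numbers below are made up):

```python
# Sketch of testing whether mean time-on-page differs across four fonts.
# The reading times are simulated; the real test had n=142,983 visits.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
fonts = ["Baskerville", "Georgia", "Helvetica", "Courier"]
# Simulate seconds-on-page per font; identical distributions here,
# i.e. the "no real effect" case.
samples = [rng.exponential(scale=90.0, size=35_000) for _ in fonts]

f_stat, p_value = stats.f_oneway(*samples)
print(f"one-way ANOVA: F = {f_stat:.3f}, p = {p_value:.3f}")

# With samples this large, even a trivial true difference would reach
# p < 0.05, so the effect size matters more than the p-value:
for name, s in zip(fonts, samples):
    print(f"{name}: mean {s.mean():.1f}s on page")
```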
I’m doing a research project on attraction and various romantic strategies. I’ve made several short online courses organizing several different approaches to seduction, and am looking for men 18 and older who are interested in taking them, as well as a short pre- and post-survey designed to gauge the effectiveness of the techniques taught. If you want to sign up, or know anyone who might be interested, you can use this Google form to register. If you have any questions, comment or PM me and I’ll get back to you.
ETA: Since someone mentioned publication, I thought I should clarify. This is specifically a student research project, so unlike a class project I am aiming for a peer-reviewed publication; however, the odds are much slimmer than if someone more experienced/academically higher-status were running it. Also, even if it doesn’t get formally published, I will follow the “Gwern model”. That is to say, I’ll publish my results online along with as much of my materials as I can (the courses are my own work plus publicly available texts, but I only have a limited license for the measures I’m using).
Excellent! An area in which there has been far less formal research than the usefulness of the knowledge calls for. I’ll be interested in seeing your results once you publish them.
Are you also going to try to gauge how friendly to women each technique is?
That is not something the study is designed to measure; however, it was a major consideration in designing the curricula.
Alright.
Your sign up form doesn’t say anything about the amount of time/effort that you expect students to invest into the course.
Thanks for catching that. I’ve edited the instructions to be clearer. For reference, here is the added text: The basic lesson format is a short reading (a few pages), an assignment applying the reading to your life, and a short follow-up/written reflection. There is some variability, but the assignments tend to be short (in the vicinity of 5 minutes) and/or designed to be worked into normal social interaction. That said, the normal-social-interaction part does assume that you are frequently around women you have some interest in flirting with, asking out, etc. If this is not the case, finding suitable women to interact with could take significantly more time.
Here is a blog which asserts that a global conspiracy of transhumanists controls the media and places subliminal messages in pop music such as the Black Eyed Peas music video “Imma Be” in order to persuade people to join the future hive-mind. It is remarkably lucid and articulate given the hysterical nature of the claim, and even includes a somewhat reasonable treatment of transhumanism.
http://vigilantcitizen.com/musicbusiness/transhumanism-psychological-warfare-and-b-e-p-s-imma-be/
EDIT: I see this was previously posted back in 2010, but if you haven’t witnessed this blog yet it is worth a look.
Good to know that someone’s keeping the ol’ Illuminati flame burning. Pope Bob would be proud.
The thing I find most curious about the Illuminati conspiracy theory is that if you look at the doctrines of the historical Bavarian Illuminati, they are pretty unremarkable to any educated person today. The Illuminati were basically secular humanists — they wanted secular government, morality and charity founded on “the brotherhood of man” rather than on religious obedience, education for women, and so on. They were secret because these ideas were illegal in the conservative Catholic dictatorship of 18th-century Bavaria — which suppressed the group promptly when their security failed.
If CFAR becomes at all successful, conspiracists will start referring to it as an Illuminati group. They will not be entirely wrong.
Might I interest you in the theories of Mencius Moldbug?
Please give the poor sap a link to a summary of them; even “A gentle introduction to Unqualified Reservations” made me go tl;dr a third of the way through Part 1.
(What little I know about reactionary ideas comes from this, but I don’t know how accurate that is.)
They modeled themselves after the Freimauers and draw a lot of their membership from them. Being a member of the Illuminati required a pledge of obedience. I would be very surprised if CFAR introduces that kind of behavior. You don’t need pledges of obedience to advocate secular humanism.
Like the Freemasons, the Illuminati also performed secret rituals.
That’s not really true. Karl Theodor, who banned them, was a proponent of the Enlightenment. He didn’t want secret groups that pledge obedience to gain political power, and he didn’t want his government to be overthrown. A lot of French people died in the French Revolution.
Offhand, I haven’t seen any LWers write about having chemical addictions, which seems a little surprising considering the number of people here. Have I missed some, or is it too embarrassing to mention, or is it just that people who are attracted to LW are very unlikely to have chemical addictions?
Too busy with the internet addictions?
Could be, but it seems worth finding out.
Add a poll to your top-level comment. Suggested options: no chemical addiction, had one in the past, have one today.
Caffeine addiction is pretty popular, and I bet we have quite a few on adderall. Is that not what you mean?
Have you had a chemical addiction? [pollid:422]
Unfortunately, the poll options don’t seem to include ticky-boxes, so I don’t see an elegant way to ask about which chemicals.
As usual, caffeine addiction is so common that it needs to either be explicitly excluded or else its inclusion pointed out so readers know how meaningless the results may be for what they think of as ‘chemical addiction’.
My original thought was to phrase it as “chemical addiction generally considered destructive”, but that’s problematic, too. What about sugar?
Sugar is incredibly destructive. It is a major, perhaps the major, cause of diabetes, heart disease, obesity, and other diseases of civilization.
*wants to change answer now*
I get withdrawal symptoms if I miss too many antidepressant pills. Does that count?
If that counts I have a serious dihydrogen monoxide problem as well...
Yeah me too, I drink the stuff like water.
I wouldn’t say so. The definition of addiction is foggy enough that some discussion first would be a good idea if I want to do a more substantial poll.
Nicotine, caffeine, simple carbohydrates. (Didn’t even realize the last one until I started getting hit with withdrawal—I’ve never been addicted to sugar before. But since I’ve cut it out of my diet this last time, which I’ve done many times before without issue, I’ve started getting splitting headaches that are rapidly remedied by eating an orange.)
I have alcohol cravings from time to time, but I’m not addicted, since drinking is actually infrequent for me, and not doing so doesn’t cause me any issue. That’s another recent development which is making me consider clearing out the liquor cabinet. (I did have alcohol cravings once before, after my grandfather died. And my grandmother just died after a few years of progressive decline—she had a form of dementia, possibly Alzheimer’s—so it may be depression. I don’t -feel- depressed, but I didn’t feel depressed last time I was, either, and it was only obvious in retrospect.)
Thanks for writing that up. I probably should have realized that cravings can vary a lot for individuals, but I hadn’t thought about it. I’ve also never heard of a sugar craving which manifests as headaches—my impression is that typical sugar cravings manifest as obsessive desire without more obviously physical symptoms.
I’ve actually never had a desire for sugar. Not even when I was a child—we kept a bowl full of candy and chocolate which I almost never touched. (I preferred, odd as it may sound, things like Brussels sprouts, although I’ve stopped having any desire for -those- after getting moldy ones once too often.)
I crave spicy foods the way most people crave sweet foods. My favorite is spicy pickled asparagus, which is impossible to find. (Spicy pickled okra is easier, and almost as good, though.) That may actually count as an addiction as well, come to think of it. (Apparently spicy foods induce endorphin and dopamine production?)
So you’ve got a strong withdrawal reaction to sugar without having a desire for it?
Sometimes something similar happens to me with food in general—if I have eaten very little in the past dozen hours, sometimes I start feeling dizzy, lazy, and sad but not unusually hungry. (I haven’t tested whether different food groups have different effects.)
(For example, I woke up at noon this morning and now it’s almost 2 p.m., but I don’t feel particularly motivated to get out of bed; but I know that if I got up and went to eat something I’d feel much more energetic.)
This is starting to remind me of the dihydrogen monoxide joke.
Does having ATP withdrawal symptoms count as an addiction?
Does my caffeine addiction count? If I stop drinking coffee, I anticipate mild withdrawal symptoms. I periodically do this when I find myself drinking lots of coffee; a few days without increases the effectiveness of the caffeine later.
I take prescription adderall, and am decidedly less functional without it. I sometimes skip a day on the weekends. I anticipate no withdrawal symptoms, but would be far less willing to stop taking it than the caffeine.
One evening a number of years ago, I smoked a couple cigarettes at a party. For almost two weeks afterwards, I reacted to seeing or smelling cigarettes by wanting one. I didn’t have any more, and those thoughts went away.
Which of those would you count as addictions? I can imagine plenty of obvious cases either way, but the boundary seems awkward to define, and borderline cases like caffeine and sucrose are very common. (I answered yes in the poll, because of the caffeine.)
For what it’s worth, what I was interested in was getting deep enough into the obvious life-wreckers that it was urgent to stop using them. Even that’s vague, of course. Alcohol has short term emotional/cognitive effects which cause much more damage faster than cigarettes can.
It may be worth creating another poll which clarifies whether or not to count socially accepted addictions such as caffeine; some people seem to have answered on the assumption that it doesn’t count, while others have answered on the assumption that it does.
Caffeine here; based on serious withdrawal symptoms on quitting.
I used to be a smoker, and I went through a phase of drinking too much alcohol when I was younger (this was especially worrying as there are many alcoholics in my family). I managed to give up smoking and my alcohol consumption is much healthier now.
I’ve also noticed that I haven’t seen many people on LW worrying about how to cut down on/give up drinking, smoking or drugs. My impression is that LWers are not all that likely to do things that are self destructive in that way.
I answered “No”, but one might quibble about whether I actually qualify as not addicted to caffeine. (I’m operationalizing “addiction” as ‘my performance when I don’t use X for a couple of days is substantially worse than the baseline level from before I started regularly using X in the first place, or from when I stop using X for several months’. I am a bit less wakeful if I let go of caffeine for a couple of days than the level I revert to when I let go of it for months, but not terribly much so, and there are all sorts of confounds anyway.)
I started a blog about a month or two ago. I use it as a “people might read this so I better do what I’m committing to do!” tool.
Link: Am I There Yet?
Feel free to read/comment.
I get the impression that there is something extremely broken in my social skills system (or lack thereof). Something subtle, since professionals have been unable to point this out to me.
I find that my interests rarely overlap with anyone else’s enough to sustain (or start, really) conversation. I don’t feel motivated to force myself to look at whatever everyone else is talking about in order to participate in a conversation about it.
But it feels like there’s something beyond that. I was given the custom title of “the confusenator” on one forum. I was straight-up told I was boring when I interjected in a round of bickering that interrupted a debate (also on an internet forum). I find myself being ignored in many places, even those specifically narrow enough in focus to increase interest overlap. (No, I don’t post enough at LW for me to count it at this point in time.)
In real life, I physically can’t do the all-important eye contact thing, and I’m too self-conscious/anxious/whatever to use a great deal of volume when speaking. And I can’t see lots of things that convey important information about whether someone is available for talking to / nonverbal cues / etc. So real life, I kinda understand.
But none of those apply to the internet, and I still wind up stuck in my own little world there.
Surely I’m missing something?
Perhaps more practice?
Your writing isn’t very clear.
http://lesswrong.com/lw/ou/if_you_demand_magic_magic_wont_help/8o31 is a good example. To me it isn’t clear what point you want to make with that post.
I get the impression that you try to list a few facts that you consider to be true instead of trying to make a point. It might help to edit your writing to remove words that don’t advance the point you want to make.
When it comes to real-life conversations, lack of interest overlap is rarely the main problem. Even if you know nothing about a topic, you can have a conversation where the other person explains something about it to you.
The problem is more emotional. If you are anxious, then it’s hard for a conversation to flow.
*For disclosure, my own writing isn’t the clearest either. It’s still a lot better than it was in the past.
If you supply a sample or two of your writing in context from other forums, perhaps it will be easier for someone here to see a pattern of what you’re doing.
If I stay up ~4 hours past my normal waking period, I get into a flow state and it becomes really easy to read heavy literature. It’s like the part of my brain that usually wants to shift attention to something low effort is silenced. I’ve had a similar, but less intense increase in concentration after sex / masturbation.
Anyone else had that experience?
A very common phenomenon is that people are inhibited from doing work because they don’t like the quality of what they produce. If they are a little sleep-deprived or drunk, they can avoid this inhibition. I think you’re talking about something else, though.
This seems like a super important insight for creativity. Is there a way to practice caring less about initial quality? I’m thinking of the obvious approach of just brainstorming and stream-of-consciousness writing with as little filter as possible.
How about meditation? Or the cognitive approach of reminding yourself that the path to excellence requires both mistakes and messing around?
Or the even more obvious approach of just getting drunk?
Yes, I meant cultivating it in a non-impaired state.
You don’t think practice while drunk would transfer to non-drunk? I guess there’s the issue of state-dependent memory, but I think a plausible strategy is to start your creative sessions drunk and then gradually decrease the amount of alcohol involved over time.
Alcohol is a depressant—it binds to pre-synaptic receptors for the brain’s major inhibitory neurotransmitter, gamma-aminobutyric acid (GABA). The delta-subunit-containing GABA receptor, to which the alcohol’s ethanol has now bound, allows an influx of negatively charged chloride ions into the pre-synaptic GABAergic (GABA-transmitting) cell; the cell’s charge is lowered, which inhibits further action potentials. Cells that transmit GABA will inhibit other cells; hyperpolarising (making the cell’s net charge negative) the inhibitory pre-synaptic GABAergic cell dis-inhibits the post-synaptic cell, which may be excitatory or inhibitory. In the general case of the post-synaptic cell being excitatory, one’s brain will become less inhibited—which is not a good thing for cognitive computation.
Due to physics I confess I don’t presently comprehend, an entirely uninhibited brain will fire in synchrony. Synchrony of action-potential frequency has been observed and measured to result in decreased cognitive performance: asynchronous brain activity is high-performance brain activity (beta waves). I understand it from a reactivity perspective—in order to respond quickly to a stimulus, one needs to inhibit one’s current action and respond to that stimulus; GABAergic neurones are critical to that inhibition.
In sum, while a buzzed person may feel very happy and jumpy, their reduced cognitive ability to inhibit active firing patterns hinders cognitive performance (they are jumpy because motor neurones are being dis-inhibited, too).
With sufficient ethanol saturation, voltage-gated sodium channels become less able to detect changes in the charge of their surroundings; non-polar, lipid-like ethanol does not conduct electricity. This impaired ability to respond to environmental changes around the cell fetters neurone firing, leading to a drunkard’s depressed, or rather retarded, behaviour.
From a speculative standpoint, perhaps the increased excitability and decreased potential for inhibition conduce to fewer cognitive interruptions along the lines of, “Hey, listen! To experience an instant reward go to Hyrule!” One’s thoughts, literally, cannot be stopped enough to have that thought.
While ethanol is a neurodepressant overall, its effects can initially mirror those of a stimulant (‘biphasic’).
It’s still depressing neurones; the neurones it’s depressing are inhibitory neurones, which dis-inhibits excitatory neurones. Your comment prompted me to do a research-check, and it turns out I was completely wrong (don’t theorise beyond your nose, eh?). The above comment now reflects reality.
~5 hours after I usually go to bed is an incredibly productive period of time for me. So the timing doesn’t correspond, but the “part of my brain that usually wants to shift attention” does.
Is there any particular protocol on reviving previously-recurring threads that are now dormant? I had some things to put in a Group Rationality Diary entry, but there hasn’t been a post since early January. I sent cata a message a few days ago; haven’t heard back.
Alas, no such protocol exists. So just go for it.
Start a new Group Rationality Diary post.
Strong AI is hard to predict: see this recent study. Thus, my own position on Strong AI timelines is one of normative agnosticism: “I don’t know, and neither does anyone else!”
Increases in computing power are pretty predictable, but for AI you probably need fundamental mathematical insights, and it’s damn hard to predict those.
In 1900, David Hilbert posed 23 unsolved problems in mathematics. Imagine trying to predict when those would be solved. His 3rd problem was solved that same year. His 7th problem was solved in 1935. His 8th problem still hasn’t been solved.
Or imagine trying to predict, back in 1990, when we’d have self-driving cars. Even in 2003 it wasn’t obvious we were very close. Now it’s 2013 and they totally work, they’re just not legal yet.
Same problem with Strong AI. We can’t be confident AI will come in the next 30 years, and we can’t be confident it’ll take more than 100 years, and anyone who is confident of either claim is pretending to know too much.
We can still try. As it happens, a perfectly relevant paper was just released: “On the distribution of time-to-proof of mathematical conjectures”
They took the 144 conjectures from the Wikipedia list of conjectures; their population covariate is just an exponential equation they borrowed from somewhere. Regardless, they turn in the result one would basically expect: a constant chance of solving a problem in each time period. (In turn, this and the correlation with population suggest to me that solving conjectures is more parallel than serial: delays are related more to how much mathematical effort is being devoted to each problem.)
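To spell out what a constant chance per period implies, here’s a toy simulation with made-up numbers (a sketch of the memoryless model only, not the paper’s actual fit):

```python
import random

def toy_time_to_proof(n_conjectures=10_000, hazard=0.02, horizon=500, seed=0):
    """Each open conjecture has the same fixed probability of being
    proved in any given year (the 'constant chance' model). The
    hazard rate and horizon here are made-up illustration numbers."""
    random.seed(seed)
    waits = []
    still_open = 0
    for _ in range(n_conjectures):
        for year in range(1, horizon + 1):
            if random.random() < hazard:
                waits.append(year)
                break
        else:
            still_open += 1
    # Memorylessness: among conjectures still open at year 50, the
    # expected further wait is unchanged at roughly 1/hazard years.
    print(f"solved: {len(waits)}, still open: {still_open}")
    print(f"mean time to proof: {sum(waits) / len(waits):.1f} years "
          f"(1/hazard = {1 / hazard:.0f})")

toy_time_to_proof()
```

One consequence worth noticing: under the pure constant-hazard model, a conjecture’s having resisted proof for a century makes it no less likely to fall in the next decade than a fresh one.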
Nice.
I’ve been looking at postgraduate programmes in the philosophy of Artificial intelligence, (primarily but not necessarily in the UK). Does anyone have any advice or suggestions?
Why so narrow as to exclude good computer science, cognitive science, philosophy of mind, etc...programs from consideration?
No particular reason. I am looking at general Philosophy programmes and cognitive science as well.
I ask specifically about AI programmes because it’s a very specialised field and it is difficult to distinguish which programmes are worth doing (as certain institutions have started up ‘AI’ programmes that are little more than pre-existing modules rearranged to make money). I figure there are enough people involved in the field here that they would have relevant expertise.
No advice from me, sorry. But I am interested in what you’d be doing. To be succinct, do you want to write for a philosophy audience, or an AI researcher audience?
I’m going to Hacker School this summer, and I need a place to stay in NYC between approximately June 1 and August 23. Does anyone want an intrepid 20-year-old rationalist and aspiring hacker splitting the rent with them?
Also, applications for this batch of Hacker School are still open, if you’re looking for something great to do this summer.
Consider contacting the NYC LW email list.
After rereading the metaethics sequence, a possible reason occurred to me for why people can enjoy the artistic genre of tragedy. I think there’s an argument to be made along the lines of “watching tragedy is about not feeling guilty when you can’t predict the future well enough to see what right is.”
Grading is the bane of my existence. Every time I have to grade homework assignments, I employ various tricks to keep myself working.
My normal approach is to grade 5 homework papers, take a short break, then grade 5 more. It occurred to me just now that this is similar to the “pomodoro” technique so many people here like, except work-based instead of time-based. Is the time-based method better? Should I switch?
Anyway, back to grading 5 more homework papers.
I think using Pomodoros is more fun because you can do things like record how many assignments you grade per Pomodoro. Now you can keep track of your “high score” and try to break it. Competition is fun and worth leveraging for motivation, even if it’s with your past selves.
But doesn’t that make you inclined to not read as carefully or grade as thoroughly or not leave as many comments? “Oh whatever, that was mostly right. Yay, high score!”
If you’re at the point where you need to employ tricks to finish the grading at all, then I think this is unfortunately a secondary concern. Once you can consistently finish the grading, then I think you can start worrying about its quality.
See, I always worry that the easiest way to get through grading is to just give everyone A’s regardless of what they turned in. So I feel like you somehow have to factor in a reward for quality or that’s what your system will collapse into?
I would never be tempted to do that, but that comes from a strong desire to tell people when they’re wrong which is not necessarily a good thing overall.
I’ve known for a while that for every user there’s an RSS feed of their comments, but for some reason it’s taken me a while to get in the habit of adding interesting people in Google Reader. I’m glad I have.
(Effort in adding them now isn’t wasted, since when I move from Google Reader I’ll use some sort of tool to move all my subscriptions across at once to whatever I move to)
I should do this for more people than Quirrel
Trying to get a handle on the concept of agency. EY tends to mean something extreme, like “heroic responsibility”, where all the non-heroic rest of us are NPCs. Luke’s description is slightly less ambitious: an ‘agent’ is something that makes choices so as to maximize the fulfillment of explicit desires, given explicit beliefs. Wikipedia defines it as a “capacity to act”, which is not overly useful (do ants have agency?). The LW wiki defines it as the ability to take actions which one’s beliefs indicate would lead to the accomplishment of one’s goals. This is also rather vague.
Assuming that agency is not all-or-nothing, one should be able to measure the degree/amount/strength of agency. Is this different from, say, intelligence as an “ability to reason, plan, solve problems”? Are there examples of intelligent non-agents or non-intelligent agents? Assuming the two are correlated but not identical, how does one separate them? Is there a way to orthogonalize the two?
CFAR’s notion of agency is roughly “the opposite of sphexishness,” a concept named after the behavior of a particular kind of wasp:
So ants don’t have agency. The difference between intelligence and agency seems to me to vanish for sufficiently intelligent minds but is relevant to humans. Like ArisKatsaris, I think that for humans, intelligence is the ability to solve problems but agency is the ability to prioritize which problems to solve. It seems to me to be much easier to test for intelligence than for agency; I thought for a bit a while ago about how to test my own agency (and in particular to see how it varies with time of day, hunger level, etc.) but didn’t come up with any good ideas.
One sign of sphexishness in humans is chasing after lost purposes.
How about “agency” as the extent to which people are moved to action by deliberate thought and by preferences they’re aware of—as opposed to by habit, instinct, social expectations, or various unconscious drives?
That’s pretty much similar to Luke’s definition I guess.
It’s different in that it also chooses which problems to seek to solve, in accordance with one’s own self-aware preferences.
Lots of intelligent non-agents—a pocket calculator for example.
In HP:MoR, Harry mentioned that breaking conservation of energy allows for faster-than-light signalling. Can someone explain how?
Do you mind pointing out exactly where he says that?
Chapter 2: “You turned into a cat! A SMALL cat! You violated Conservation of Energy! That’s not just an arbitrary rule, it’s implied by the form of the quantum Hamiltonian! Rejecting it destroys unitarity and then you get FTL signalling! ”
Eliezer discussed this point in a reddit thread about a month ago—but I’m not qualified to judge how good his physics on this point are.
It’s not very good. Energy changing in time does not violate unitarity, so you cannot destroy pieces of the wave function, and you don’t get FTL in regular quantum mechanics. You do get FTL this way in general relativity, but this is outside Harry’s knowledge (because it’s outside Eliezer’s). To actually kill a part of the wave function, you need the subsystem to have complex energy. I cannot comment on the quantum computing part of it; it’s not my area.
Edit: I’ll have to look closer at his partial cancellation argument and see if it can work.
After some more thinking, I’m still having trouble following this logic:
Specifically, I am not sure in what sense he uses the word “branches”. If this is an MWI concept, then different branches do not cancel, since they do not interact. Maybe it means different additive terms in the wave function of some subsystem? But those correspond to different orthogonal eigenstates and so they don’t cancel out, either. Maybe it is meant in terms of constructive/destructive interference, only with the destructive part in one place not being compensated by the constructive part elsewhere? This interpretation at least makes sense if you associate branches with propagation paths, but I still have no clue how to use time-dependent energy states to terminate rather than displace (in a perfectly sub-light way) interference maxima and minima.
Maybe someone else can speculate more successfully.
As far as I know, it’s because breaking conservation of energy means that relativity is borked.
Let me explain. Conservation of energy is a logical consequence of the fact that experiments performed in different places or at different speeds turn out the same way. In other words, “how fast you are going doesn’t matter” → “conservation of energy”. Equivalently, “no conservation of energy” → “how fast you are going can change things”.
We believe relativity is true in large because of how speed and position are invariant in physics (iirc, this is the insight used to generate the theory of relativity in the first place). Once the reasons to believe in relativity go out the window, so does its baggage—specifically, the injunction against FTL.
As a trained (though non-practicing) physicist, I would like to point out that your comment varies between wrong and meaningless. Conservation of energy is a consequence of the time symmetry, i.e. the time variable not being explicitly present in the Lagrangian or Hamiltonian description of the system under consideration (see Noether’s theorem). There are perfectly good relativistic Lagrangians where energy is not conserved (usually because the system is not closed).
That describes conservation of momentum, if anything.
Also note that “global” energy is most emphatically not conserved in our expanding universe, and not even well-defined. All that is defined and (locally) conserved is the stress-energy tensor field.
There is also no injunction against FTL in either special or general relativity, though for different reasons. In SR FTL leads to time travel, while in GR it leads to the initial value problem being not well-posed, a rather technical point.
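For reference, the Noether’s-theorem point above in symbols (this is just the standard textbook statement):

\[
\frac{dH}{dt} \;=\; \frac{\partial H}{\partial t} \;=\; -\,\frac{\partial L}{\partial t},
\]

so the energy \(H\) is conserved exactly when the time variable does not appear explicitly in the Lagrangian; an open system with a time-dependent external field has \(\partial L/\partial t \neq 0\) and a non-conserved energy, with relativity left fully intact.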
I had some students complaining about test-taking anxiety! One guy came in and solved the last midterm problem 5 minutes after he had turned in the exam, so I think this is a real thing. One girl said that calling it something that’s not “exam” made her perform better. However, it seems like none of them had ever really confronted the problem? They just sort of take tests and go “Oh yeah, I should have gotten that. I’m bad at taking tests.”
Have any of you guys experienced this? If so, have you tried to tackle it head-on? It seems like there should be a handy tool-box of things to do when experiencing anxiety during a test. I personally don’t have this problem, so I have no idea. (I get a little nervous and take a minute to breathe and I’m fine. And avoid drinking coffee on exam days!)
Is this meant to imply that you didn’t previously think this is a real thing or that you hadn’t heard of it until now? It’s apparently a well-studied phenomenon, I think I know people who experience it, and it’s completely consistent with my current model of human psychology.
Nono, I believed it. I just didn’t want people commenting “your students are just complaining to weasel a better grade out of you,” because I had some people telling me that students sometimes try to befriend TAs and suck up to them. Though I guess it’s not that relevant that these particular students had it. I was just surprised at how bad it was. It’s almost like as soon as the test is over, you can think again? I sorta figured people would seek treatment for something that serious.
I think there’s a double typical mind fallacy here. You were surprised because your mind doesn’t work their way, and it doesn’t occur to them to do anything about it because they just think that’s what tests feel like. Also, an anxiety disorder is tantamount to a mild mental illness, and people still have a lot of hangups about seeking mental health services in general.
Yeah, I think you’re right, because when people say they get nervous before tests, I think, “Oh sure, I get nervous too!” But not to the point where I spend half the time sitting there, unable to write anything down.
I’m a bit concerned that a lot of the treatment options on that page are drugs. Is it really safe to drug people before their brain is supposed to do mathy things? Is it cheating? Do any of the people you know have any handy CBT-style rituals that help calm them down? I think from now on I’m also going to persuade professors to call exams “quizzes” or something.
Probably not more unsafe than drugging them at other times. As for performance… most anxiolytic substances impair mental function somewhat. It’s what they are notorious for (e.g. Valium and ethanol). Still, the effects aren’t strong enough that crippling anxiety wouldn’t be worse. On the other hand, a few things like phenibut and aniracetam could lead to somewhat increased performance even aside from their anxiolytic effects.
No. There isn’t (usually) a rule against it so it isn’t cheating. (Sometimes there are laws against prescription substances, but that is different. That makes you a criminal not a cheater!)
I guess I understand using drugs for other mental disorders (the persistent ones that interfere with more areas of life) but it weirds me out that we create this bizarre social construct called “tests” that give people crippling anxiety … and then we solve the problem with drugs. Instead of developing alternative models for testing people. (Although there are probably correlations and people with test anxiety might get it for other things as well?)
I got nothin’. Have you tried making an anonymous survey and surveying your Facebook friends? That’s what I would try.
I think this has to do with the difference between work and curiosity mode. In curiosity mode solving problems is much easier, but stress reliably kills it. Once the stress is gone, the answers come pouring out.
It’s extremely common with certain learning disabilities, like dyslexia and, to a lesser extent, dyscalculia. For many people, it’s the time limit, rather than the seriousness of the task itself, and removing the time limit permits them to finish the test without issue (frequently within the original time limit!).
In the class I TA for, the students can go to the professor’s office hours after the midterm / final, and if they can solve the problem there, they still get… half of the points? I wonder how that one affects test-taking performance.
Also, this whole thing seems to be annoyingly resistant to Bayesian updates… “Every time I’m anxious I perform bad, and now I’m worried about being too worried for this exam”, and, since performing badly is a very valid prediction in this state of mind, the worry is there to stay.
Maybe if the tests are called “quizzes” the students end up in the other stable state of “not being worried”?
I feel like it’s the students’ responsibility to calibrate their own personal correct amount of worry that it takes to make them study, regardless of what the thing is called? (Like if I say “This quiz is worth 50% of your grade,” they should be able to tell that it’s not really a quiz.) But at the same time, it sounds like some brains have this worry horizon where once they start worrying, then it’s all they can do. So we need to somehow calibrate the scariness of exams so that only a very small percentage of people fall off the worry horizon, because people who fail from not studying can just start studying. The stable state of not being worried is a good place! ^_^
This kind of reminds me of all of the (non-technical) articles about game addiction and how it’s in the designers’ best interest to keep everyone hooked but still high-functioning enough that we won’t outlaw WoW the way we outlaw harmful, addictive narcotics.
Brains are such a mess. ^_^
I watched an awesome movie, and now I’m coasting in far mode. I really like being in far mode, but is this useful? What if I don’t want to lose my awesome-movie high?
Are there some things that far mode is especially good for? Should I be managing finances in this state? Reading a textbook? Is far mode instrumentally valuable in any way? Or should I make the unfortunate transition back to near mode?
Based on the description at the LW wiki, it sounds like far mode is a good time to evaluate how risk-averse you’ve been and whether there are risky opportunities you should be taking that you previously weren’t taking because of risk-aversion.
How much do we know about reasoning about subjective concepts? Bayes’ law tells you how probable you should consider any given black-and-white no-room-for-interpretation statement, but it doesn’t tell you when you should come up with a new subjective concept, nor (I think) what to do once you’ve got one.
You may be interested in the literature on “concept learning”, a topic in computational cognitive science. Researchers in this field have sought to formalize the notion of a concept, and to develop methods for learning these concepts from data. (The concepts learned will depend on which specific data the agent encounters, and so this captures some of the subjectivity you are looking for.)
In this literature, concepts are usually treated as probability distributions over objects in the world. If you google “concept learning” you should find some stuff.
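To make “concepts as probability distributions” concrete, here is a minimal sketch in the spirit of that literature (loosely modeled on Tenenbaum-style concept learning; the hypothesis space and the size-principle likelihood below are illustrative assumptions, not any specific paper’s model):

```python
from fractions import Fraction

# A tiny, made-up hypothesis space of candidate concepts over 1..100.
hypotheses = {
    "even numbers":    {n for n in range(1, 101) if n % 2 == 0},
    "multiples of 10": {n for n in range(1, 101) if n % 10 == 0},
    "powers of two":   {1, 2, 4, 8, 16, 32, 64},
}

def posterior(data):
    """P(concept | examples) with a uniform prior and the 'size
    principle': each example is assumed drawn uniformly from the
    concept, so smaller consistent concepts get likelihood (1/|h|)^n."""
    scores = {}
    for name, h in hypotheses.items():
        if all(x in h for x in data):
            scores[name] = Fraction(1, len(h)) ** len(data)
        else:
            scores[name] = Fraction(0)
    total = sum(scores.values())
    return {name: float(s / total) for name, s in scores.items()}

# Three examples consistent with two concepts; the tighter one wins.
print(posterior([10, 20, 40]))  # "multiples of 10" dominates
```

The subjectivity enters through the data: an agent shown [2, 4, 8] instead would shift nearly all of its posterior to “powers of two”.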
“Subjective” seems uselessly broad. Can you give a more specific example?
Well, I guess that by “subjective concepts”, I mean every concept that doesn’t have a formal mathematical definition. So stuff like “simple”, “similar”, “beautiful”, “alive”, “dead”, “feline”, and so on through the entire dictionary.
The only theory-of-subjective-concepts I’ve come across is the example of bleggs and rubes. Suppose that, among a class of objects, five binary variables are strongly correlated with each other; then it is useful to postulate a latent variable stating which of two types the object is. This latent variable is the “subjective concept” in this case.
Think of subjective concepts as heuristics that help you describe models of the world. Evaluate those models based on their predictions. (Grounding everything in terms of predictions is a great way to keep your thinking focused. Otherwise it’s too easy to go on and on about beauty or whatever without ever saying anything that actually controls your anticipations.)
Have you read the rest of 37 Ways That Words Can Be Wrong?
Hazards of botched IT: cost overruns are nothing compared to what can go wrong when you actually use the software.
Software which can answer “is this obviously stupid?” would be a step towards FAI.
Toby Ord gave a Google Tech Talk on efficient charity and QALYs this March.
Does anyone here have thoughts on the x-risk implications of Bitcoin? Rebalancing is a way to make money off of high-volatility investments like Bitcoin (the more volatility, the more money you make through rebalancing). If lots of people included Bitcoin in their portfolios, and started rebalancing them this way, then the price of Bitcoin would also become less volatile as a side effect. (It might even start growing in price at whatever the market rate of return for stocks/bonds/etc. is, though I’d have to think about that.)
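To make the rebalancing claim concrete, here is a toy simulation of the underlying effect, sometimes called “Shannon’s demon”. The asset’s return distribution is made up for illustration and is not a model of Bitcoin:

```python
import math
import random

def shannons_demon(steps=1_000, weight=0.5, seed=0):
    """A hypothetical asset doubles or halves each step with equal
    probability, so buying and holding goes nowhere in log terms.
    Rebalancing a cash/asset portfolio back to a fixed weight each
    step harvests the volatility and grows wealth steadily."""
    random.seed(seed)
    log_hold = 0.0    # log wealth: all-in on the asset
    log_rebal = 0.0   # log wealth: rebalanced to `weight` each step
    for _ in range(steps):
        r = 1.0 if random.random() < 0.5 else -0.5   # +100% or -50%
        log_hold += math.log1p(r)
        # After each rebalance, only the `weight` fraction is exposed:
        log_rebal += math.log1p(weight * r)
    return math.exp(log_hold), math.exp(log_rebal)

hold, rebal = shannons_demon()
print(f"buy and hold: {hold:.3g}   rebalanced: {rebal:.3g}")
```

The flip side is that every rebalance is a trade, so fees and spreads eat some of the gain in practice, and the gain shrinks as the volatility does.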
So given that I could spread this meme on how you can get paid to decrease Bitcoin’s volatility, should I do it?
Why wouldn’t you?
It would take time and effort, and having Bitcoin be a legitimate alternative currency might have unforeseen negative consequences, e.g.
http://techcrunch.com/2013/04/13/beyond-the-bitcoin-bubble/
So I’m running through the Quantum Mechanics sequence, and am about 2⁄3 of the way through. Wanted to check in here to ask a few questions, and see if there aren’t some hidden gotchas from people knowledgeable about the subject who have also read the sequence.
My biggest hangup so far has been understanding when it is that different quantum configurations sum, versus when they don’t. All of the experiments from the earlier posts (such as Distinct Configurations) seem to indicate that configurations sum when they are in the “same” time and place. Eliezer indicates at some point that this is “smeared” in some sense, perhaps because all particles are smeared in space and time; therefore if two “particles” in different worlds don’t arrive at the same place at exactly the same time, the smearing will cause the tail ends of their amplitude distributions to still interact, resulting in a less perfect collision with results somewhat short of what would have happened in the perfect experiment.
The hangup becomes an issue, barring any of my own misunderstanding (which is of course likely), when he starts talking about macroscopic other worlds. He goes so far as to say that when a quantum event is “observed,” what really happens is that different versions of the experimenter become decohered with the various potential states of the particle.
Several things don’t seem quite right here. First, Eliezer seems to imply here that brains only work (to the extent that they can have beliefs capable of being acted on) when they work digitally, with at least some neurons having definite on or off states. What happens to the conservation of probability volume due to Liouville’s Theorem described in Classical Configuration Spaces? Or maybe I misunderstand here, and the probability volumes actually do become sharply concentrated in two positions. But then why is it not possible for probability volumes to become usually or always sharply concentrated in one position, giving us, for all practical purposes, a single world?
Backing up a bit, though. What keeps different worlds from interacting? Eliezer implies in Decoherence that one important reason decohered particles are such is a separation in space. What I fail to understand, if there is not some specified other axis, is why the claim stands that different but similar worlds (different only along that axis) fail to interact! According to his interpretation (or my interpretation of his interpretation) of quantum entanglement, your observation of a polarized particle at one end of a light-year limits the versions of your friend (who observed the entangled particle) that you are capable of meeting when you compare notes in the middle. But why don’t you just as easily meet any other version of your friend? What is the invisible axis besides space and time that decoheres worlds, if we meet at the same place and time no matter what we observe?
More importantly, what keeps neurons which are at the same space and time from interacting with their other-world counterparts, as if they were as real as their this-world self?
Unless I’m completely off here, couldn’t there be many fewer possible worlds than Eliezer suggests? In extremely controlled experiments, we observe decoherence on rather macroscopic levels, but isn’t “controlled” precisely the point? In most normal quantum interactions, isn’t there always going to be interference between worlds? And what if that interference by the nature of the fundamental laws just so happens to have some property (maybe a sort of race condition) that causes, usually, microscopic other worlds to merge? On average, if possible worlds become macroscopic enough, still-real interactions between the worlds become increasingly likely, and they are no longer “other worlds” but actually-interacting same-world, to the point where no two differently configured sets of neurons could ever observe differently.
I should stop here before I carry on any early-introduced fallacy to increasingly absurd conclusions. Would be very interested in how to resolve my confusion here.
I assume you mean this section:
He’s not exactly saying that brains only work digitally—they don’t; neuron activation isn’t only about electrical impulses—he’s just talking about one particular process that happens in the brain. At least, as far as I can tell.
They certainly don’t work only digitally, but the suggestion seems to be that for most brain states at the level of “belief” it is required that at least some neurons have definite states, if only in the sense of “neuron A is firing at some definite analog value.”
I don’t know anything about quantum computing, so please tell me if this idea makes sense… if you imagine many-worlds, can it help you develop better intuitions about quantum algorithms? Anyone tried that? Any results?
I assume an analogy: in mathematics, proper imagination can get you some results faster, even if you could get the same results by computation. For example, it is easier to imagine a “sphere” than a “set of points with distance D from a given center C”. You can see that an intersection of a sphere and a plane is a circle faster than you can solve the corresponding equations. Even if computationally the sphere is the same as the given set of points, imagination runs much faster on the visual model.
Analogously, the Copenhagen interpretation and the many-worlds interpretation should give the same results. Yet is it possible that one of them would be more imagination-friendly? Would it be possible to immediately “see” in one model results which have to be mathematically calculated in the other? Could one of these models then be a comparative advantage for a quantum programmer?
To avoid misunderstanding: I don’t suggest using imagination instead of computation. I only suggest using imagination to guess a result, and then using a proper mathematical proof to confirm it. Just as “the intersection of a sphere and a plane is either nothing, or a point, or a circle” can be translated to equations and verified analytically, but is much easier to remember this way.
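For instance, the sphere-and-plane claim is easy to verify once translated (standard geometry, spelled out for reference): with the sphere \(S = \{x \in \mathbb{R}^3 : \lVert x - C \rVert = D\}\) and the plane \(P = \{x : n \cdot x = d\}\) for a unit normal \(n\), let \(\rho = |n \cdot C - d|\) be the distance from the center to the plane. Then

\[
S \cap P =
\begin{cases}
\varnothing & \text{if } \rho > D,\\
\text{a single point} & \text{if } \rho = D,\\
\text{a circle of radius } \sqrt{D^2 - \rho^2} & \text{if } \rho < D,
\end{cases}
\]

which the sphere picture gives you instantly, while the algebra takes a few lines.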
Are you familiar with the Quantum Bomb Tester?
Are you aware that David Deutsch is (1) the loudest proponent of MWI and (2) the inventor* of the quantum computer? Moreover, he claimed that MWI led him there. He also predicted that quantum computers would convince everyone else of MWI. So far, that claim doesn’t look very plausible.
I am skeptical of the possibility of many worlds contributing to imagination. I prefer the phrase “no collapse” to the phrase “many worlds” because there are a lot of straw men associated with the latter phrase. But phrasing it as a negative shows that it’s really a subset of Copenhagen QM, and thus shouldn’t require more or different imagination. You might say that the first incarnation of many worlds is Schrödinger’s Cat, which everyone talks about, regardless of interpretation.
There is some discussion of the fruitfulness here; in particular Scott Aaronson says “I think Many-Worlds does a better job than its competitors...at emphasizing the aspect of QM—the exponentiality of Hilbert space—that most deserves emphasizing.”
* Manin, Feynman, and maybe other people could claim that title, too, but I think they were all independent. Moreover, I think Deutsch was the first person to produce a quantum algorithm that he could prove was better than a classical algorithm; he exploited QM rather than saying it was hard. It is this exploitation that he attributes to MWI.
Deutsch discusses his predecessors, but he didn’t know about Manin. I think Manin’s contribution is all in the 3 paragraph Appendix (p25).
I didn’t know about David Deutsch, thanks for the information!
Then perhaps the only advantage is that you don’t have to waste your time worrying “what if my proposed solution is already so big that the wavefunction will collapse before it computes the result”. But to get this advantage, you don’t really have to believe in MWI. It’s enough to profess belief in collapse, but ignore the consequences of that belief while designing algorithms, which is something humans excel at.
Shameless self-promotion:
http://sthomme.wordpress.com/
Recently, for a philosophy course on (roughly) the implications of AI for society, I wrote an essay on whether we should take fears about AI risks seriously, and I had the thought that it might be worth posting to LW discussion. Is there/would there be interest in such a thing? TBH, there’s not a great deal of original content, but I’d still be interested in the comments of anyone who is interested.
LW Women: Submissions on Misogyny was moved to main, but the article doesn’t show up as New, Promoted, or Recent.
I’m not sure if this is the right place for this, but I’ve just read a scary article that claims that “The financial system as a whole functions as a hostile AI”, and I was wondering what LW thinks of that.
There have been various threads in the past about whether corporations can be considered AIs. The general consensus seems to be “not in the sense of ‘AI’ that this community is concerned with.”
Sudden Clarity.
(The OB memes wasn’t me.)
In Anki, LaTeX is rendered too large. Does anyone know an effective fix?
EDIT: I found one. In Anki, LaTeX is rendered to an image and from then on treated as one. Adding an image-scaling rule to a new line of the “Styling” section of the “Card Type” for whatever Note you’re using rescales all the images in that card type. So provided you don’t use LaTeX and images on the same Note, this fixes all your problems.
What can you usefully do with underutilised processing power? (E.g. spare computer and server time).
So far the best I can come up with is running Folding@home. But it seems like there should be a way to sell server space etc.
Remember the power consumption entailed: http://www.gwern.net/Charity%20is%20not%20about%20helping
Damn, I’m embarrassed to say the tradeoff had never occurred to me. Even after doing research on other similar types of power consumption. I guess I had just assumed processor power was a fixed overhead.
Site suggestion:
When somebody attempts to post a comment with the words “why”, “comment”, and “downvoted”, it should open a prompt directing them to an FAQ explaining most likely reasons for their downvote, and also warning them prior to actually submitting the comment that it’s likely to be unproductive and just lead to more downvotes.
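A sketch of the check being suggested (hypothetical code, not anything LW actually runs):

```python
TRIGGER_WORDS = ("why", "comment", "downvoted")

def should_show_downvote_faq(draft: str) -> bool:
    """True when a draft comment contains all the trigger words, in
    which case the UI would show the 'common reasons for downvotes'
    FAQ and a warning before allowing the comment to be submitted."""
    text = draft.lower()
    return all(word in text for word in TRIGGER_WORDS)

assert should_show_downvote_faq("Why was my comment downvoted?")
assert not should_show_downvote_faq("Great post, thanks!")
```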
(Personally I think this site needs to have a little more patience with people asking these questions, as they almost always come from new users who are still getting accustomed to the community norms, but that’s just me.)
This suggestion conflicts with the advice of the Welcome Thread, which says:
And yet I persistently see requests for explanations downvoted. The advice of the welcome thread does not actually correspond to downvoting behavior.
If the advice of the welcome thread doesn’t match the actual LW norms, we should change either the welcome thread or the norms.
Do the requests remain downvoted? In my experience, they may be downvoted for a while, but then get voted back up.
It depends a bit on why the original post was downvoted. Asking for an explanation when the problem is obvious, or when the comment was on a forbidden topic, tends not to get back to neutral.
Obvious to regular users != obvious to new users.
Monkeymind knew why he was being downvoted.
Edit: But I agree with your point that many community norms that will get one downvoted are not accessible to new members.
The Welcome Thread doesn’t set official policy.
In my opinion, downvoting is necessary for forum moderation, and people don’t downvote enough. It is rather easy to get a lot of karma by simply writing a lot, because the average karma of a comment (this is just my estimate) is around 1. I would prefer if the average was closer to 0.
Asking about downvoting is ok, per se (assuming that the person does not do it with every single damned comment which fell to −1 temporarily). But sometimes it seems to contain a connotation that “you should not downvote my comments unless you explain why”. Which I completely disagree with and consider it actively harmful, so I automatically downvote any comment that feels like this. (Yes, there is a chance that I misunderstood the author’s intentions. Well, I am not omniscient, and I don’t want to get paralyzed by my lack of omniscience.)
This and other variants of it have been tried on other forums before, with no real change in new user behavior.
I just downvote people complaining about downvotes
Do you make a distinction between complaining and asking?
A little? Most “asking about downvotes” comments are functionally indistinguishable from complaints, although phrased as questions, in the sense of “I don’t understand why I’m getting downvoted (implying it doesn’t make sense and you are wrong to do so)”. If someone posts an in-depth post and it gets downvoted and they ask which specific parts of their giant post were bad, I give that more leeway.
If I see a question about downvotes that’s below 0, I’m going to upvote it.
I don’t think I need to ask why that got downvoted.
MWI sat on a wall,
MWI had a great fall.
All the king’s comments and all the king’s men
Couldn’t put all the worlds together again.