I’ve posted it here.
I think you should post this as its own thread in Discussion.
This has been proposed before, and on LW is usually referred to as “Oracle AI”. There’s an entry for it on the LessWrong wiki, including some interesting links to various discussions of the idea. Eliezer has addressed it as well.
See also Tool AI, from the discussions between Holden Karnofsky and LW.
Count me surveyed.
Interesting. I wonder to what extent this corrects for people’s risk-aversion. Success is evidence against the riskiness of the action.
Having circular preferences is incoherent, and being vulnerable to a money pump is a consequence of that.
I knew that if I had 0.95Y I would trade it for (0.95^2)Z, which I would in turn trade for (0.95^3)X; so in effect I’d be trading 1X for (0.95^3)X, which I’m obviously not going to do.
This means that you won’t, in fact, trade your X for .95Y. That in turn means that you do not actually value X at .9Y, and so the initially stated exchange rates are meaningless (or rather, they don’t reflect your true preferences).
Your strategy requires you to refuse all trades at exchange rates below the money-pumpable threshold, and you’ll end up only making trades at exchange rates that are non-circular.
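To put numbers on it, here’s a minimal sketch in Python of the cycle described above, assuming the same 0.95 exchange rate at every step:

```python
# A toy walk through the cycle above: each trade keeps 95% of the value
# (measured in units of X), so going X -> Y -> Z -> X strictly loses value.
rate = 0.95        # exchange rate per trade, as in the example above
value_in_x = 1.0   # value of current holdings, measured in units of X

for trade in ["X -> Y", "Y -> Z", "Z -> X"]:
    value_in_x *= rate
    print(f"after {trade}: {value_in_x:.4f} X-equivalents")

# Net effect: 1 X ends up as 0.95**3 = 0.857375 X, i.e. the very trade
# the comment above refuses to make.
```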
Judging from the comments this is receiving on Hacker News, this post is a mindkiller. HN is an audience more friendly to LW ideas than most, so this is a bad sign. I liked it, but unfortunately it’s probably unsuitable for general consumption.
I know we’ve debated the “no politics” norm on LW many times, but I think a distinction should be made when it comes to the target audience of a post. In posts aimed at contributing to “raising the sanity waterline”, I think we’re shooting ourselves in the foot by invoking politics.
I like the combination of conciseness and thoroughness you’ve achieved with this.
There are a couple of specific parts I’ll quibble about:
Therefore the next logical step is to use science to figure out how to replace humans by a better version of themselves, artificial general intelligence.
“The Automation of Science” section seems weaker to me than the others, perhaps even superfluous. I think the line I’ve quoted is the crux of the problem; I highly doubt that the development of AGI will be driven by any such motivations.
Will we be able to build an artificial general intelligence? Yes, sooner or later.
I assign a high probability to the proposition that we will be able to build AGI, but I think a straight “yes” is too strong here.
Out of curiosity, what are your current thoughts on the arguments you’ve laid out here?
I agree. I’ve noticed an especially strong tendency to premature generalization (including in myself) in response to people asking for advice. Tell people what your experiences were, not (just) the general conclusions you drew from them.
Is Omega even necessary to this problem?
I would consider transferring control to staply if and only if I were sure that staply would make the same decision were our positions reversed (in this way it’s reminiscent of the prisoner’s dilemma). If I were so convinced, then shouldn’t I consider staply’s argument even in a situation without Omega?
If staply is in fact using the same decision algorithms I am, then he shouldn’t even have to voice the offer. I should arrive at the conclusion that he should control the universe as soon as I find out that it can produce more staples than paperclips, whether it’s a revelation from Omega or the result of cosmological research.
My intuition rebels at this conclusion, but I think it’s being misled by heuristics. A human could not convince me of this proposal, but that’s because I can’t know we share decision algorithms (i.e. that s/he would definitely do the same in my place).
This looks to me like a prisoner’s dilemma problem where expected utility depends on a logical uncertainty. I think I would cooperate with prisoners who have different utility functions as long as they share my decision theory.
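To illustrate the kind of comparison I mean, here’s a toy expected-utility sketch. Everything in it is assumed for illustration (the 50/50 probabilities, the payoff numbers, and the symmetric setup in which either maximizer might have started in control); it’s not meant as a model of the original problem:

```python
# Toy sketch (all numbers assumed): a paperclip maximizer choosing, before it
# knows which resource the universe favors, between two policies that a staple
# maximizer running the same decision theory would mirror.
p_clips_favored = 0.5       # logical uncertainty: does the universe favor clips or staples?
p_i_start_in_control = 0.5  # assumed: either maximizer might have started in control
clips_if_favored = 10.0     # clips I make running a clip-friendly universe
clips_if_unfavored = 4.0    # clips I make running a staple-friendly universe

# Policy "hand over": whichever maximizer's resource is favored gets control.
# Staply mirrors the policy, so I end up in control exactly when clips are favored.
ev_hand_over = p_clips_favored * clips_if_favored

# Policy "never hand over": I keep control only if I started with it,
# and staply (mirroring the policy) never hands it to me.
ev_keep = p_i_start_in_control * (
    p_clips_favored * clips_if_favored
    + (1 - p_clips_favored) * clips_if_unfavored
)

print(f"hand over: {ev_hand_over}, keep: {ev_keep}")  # 5.0 vs 3.5
```

Under these made-up numbers the hand-over policy wins, which is the sense in which I’d “cooperate” with an agent that shares my decision theory but not my utility function.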
(Disclaimers: I have read most of the relevant LW posts on these topics, but have never jumped into discussion on them and claim no expertise. I would appreciate corrections if I misunderstand anything.)
I dunno, I think it is. It took me several hours of reflection to realize that it could be framed in those terms. The show didn’t do any breaking.
Yes, thanks. I wanted to use strikethrough but a) I couldn’t figure out how to do it in LW’s markdown and b) it wouldn’t work anyway if you copy/paste to rot13.com like I do.
I mostly agree with you. In particular I really liked that Znqbxn’f jvfu jrag fb sne nf gb erjevgr gur havirefr. Gur fbhepr bs ure rzbgvbaf orvat sbe gur zntvpny tveyf naq gur pehrygl bs gur onetnva gurl znqr, V jnf npghnyyl n yvggyr jbeevrq va gur yrnq-hc gb gur svanyr gung ure jvfu jbhyqa’g or zbzragbhf rabhtu.
Ng gur fnzr gvzr, gubhtu gur jvfu raqrq hc ovt rabhtu gb or n fngvfslvat raq, V guvax vg’f cerggl rnfl gb jbaqre jul fur pbhyqa’g tb shegure. Gur arj havirefr vf arneyl vqragvpny gb gur byq bar, evtug qbja gb vaqvivqhny crbcyr. Gur zntvpny tveyf ab ybatre unir gb or chg vagb fhpu n ubeevoyr ab-jva fvghngvba, ohg gurl fgvyy unir gb evfx gurve yvirf. Naq sbe gur abezny crbcyr, gurer’f ab creprcgvoyr punatr ng nyy.
Gur bayl ceboyrz V unq jvgu guvf vf Xlhorl’f pynvz gb gehr travr-yvxr cbjref. Ryvrmre jebgr n cbfg frireny lrnef ntb nobhg guvf: http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/. Gehr travr cbjref znxr n fgbel vzcbffvoyr gb ernfba nobhg. Gur ceboyrz vf gung jvfuvat vf n irel vyy-qrsvarq vqrn. Jung vs bar jvfurq gb or vzzbegny? Gung jvfu vf irel pybfr gb Znzv’f, ohg fgevpgyl fhcrevbe. Ubj pbhyq gung jbex? Jbhyq gung fnir ure sebz gur sngr bs orpbzvat n jvgpu? Gur fgbel pna’g fheivir fbzrguvat yvxr gung. Vg’f whfg nf rnfl gb fvzcyl jvfu rirelbar vzzbegny vs gung jbexrq. Jbhyq Xlhorl tenag fbzrguvat yvxr n jvfu sbe vzzbegnyvgl va gur snfuvba bs Ryvrmre’f “hafnsr travr”, ol fhoiregvat gur vagrag oruvaq gur jvfu? Sebz jung jr frr va gur fubj, nyzbfg pregnvayl abg.
V guvax vg dhvgr oryvrinoyr gung Znzv jbhyqa’g guvax bs guvf nf fur ynl qlvat. Creuncf gur fubeg-fvtugrqarff bs Xlbxb’f naq Ubzhen’f jvfurf pna or fbzrjung cynhfvoyl whfgvsvrq ol gurve genhzn naq gurve lbhgu. Ohg jvgu gurvef, naq rfcrpvnyyl jvgu Fnlnxn’f, V srry yvxr lbh unir ab pubvpr ohg gb vaibxr n ybg bs “orpnhfr gur jevgref fnvq fb”. Fnlnxn naq Znqbxn fcrag dhvgr n ybg bs gvzr pbafvqrevat gur cebfcrpg bs orpbzvat znubh fubhwb naq guvaxvat nobhg jvfurf.
Gurer ner, bs pbhefr, yvzvgf orfvqrf gur tveyf’ vzntvangvbaf. Gur jvfu zhfg nyfb pbzr sebz gurve fgebatrfg srryvatf, juvpu va gurbel urycf rkcynva gur fznyy fpnyr bs gur jvfurf gurl pubfr (V’z ybbxvat ng lbh, Fnlnxn). Naq juvyr vg’f gehr gurfr punenpgref nera’g ZbE!Uneel, gurer’f fbzr cerggl rtertvbhf snvyher gb fuhg hc naq zhygvcyl. (Guvf vf unys bs gur ernfba V chg n fznyy qvfpynvzre abg gb tb vagb guvf fubj rkcrpgvat n fgebat qbfr bs engvbanyvgl)
Nyfb, jr yrnea sebz Xlhorl gung abg rirelbar’f jvfurf ner perngrq rdhny; gur “fvmr” bs gur jvfu qrcraqf ba lbhe cbjre nf n zntvpny tvey. (Qvq Xlhorl gryy Fnlnxn guvf rkcyvpvgyl va gur pbairefngvba jurer ur erirnyf gung Znqbxn unf terng cbgragvny cbjre? V frrz gb erzrzore uvz fnlvat gura gung gur jvfu fur pbhyq znxr jbhyq or uhtr). Vg’f pyrneyl abg nf fvzcyr nf whfg “jvfu sbe nalguvat”, gura. V guvax Xlhorl jbhyq unir gb whfg ershfr gb tenag fbzrguvat yvxr gur vzzbegnyvgl jvfu, be nalguvat ryfr gung jbhyq pyrneyl whfg oernx guvatf. Abg rabhtu cbjre.
Vg’f gur nobir genva bs gubhtug gung yrnqf zr gb pbapyhqr gung juvyr Xlhorl tyvoyl gryyf gur cbgragvny erpehvgf gung gurl pna jvfu sbe nalguvat ng nyy, ur’f bapr ntnva qrprvivat gurz. Vapvqragnyyl, n jvfu bs gur zntavghqr bs Znqbxn’f zhfg unir orra jryy orlbaq jung rira Xlhorl gubhtug jnf jvguva uvf cbjref. Vs gur Vaphongbef gubhtug gung erjevgvat gur ehyrf bs gur havirefr jrer cbffvoyr gura vg qbrfa’g znxr zhpu frafr gung gurl’q snvy gb gel gb qb guvf gurzfryirf. Rira zber rkcyvpvgyl, VVEP ur rkcerffrf dhvgr fbzr fhecevfr jura fur svanyyl znxrf ure pbagenpg.
V ubcr gur nobir nqrdhngryl rkcynvaf zl cevbe pbzzrag. Sebz urer ba ner zl gubhtugf ba gur raqvat, naq jul V ernyyl yvxrq vg rira gubhtu V guvax vg’f cerggl aba-engvbany (vgf aba-engvbanyvgl orvat gur bgure unys gur ernfba sbe zl qvfpynvzre).
Fb univat ernfbarq guhf sne, jr pbzr gb Znqbxn. Znqbxn, nybar nzbat nyy zntvpny tveyf, npghnyyl qbrf unir gur cbjre gb punatr guvatf ba n tenaq fpnyr. Fb ubj qbrf ure jvfu fgnpx hc? V unq gb cbaqre sbe n juvyr nsgre V svavfurq gur fubj va beqre gb qrpvqr ubj V sryg nobhg vg. Gehr, ure jvfu yvgrenyyl punatrq gur ehyrf bs gur havirefr. Ohg ba gur bgure unaq, pbafvqre gur ulcbgurgvpny jbeyq jurer Xlhorl vf abg n ylvat onfgneq^U^U^U^U^U^U pbzcrgrag tnzr gurbevfg naq gurer’f ab ubeevoyr sngr va fgber sbe gur tveyf. Bhe jbeyq, zber be yrff. Tvira n jvfu, gurer ner na njshy ybg bs bgure ubeevoyr guvatf gung pbhyq fgnaq gb punatr. Gur erfhyg bs Znqbxn’f jvfu bayl oevatf hf hc gb gur yriry bs gung jbeyq, pbzcyrgr jvgu nyy vgf qrngu naq zvfrel. Vf gurer ab jvfu gung pbhyq unir vzcebirq guvatf zber guna Znqbxn’f qvq? V frr ab ernfba gb oryvrir gung.
Ohg vf vg gbb unefu ba gur fubj gb gel gb ubyq vg gb fbzrguvat yvxr n genafuhznavfg fgnaqneq? Fher, V’q yvxr gb frr ure qb fbzrguvat ernyyl nzovgvbhf jvgu gur shgher bs uhznavgl, ohg V’z abg gur bar jevgvat vg. Whfg orpnhfr V’z crefbanyyl vagrerfgrq va genafuhznavfz rgp fubhyqa’g yrnq zr gb qvat gur fubj hasnveyl.
Lrg V srry yvxr gur fubj vgfrys qbrf yvir hc gb n uvture fgnaqneq sbe ernyvfz/engvbanyvgl guna gur infg znwbevgl bs navzr. Gur rcvfbqr jurer Xlhorl rkcynvaf gur uvfgbel bs gur Vaphongbef’ pbagnpg jvgu uhznavgl ernyyl fgehpx zr. Vagryyvtrag nyvraf gung ner abg bayl vagryyvtrag, ohg npghnyyl vauhzna? Abg gur fbeg bs guvat lbh rkcrpg bhg bs znubh fubhwb. Guvf fubj unf zber fpv-sv pubcf guna zbfg bs gur navzr gung npghnyyl trg pynffvsvrq nf fpv-sv. Nqq gb gung gur trahvar rssbeg gb unir gur punenpgref naq gurve npgvbaf znxr frafr naq srry ernyvfgvp, naq gur (V guvax) boivbhf snpg gung gur jevgref xarj ubj qvssvphyg jvfuvat jbhyq or gb jbex vagb gur fgbel va n jnl gung znxrf frafr. Guvf vf gur fghss gung grzcgrq zr gb cbfg n erpbzzraqngvba sbe guvf fubj ba YJ.
Gur irel raq fbeg bs gbbx zr ol fhecevfr, va pbagenfg gb nyy guvf. Qhevat gur frdhrapr va juvpu Znqbxn “nfpraqf”, V engure rkcrpgrq gur erfhyg gb ybbx zber… qvssrerag. Rira tenagvat gung ure jvfu jnf gur znkvzhz fur pbhyq unir qbar, V sbhaq vg fhecevfvat gung gur raq erfhyg jbhyq or gur fnzr jbeyq vafgrnq bs bar jvgu n irel qviretrag uvfgbel. Naq jurer qvq gurfr arj rarzvrf fhqqrayl fubj hc sebz, abj gung gurer ner ab jvgpurf?
Guvaxvat nobhg vg zber, V neevirq ng na vagrecergngvba gung ynvq gb erfg zl harnfr jvgu gur raqvat. Gur xrl gb vg jnf gur arj rarzvrf, jubfr pbairavrag nccrnenapr yrsg vagnpg rira gur arprffvgl sbe gur tveyf gb svtug. V guvax gung sebz gur ortvaavat gur jevgref jrer nvzvat sbe n fhogyr nffnhyg ba gur sbhegu jnyy.
Znqbxn’f jvfu jnf gb ghea ure fubj vagb n abezny znubh fubhwb.
I nearly posted exactly this earlier today.
It’s an excellent show, though don’t expect too much rationality. Madoka is no HP:MoR, but since there is very little rationality-relevant content in anime it does stand out.
For me it was a case of two independent interests unexpectedly having some crossover. As a fan of SHAFT (the animation studio) and mahou shoujo in general, it was a given I was going to watch Madoka. Then fhcreuhzna vagryyvtraprf naq vasbezngvba-nflzzrgevp artbgvngvba?
In a classic mahou shoujo setup like this, with magical powers and wish-granting etc, an obvious-to-LWers objection will be “Why doesn’t someone just wish for no one to suffer/die ever again?”. Certainly MoR!Harry would have handled this world a lot differently. I was expecting to just have to live with this oversight in an otherwise impressively coherent setting. But I think by the end of the show even that can be justified if you really want to, based on Xlhorl’f ercrngrq qrprcgvbaf (naq gurersber gur pbzcyrgr ynpx bs perqvovyvgl bs uvf pynvzf bs hayvzvgrq cbjre gb tenag jvfurf), gur rkgerzryl lbhat cebgntbavfgf, naq gur nccnerag “ehyr” gung gur zntavghqr bs gur jvfu qrcraqf ba gur qrcgu bs srryvat naq qrfver sbe vg. Madoka is not MoR!Harry, after all.
As one of the 83.5%, I wish to point out that you’re misinterpreting the results of the poll. The question was: “Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?” This is not the same as “unfriendly AI is the most worrisome existential risk”.
I think that unfriendly AI is the most likely existential risk to wipe out humanity. But I think that an AI singularity is likely farther off than 2100. I voted for an engineered pandemic, because that and nuclear war were the only two risks I thought decently likely to occur before 2100, though even for those a >90% wipeout of humanity is still quite unlikely.
edit: I should note that I have read the sequences and it is because of Eliezer’s writing that I think unfriendly AI is the most likely way for humanity to end.
I just took the survey. I was pretty sure I remembered the decade of Newton’s book, but I was gambling on the century and I lost.
I think quibbles over definitions and wording of most of the probability questions would change my answers by up to a couple of orders of magnitude.
Lastly, I really wanted some way to specify that I thought several xrisks were much more likely than the rest (for example, [nuclear weapons, engineered pandemic] >> others).
My central objection is that this feels like a very un-LessWrongish way to approach a problem. A grab bag of unrelated and unsourced advice is what I might expect to see on the average blog.
Not only is there basically no analysis of what we’re trying to do and why, but the advice is a mixed bag. If one entry on the list completely dominates most of the others in terms of effectiveness (and is a prerequisite to putting the others to good use), I don’t expect it to be presented as just another member of the list. A few other entries on the list I consider to be questionable advice or based on mistaken assumptions.
Upon reread I fear this comes across as much harsher criticism than I intend it to be, because I really do think this is one of the most valuable skills to be cultivated. It’s also a thorny problem that attracts a lot of bullshit, being particularly vulnerable to generalization from one example. I’m glad Lukeprog posted this.
Edit: Grouchymusicologist has already covered silly grammar-nazism, passives, and Strunk and White, complete with the Languagelog links I was looking for.
25. Write like you talk. When possible, use small, old, Germanic words.
I think this one should be deleted. The first sentence of it is wrong as written, but the idea behind it is expressed clearly and sufficiently in #26 anyway. People do not talk in grammatical, complete sentences.
As for the second half, do you really look up etymologies as you write? I have only the vaguest sense of the origins of the vast majority of words in English, and this despite taking 5 years of French in school. This advice doesn’t look like it was actually meant to be followed in any practical sense, and I would need some convincing that it’s even a good idea.
14. Almost always list things in threes, in ascending order of the word length of the list item.
This advice likewise seems arbitrary and unmotivated.
Is this not kosher? The minimum karma requirement seems like an anti-spam and anti-troll measure, with the unfortunate collateral damage of temporarily gating out some potentially good content. This post seems to me to be clearly good content, and my suggestion to MazeHatter in the open thread that it deserved its own thread was upvoted.
If that doesn’t justify skirting the rule, I can remove the post.