This is interesting. People who are vulnerable to the donor illusion either have some of their money turned into utilons, or are taught a valuable lesson about the donor illusion, possibly creating more utilons in the long term.
This is useful to me as I'll be attending the March workshop. If I successfully digest any of the insights presented here then I'll have a better platform to start from. (Two points in particular: the material on the parasympathetic nervous system, which I'd basically never heard of before, and the connection between "epistemic rationality" and "knowing about myself", which is obvious in retrospect.)
Thanks for the write-up!
And yes, I’ll stick up at least a brief write-up of my own after I’m done. Does LW have an anti-publication-bias registry somewhere?
There’s probably better stuff around, but it made me think of Hanson’s comments in this thread:
just because a computer is doing something that a human could not do without understanding does not mean the computer must be understanding it as well
I think linking this concept in my mind to the concept of the Chinese Room might be helpful. Thanks!
More posts like this please!
As part of Singularity University’s acquisition of the Singularity Summit, we will be changing our name and …
OK, this is big news. Don’t know how I missed this one.
Appoint a chief editor. The chief's most important job would be to maintain a list of what most urgently needs adding or expanding in the wiki, and to post a monthly Discussion post reminding people about these. (Maybe choosing a different theme each month and listing a few requested edits in that category, together with a link to the wiki page that contains the full list.)
When people make these changes, they can add a comment, and the chief editor (or some other high-status figure) will respond with heaps of praise.
People will naturally bring up other topics they'd like to see on the wiki, or general comments about the wiki. The chief editor should take note of these and, where relevant, raise them with the appropriate people (e.g. the programmers).
Do you think "ugh" should be listed as a possible response to survey questions? (Or, equivalently, a check box that says "I've left some answers blank due to an ugh field rather than due to not reading the question". Not possible with the current LW software, just brainstorming.)
This might be helpful—thanks.
My answer for Exercise would be “I am trying this hack right now and so the results haven’t come in yet” (so I answered “write-in”).
I answered “I feel I should try” for lukeprog’s algorithm, but it’s really more of a “I’ll put it on my list of hacks to try at some point, but with low priority as there’s a whole bunch of others I should try first”.
I like the title too, especially as it gives no information about what the survey is going to be about. (The results might still be distorted, as people's productivity experiences might correlate with how much time they spend filling in surveys on LW… but I'm not sure there's much we can do about that.)
and if the AI can tell if it's in a simulation vs the real world then it's not really a test at all.
The AI would probably assign at least some probability to "the humans will try to test me first, but do a poor job of it, so I can tell whether I'm in a sim or not".
If I understand Will’s response correctly (under “Earmarking”), it’s best to think of GWWC, 80K, EAA and LYCS as separate organizations (at least in terms of whose money will be used for what, which is what really matters). I don’t know if this addresses your concern though.
I admit it makes the actual physical donation process look slightly clunky (no big shiny donate button), but my impression is that they're not really targeting casual donors, so this may not be much of a problem.
This is really detailed, and exceeded my expectations! Thank you!
Oh wow, totally wasn’t expecting you to go ahead and answer that particular list of questions. Thanks for being so proactive!
Questions 7-11 aren't really relevant to FHI. Question 16 is relevant (at least the "are there other orgs similar to you?" part), but I'm guessing you'd answer no to that?
The other answers are helpful, thanks!
Actually, the relevant thing isn't whether it's superlinear but whether a large AI/firm is more innovative than a set of smaller ones with the same total size. I was assuming that the latter would be linear, but it's probably actually sublinear, as you'd expect different AIs/firms to redundantly research the same things.
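To make the comparison concrete, here's a toy numerical sketch (entirely hypothetical: the "innovation rate = size^alpha" form and the exponents are assumptions for illustration, and redundant research between the small AIs isn't modelled):

```python
# Toy model (hypothetical): each AI/firm's innovation rate = size ** alpha.
# Compare one AI of size 10 against ten AIs of size 1 (same total size).
for alpha in (0.8, 1.0, 1.2):
    one_big = 10 ** alpha
    ten_small = 10 * (1 ** alpha)  # redundancy between the ten is ignored here
    print(f"alpha={alpha}: one big AI {one_big:.2f} vs ten small AIs {ten_small:.2f}")
```

Only in the alpha > 1 case does the single large AI out-innovate the equally-sized group, which is the version where "trade away every innovation and still outgrow everyone" could work.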
Big thank you to Hanson for helping illuminate what it is he thinks they’re actually disagreeing about, in this comment:
Eliezer, it may seem obvious to you, but this is the key point on which we’ve been waiting for you to clearly argue. In a society like ours, but also with one or more AIs, and perhaps ems, why would innovations discovered by a single AI not spread soon to the others, and why would a non-friendly AI not use those innovations to trade, instead of war?
Just a thought: given a particular state-of-the-art, does an AI’s innovation rate scale superlinearly with its size? If it does, an AI could go something like foom even if it chose to trade away all of its innovations, as it would stay more productive than all of its smaller competitors and just keep on growing.
The analogy with firms would suggest it’s not like this; the analogy with brains is less clear. Also I get the sense that this doesn’t correctly describe Yudkowsky’s foom (which is somehow more meta-level than that).
the only person so far to actually answer the goddamn prompt
What’s worse is I wasn’t even consciously aware that I was doing that. I’ll try and read posts more carefully in the future!
OK—I wasn’t too sure about how these ones should be worded.
Any advice on how to set one up? In particular, how to add entries to it retrospectively. I was thinking of searching the comments database for things like "I intend to", "guard against", "publication bias", etc., and manually finding the relevant ones. This is somewhat laborious, but the effect I want to avoid is "oh, I've just finished my write-up (or am just about to), now I'll go and add the original comment to the anti-publication-bias registry".
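A minimal sketch of the kind of search I mean (purely hypothetical: it assumes the comments are available as a JSON dump of objects with "author", "body" and "permalink" fields, which may not match whatever export is actually available):

```python
import json
import re

# Hypothetical export format: a JSON list of {"author", "body", "permalink"} objects.
PHRASES = ["I intend to", "guard against", "publication bias"]
pattern = re.compile("|".join(re.escape(p) for p in PHRASES), re.IGNORECASE)

with open("lw_comments.json") as f:
    comments = json.load(f)

# Keep only comments containing one of the phrases; manually filtering
# these candidates is still the laborious part.
candidates = [c for c in comments if pattern.search(c["body"])]
for c in candidates:
    print(c["author"], c["permalink"])
```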
On the other hand, it seems like anyone can safely add anyone else's comment to the registry, as long as it's done soon after the comment was written.
Any advice? (I figured if you’re involved at CFAR you might know a bit about this stuff).