This isn’t necessarily true: if you have to think about using that link as charity while shopping, it could decrease your likelihood of doing other charitable things (which is why you should set up a redirect, so that you don’t have to think about it and you use it every time!)
It might be useful to feature a page containing what we, you know, actually think about the basilisk idea. Although the RationalWiki page seems to be pretty solidly at the top of Google search results, we might catch a couple of people looking for the source.
If any xkcd readers are here: Welcome! I assume you’ve already googled what “Roko’s Basilisk” is. For a better idea of what’s actually going on here, see Eliezer’s comment on the xkcd thread (linked in Emile’s comment), or his earlier response here.
You seem to be saying that people can’t talk about, think about, or discuss topics unless they’re currently devoting their life to that topic with maximum effectiveness. That seems… incredibly silly.
Your statements seem especially odd considering that there are people currently doing all of the things you mentioned (which is why you knew to mention them).
Did the survey (a couple days ago).
I wasn’t here for the last survey- are the results predominantly discussed here and on Yvain’s blog?
I find it very useful to have posts like these as an emotional counter to the echo chamber effect. Obviously this has little or no effect on the average LW reader’s factual standpoint, but it reminds us both of how heuristically absurd our ideas can look and of how much we have left to accomplish.
True. I’ve always read at around that speed by default, though, so it’s not due to speedreading techniques, and I don’t know how to improve the average person’s default speed.
This matches my experience. Speedreading software like Textcelerator is nice when I want to go through a fluff story at 1200 WPM, but anything remotely technical requires me to drop to 400-600 WPM at most, and speedreading does not fundamentally affect this limit.
HPMOR is an excellent choice.
What’s your audience like? A book club (presumed interest in books, but not significantly higher maturity or interest in rationality than baseline), a group of potential LW readers, or some average teenagers?
The Martian (Andy Weir) would be a good choice for a book-club-level group- it’s very entertaining to read and promotes useful values. Definitely not of the “awareness-raising” genre, though.
If you think a greater-than-average number of them would be interested in rationality, I’d consider spending some time on Ted Chiang’s work- he has only written short stories so far, but they are very well received, great to read, and bring up some very good points that I’d bet most of your audience hasn’t considered.
Edit: Oh, also think about Speaker for the Dead.
Giving What We Can recommends donating at least 10% of income. I currently donate what I can spare when I don’t need the money, and I have precommitted to 50% of my post-tax income in the event that I acquire a job that pays over $30,000 a year (read: once I graduate from college). The problem with that is that you already have a steady income and have arranged your life around it- it’s much easier not to raise your expenses as your income rises than it is to lower them from an established level.
As EStokes said, however, the important thing isn’t to get caught up in how much you should be donating in order to meet some moral requirement; it’s to actually give, in whatever way you yourself can. We all do what we can :)
How I interpreted the problem: it’s not that identical agents have different utility functions, it’s just that different things happen to them. In reality, what’s behind the door is what’s behind the door, while the simulation rewards X with something else. X is only unaware of whether or not he’s in a simulation before he presses the button- obviously, once he actually receives the utility, he can tell the difference. The fact that nobody else has stated this makes me unsure, though. OP, can you clarify a little bit more?
It’s tempting to say that, but I think pallas actually meant what he wrote. Basically, hitting “not sim” gets you a guaranteed 0.9 utility. Hitting “sim” gets you about 0.2 utility, getting closer to that value as the number of copies increases. Even though each person strictly prefers “sim” to “not sim,” and a CDT agent would choose “sim,” it appears that choosing “not sim” gets you more expected utility.
Edit: “not sim” has higher expected utility for an entirely selfish agent who does not know whether he is simulated or not, because his choice affects not only his utility payout but also acausally affects his state of simulation. Of course, this depends on my interpretation of anthropics.
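To make that concrete, here’s a minimal Python sketch of the kind of expected-utility comparison I have in mind. Only the 0.9 and 0.2 figures come from the setup above; the copy count, the real agent’s “sim” payoff, and the rule tying the choice to whether copies get created at all are my own illustrative assumptions.

```python
# Toy sketch of the "sim" vs. "not sim" expected-utility comparison.
# Only the 0.9 and 0.2 payoffs come from the discussion above; the copy
# count, the real agent's "sim" payoff, and the assumption that pressing
# "not sim" means no copies get created are illustrative guesses.

N_COPIES = 1_000_000    # copies created if the agent presses "sim" (assumed)
U_REAL_NOT_SIM = 0.9    # real agent's payoff for "not sim"
U_REAL_SIM = 1.0        # real agent's payoff for "sim" (assumed)
U_COPY_SIM = 0.2        # each copy's payoff for "sim"

def expected_utility(choice: str) -> float:
    """Expected utility for an agent who cannot tell from the inside whether
    it is the real instance or one of the copies, when the choice itself
    (acausally) determines whether the copies exist at all."""
    if choice == "not sim":
        # Assumed: no copies are created, so the agent is real for sure.
        return U_REAL_NOT_SIM
    # Assumed: N copies are created, so the agent is real with prob 1/(N+1).
    p_real = 1 / (N_COPIES + 1)
    return p_real * U_REAL_SIM + (1 - p_real) * U_COPY_SIM

print(expected_utility("not sim"))  # 0.9
print(expected_utility("sim"))      # ~0.2000008, approaching 0.2 as N grows
```

Under these made-up numbers, “sim” looks better from inside any fixed state of the world (which is why a CDT agent picks it), but “not sim” wins once the choice itself determines how likely you are to be a copy.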
Most of what I know about CEV comes from the 2004 Yudkowsky paper. Considering how many of his views have changed over similar timeframes, and how the paper states multiple times that CEV is a changing work in progress, this seems like a bad thing for my knowledge of the subject. Have there been any significant public changes since then, or are we still debating based on that paper?
I’m interested in your statement that “other people” have estimates that are only a few decades off from optimistic trends. Although not very useful for this conversation, my impression is that a significant portion of informed but uninvolved people place a <50% chance on significant superintelligence occurring within the century. For context, I’m an LW reader and a member of that personality cluster, but none of the people I am exposed to are. Can you explain why your contacts make you feel differently?
I don’t disagree with you- this would, indeed, be a sad fate for humanity, and certainly a failed utopia. But the failing here is not inherent to the idea of an AGI that takes action on its own to improve humanity; it’s a failing of one that doesn’t do what we actually want it to do- a failure to actually achieve friendliness.
Speaking of what we actually want: I want something more like what’s hinted at in the Fun Theory sequence than an AI that merely improves humanity slowly over decades, which seems to be what you’re talking about here. (Tell me if I misunderstood, of course.)
...Which, of course, this post also accomplishes. On second thought, continue!
The answer is, as always, “it depends.” Seriously, though: I time-discount to an extent, and I don’t want to stop entirely. I prefer more happiness to less, and I don’t want to stop that either. (I don’t care about the ending date, and I’m not sure why I would.) If a trade-off exists between the starting date, quality, and duration of a good situation, I’ll prefer one situation over the other based on my utility function. A better course of action would be to try to get more information about my utility function, rather than debating which value is more sacred than the rest.
I’ve voted, but for the sake of clear feedback: I just made my first donation ($100) to MIRI, directly as a result of both this thread and the donation matching. This thread alone would not have been enough, but I would not have found out about the donation matching without it. I had no negative feelings about having this thread in my recent posts list.
Consider this a positive pattern reinforced :)
MMEU (maximin expected utility) fails as a decision theory that we actually want, for the same reason that laypeople’s intuitions about AI fail: it’s rare to have a proper understanding of how powerful the words “maximum” and “minimum” are. As a quick example, actually following MMEU means that a vacuum metastability event is the best thing that could possibly happen to the universe, because it removes the possibility of humanity being tortured for eternity. Add in the fact that it doesn’t let you deal with vanishingly small probabilities correctly (e.g. Pascal’s Wager should never fail to convince an MMEU agent), and I’m seriously confused as to the usefulness of this.
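To spell out the vacuum-decay point with made-up numbers (the utilities, probabilities, and the two-prior ambiguity set below are purely illustrative, not anything from the original post):

```python
# Toy illustration of maximin expected utility (MMEU) over a set of priors.
# Every number here is invented just to show the structure of the objection.

U_STATUS_QUO = 0.0
U_ETERNAL_TORTURE = -1e15   # assumed utility of the worst imaginable outcome
U_VACUUM_DECAY = -1e9       # assumed utility of everyone dying quickly

# For each action, the expected utility under each prior the agent entertains.
# Prior 2 assigns a tiny probability to eternal torture if humanity survives.
actions = {
    "continue as normal": [
        U_STATUS_QUO,                                           # prior 1
        (1 - 1e-5) * U_STATUS_QUO + 1e-5 * U_ETERNAL_TORTURE,   # prior 2: -1e10
    ],
    "vacuum decay": [
        U_VACUUM_DECAY,   # prior 1: torture is off the table either way
        U_VACUUM_DECAY,   # prior 2
    ],
}

# MMEU ranks actions by their worst-case expected utility across the priors.
mmeu_choice = max(actions, key=lambda a: min(actions[a]))
print(mmeu_choice)  # "vacuum decay": a floor of -1e9 beats a floor of -1e10
```

An agent maximizing expected utility under any single reasonable prior keeps humanity around; the MMEU agent prefers whatever caps the downside, no matter how improbable the torture scenario is.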
“active investment with an advisor is empirically superior to passive non-advised investment for most people.” Can you source this?