Whose Goodreads accounts do you follow?
Artaxerxes
If you buy a Humble Bundle these days, you can use their neat sliders to allocate all of the money you’re spending to charities of your choice via the PayPal Giving Fund, including LessWrong favourites like MIRI, SENS and the Against Malaria Foundation. This strikes me as a relatively interesting avenue for charitable giving, considering that it is (at least apparently) as effective per dollar spent as a direct donation to these charities would be.
Contrast this with buying games from the Humble Store, which allocates only 5% of money spent to a chosen charity, or using Amazon Smile, which allocates a minuscule 0.5% of the purchase price of anything you buy (on a $20 purchase, that works out to roughly $20, $1, and $0.10 to charity respectively). While these services are obviously a lot more versatile in terms of the products on offer, they seem to me like things you set up if you’re going to be buying stuff anyway, whereas the bundle sliders look more like a particular opportunity.
Here are a few examples of the kinds of people for whom I think this might be worthwhile:
People who are interested in video games or comics or whatever else is available in Humble Bundles, who could purchase them entirely guilt-free, knowing that the money is going to organisations they like.
People who are averse to more direct giving and donations for whatever reason, who could support organisations they approve of in a more comfortable, transactional way, much as if they were buying merchandise.
People who are expected to give gifts as a social obligation, and for whom the kinds of products offered in these bundles make appropriate gifts, who could do so while all of the money spent goes to support their pet cause.
Can anyone explain to me what non-religious spirituality means, exactly? I had always thought it was an overly vague, if not meaningless, new age term in that context, but I’ve been hearing people like Sam Harris use the term unironically, and 5+% of LW are apparently “atheist but spiritual” according to the last survey, so I figure it’s worth asking in case I’m missing something non-obvious. The Wikipedia page describes a lot of distinct ideas when it isn’t impenetrable, so that didn’t help. There’s one line there where it says
The term “spiritual” is now frequently used in contexts in which the term “religious” was formerly employed.
and that’s mostly how I’m familiar with its usage as well.
This is a really good comment, and I would love to hear responses to objections of this flavour from Eliezer etc.
Saying “we haven’t had a nuclear exchange with Russia yet, therefore our foreign policy and diplomatic strategy is good” is an obvious fallacy. Maybe we’ve just been lucky.
I mean, it’s less about whether or not current policy is good and more about trying to work out how likely it is that the policies resulting from Trump’s election will be worse. You can presuppose that current policies are awful and still think that Trump is likely to make things much worse.
Like, reading through Yudkowsky’s stuff, his LW writings and HPMOR, there is the persistent sense that he is 2 guys.
One guy is like “Here are all of these things you need to think about to make sure that you are effective at getting your values implemented”. I love that guy. Read his stuff. Big fan.
Other guy is like “Here are my values!” That guy...eh, not a fan. Reading him you get the idea that the whole “I am a superhero and I am killing God” stuff is not sarcastic.
It is the second guy who writes his facebook posts.
Yes, I agree with this sentiment and am relieved someone else communicated it so I didn’t have to work out how to phrase it.
I don’t share (and I don’t think my side shares) Yudkowsky’s fetish for saving every life. When he talks about malaria nets as the most effective way to save lives, I am nodding, but I am nodding along to the idea of finding the most effective way to get what you want done, done. Not at the idea that I’ve got a duty to preserve every pulse.
I don’t think Yudkowsky thinks malaria nets are the best use of money anyway, even if they are in the short term the current clearest estimate as to where to put your money in order to maximise lives saved. In that sense I don’t think you disagree with him; he doesn’t fetishise preserving pulses any more than you do. Or at least, that’s what I remember reading. The first thing I could find corroborating that model of his viewpoint is his interview with Horgan.
There is a conceivable world where there is no intelligence explosion and no superintelligence. Or where, a related but logically distinct proposition, the tricks that machine learning experts will inevitably build up for controlling infrahuman AIs carry over pretty well to the human-equivalent and superhuman regime. Or where moral internalism is true and therefore all sufficiently advanced AIs are inevitably nice. In conceivable worlds like that, all the work and worry of the Machine Intelligence Research Institute comes to nothing and was never necessary in the first place, representing some lost number of mosquito nets that could otherwise have been bought by the Against Malaria Foundation.
There’s also a conceivable world where you work hard and fight malaria, where you work hard and keep the carbon emissions to not much worse than they are already (or use geoengineering to mitigate mistakes already made). And then it ends up making no difference because your civilization failed to solve the AI alignment problem, and all the children you saved with those malaria nets grew up only to be killed by nanomachines in their sleep. (Vivid detail warning! I don’t actually know what the final hours will be like and whether nanomachines will be involved. But if we’re happy to visualize what it’s like to put a mosquito net over a bed, and then we refuse to ever visualize in concrete detail what it’s like for our civilization to fail AI alignment, that can also lead us astray.)
I think that people who try to do thought-out philanthropy, e.g., Holden Karnofsky of GiveWell, would unhesitatingly agree that these are both conceivable worlds we prefer not to enter. The question is just which of these two worlds is more probable as the one we should avoid. And again, the central principle of rationality is not to disbelieve in goblins because goblins are foolish and low-prestige, or to believe in goblins because they are exciting or beautiful. The central principle of rationality is to figure out which observational signs and logical validities can distinguish which of these two conceivable worlds is the metaphorical equivalent of believing in goblins.
I think it’s the first world that’s improbable and the second one that’s probable. I’m aware that in trying to convince people of that, I’m swimming uphill against a sense of eternal normality – the sense that this transient and temporary civilization of ours that has existed for only a few decades, that this species of ours that has existed for only an eyeblink of evolutionary and geological time, is all that makes sense and shall surely last forever. But given that I do think the first conceivable world is just a fond dream, it should be clear why I don’t think we should ignore a problem we’ll predictably have to panic about later. The mission of the Machine Intelligence Research Institute is to do today that research which, 30 years from now, people will desperately wish had begun 30 years earlier.
Also, on this:
Yes, electing Hillary Clinton would have been a better way to ensure world prosperity than electing Donald Trump would. That is not what we are trying to do. We want to ensure American prosperity.
Especially here, I’m pretty sure Eliezer is more concerned about general civilisational collapse and other globally negative outcomes, which he sees as non-trivially more likely with Trump as president. I don’t think this is so much a difference in values, specifically in how much each of you values each level of the concentric circles of groups proximal to you. At the very least, I don’t think he would agree that a Trump presidency would be likely to result in improved American prosperity relative to a Clinton one.
I just want to point out that Yudkowsky is making the factual mistake of modeling us as being shitty at achieving his goals, when in truth we are canny at achieving our own.
I think this is probably not what’s going on; I honestly think Eliezer is taking a more big-picture view here, in the sense that he is concerned more about the increased probability of doomsday scenarios and other outcomes unambiguously bad for most human goals. That’s the message I got from his Facebook posts, anyway.
LessWrong has, if anything, made me more able to derive excitement and joy from minor things, so if I were you I would check whether LW is really to blame, or otherwise find out whether other factors are causing this problem.
You didn’t link to your MAL review for The Wind Rises!
Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity by Holden Karnofsky. Somehow missed this when it was posted in May.
Compare, for example, Thoughts on the Singularity Institute (SI), one of the most highly upvoted posts ever on LessWrong.
Edit: See also Some Key Ways in Which I’ve Changed My Mind Over the Last Several Years
What’s the worst-case scenario involving climate change, given that for some reason no large-scale wars occur due to the instability it contributes?
Climate change is very mainstream, with plenty of people and dollars working on the issue. LW and LW-adjacent groups discuss many causes that are thought to be higher impact and to have more room for additional attention.
But I realised recently that my understanding of climate-change-related risks could probably be better, and I’m not easily able to compare the scale of those risks to that of other causes. In particular I’m interested in estimates of metrics such as lives lost, economic cost, and the like.
If anyone can give me a rundown or point me in the right direction that would be appreciated.
Sure, but that doesn’t change all the tax he evaded.
There is this.
Not to mention that all that tax evasion never actually got resolved.
CGP Grey has read Bostrom’s Superintelligence.
Transcript of the relevant section:
Q: What do you consider the biggest threat to humanity?
A: Last Q&A video I mentioned opinions and how to change them. The hardest changes are the ones where you’re invested in the idea, and I’ve been a techno-optimist 100% all of my life, but [Superintelligence: Paths, Dangers, Strategies] put a real asterisk on that in a way I didn’t want. And now Artificial Intelligence is on my near term threat list in a deeply unwelcome way. But it would be self-delusional to ignore a convincing argument because I don’t want it to be true.
I like how this response describes motivated cognition, the difficulty of changing your mind, and the Litany of Gendlin.
He also apparently discusses this topic on his podcast, and links to the Amazon page for the book in the description of the video.
Grey’s video about technological unemployment was pretty big when it came out, and it seemed to me at the time that he wasn’t far off from realising that there were other, rather plausible implications of increasing AI capability as well, so it’s cool to see that it has happened.
This exists, at least.
Took it!
It ended somewhat more quickly this time.
Typo in question 42
Yes, but I don’t think its logical conclusions apply, for other reasons.
Dawkins’ The Greatest Show on Earth is pretty comprehensive. The shorter the work compared to that, the more you risk missing widely held misconceptions people have.
Not a guide, but I think the vocab you use matters a lot. Try tabooing ‘rationality’; the word itself mindkills some people straight to straw Vulcan, etc. Do the same with any other words that have the same effect.
I recall being taught to argue towards a predetermined point of view in school and in extra-curricular activities like debating. Is that counterproductive or suboptimal?
This has been talked about before. One suggestion is to not make it a habit.
I’ve heard good things about Dan Carlin’s history podcasts, but I’ve never been sure which to listen to first. Is this a good choice, or does it assume you’ve heard some of his other ones, or are other podcasts perhaps better to listen to first?