They use the number of stars in the observable universe instead of the number of stars in the whole universe. This ruins their calculation. I wrote a little more here.
Here’s an eerie line showing about 200 new Cryonics Institute members every 3 years.
Charity Science, which fundraises for GiveWell’s top charities, needs $35k to keep going this year. They’ve been appealing to non-EAs from the Skeptics community and lots of other groups, and essentially act as an appealing front end for GiveWell. More here. (Full disclosure: I’m on their Board of Directors.)
A more precise way to avoid the oxymoron is “logically impossible epistemic possibility”. I think ‘epistemic possibility’ is used in philosophy in approximately the way you’re using the term.
Links are dead. Is there anywhere I can find your story now?
Done! Ahhh, another year, another survey. I feel like I did one just a few months ago. I wish I knew my previous answers about gods, aliens, cryonics, and simulators.
I don’t have an answer, but here’s a guess: for any given pre-civilizational state, I imagine there are many filters. If we model each filter as having a kill rate, then my (unreliable stats) intuition tells me that a prior on the kill rate distribution should be log-normal. I think this suggests that most of the killing happens on the left-most outlier, but someone better at stats should check my assumptions.
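For what it’s worth, here’s a quick simulation of that intuition. Everything in it is my own toy setup, not from the original question: the number of filters, the log-normal parameters, and the reading of “kill rate” as a hazard where survival through filter i is exp(-hazard_i) are all assumptions. It just checks how much of the total kill hazard tends to sit in a single extreme filter:

```python
import numpy as np

rng = np.random.default_rng(0)
n_filters = 20      # hypothetical number of filters along one pre-civilizational path
sigma = 2.0         # spread of the log-normal prior; a heavier tail means more extreme outliers

# Draw a kill hazard for each filter; survival through filter i is exp(-hazard_i),
# so a filter's share of the total hazard is its share of the total log-survival penalty.
hazards = rng.lognormal(mean=0.0, sigma=sigma, size=(100_000, n_filters))
share_of_strongest = hazards.max(axis=1) / hazards.sum(axis=1)

print(f"median share of total kill hazard from the single strongest filter: "
      f"{np.median(share_of_strongest):.2f}")
```

Under these assumptions, the larger sigma is, the more the total hazard tends to be concentrated in one outlier filter; with sigma near zero the filters contribute roughly equally.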
It sounds like CSER could use a loan. Would it be possible for me to donate to CSER and to get my money back if they get $500k+ in grants?
From the perspective of long-term, high-impact altruism, highly math-talented people are especially worth impacting for a number of reasons. For one thing, if AI does turn out to pose significant risks over the coming century, there’s a significant chance that at least one key figure in the eventual development of AI will have done amazingly well on math tests in high school, judging from the history of past such achievements. An eventual scaled-up SPARC program, including math talent from all over the world, may be able to help that unknown future scientist build the competencies he or she will need to navigate that situation well.
More broadly, math talent may be relevant to other technological breakthroughs over the coming century; and tech shifts have historically impacted human well-being quite a lot relative to the political issues of any given day.
I’m extremely interested in this being spelled out in more detail. Can you point me to any evidence you have of this?
Finally did it. I’d like exactly 7 karma please.
For the goal of eventually creating FAI, it seems work can be roughly divided into making the first AGI (1) have humane values and (2) keep those values. Current attention seems to be focused on the 2nd category of problems. The work I’ve seen in the first category: CEV (9 years old!), Paul Christiano’s man-in-a-box indirect normativity, Luke’s decision neuroscience, Daniel Dewey’s value learning… I really like these approaches but they are only very early starting points compared to what will eventually be required.
Do you have any plans to tackle the humane values problem? Do MIRI-folk have strong opinions on which direction is most promising? My worry is that if this problem really is as intractable as it seems, then working on problem (2) is not helpful, and our only option might be to prevent AGI from being developed through global regulation and other very difficult means.
Are you thinking of this 80,000 Hours post?
This may be just about vegetarians around me, but often people who are into vegetarianism are also into other forms of food limitations.
I think I’ve noticed this a bit since switching to a vegan(ish) diet 4 months ago. My guess is that once a person starts making diet restrictions, it becomes much easier to make diet restrictions, and once a person starts learning where their food comes from, it becomes easier to find reasons to make diet restrictions (even dumb reasons).
Value drift fits your constraints. Our ability to drift accelerates as enhancement technologies increase in power. If values drift substantially and in undesirable ways because of, e.g., peacock contests, then (a) our values lose what control they currently have, (b) we could lose significant utility because of the fragility of value, (c) it is not an extinction event, and (d) it seems as easy to affect as x-risk reduction.
I can’t figure out what you mean by:
Hiding animal suffering probably makes us “more ethical”.
Do you mean that it just makes us appear more ethical?
One major difference is that you are talking about what to care about and Eliezer was talking about what to expect.
What do you mean?
According to the PhilPapers survey results, 4.3% of respondents believe in idealism (i.e., Berkeley-style reality).
This seems to me like a major spot where the dualistic model of self-and-world gets introduced into reinforcement learning AI design (which leads to the Anvil Problem). It seems possible to model memory as part of the environment by simply adding I/O actions to the list of actions available to the agent. However, if you want the agent to act on something it has read, you either need to model this with atomic read-and-if-X-do-Y actions, or the agent still needs some minimal internal memory to store the previous item(s) it read.
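To make that concrete, here’s a toy sketch (entirely my own construction, not any existing framework): an environment whose state includes a memory tape that the agent can only touch through I/O actions. A purely reactive policy can only branch on what it read if that value is carried either inside a composite read-and-branch action or in at least one bit of agent-internal state:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MemoryTapeEnv:
    """Toy environment whose state includes the agent's 'memory' as a tape."""
    tape: List[int] = field(default_factory=lambda: [0] * 8)
    head: int = 0

    def step(self, action: str) -> int:
        # Plain I/O actions: all storage lives in the environment, not the agent.
        if action == "MOVE_RIGHT":
            self.head = (self.head + 1) % len(self.tape)
        elif action == "WRITE_1":
            self.tape[self.head] = 1
        elif action == "READ":
            return self.tape[self.head]             # value returned as the observation
        elif action == "READ_AND_IF_1_WRITE_0":     # an atomic read-and-if-X-do-Y action
            if self.tape[self.head] == 1:
                self.tape[self.head] = 0
            return self.tape[self.head]
        return 0

def reactive_policy(last_observation: int) -> str:
    # 'last_observation' is exactly the minimal internal memory mentioned above:
    # without it (or a composite action), the policy cannot act on what was just read.
    return "WRITE_1" if last_observation == 0 else "MOVE_RIGHT"

env = MemoryTapeEnv()
obs = env.step("READ")
obs = env.step(reactive_policy(obs))
```

The point is just that “memory in the environment” works fine for storage, but branching on what was stored has to live somewhere: either fused into atomic actions or in a sliver of state inside the agent.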
But the lower bound of this is still well below one. We can’t use our existence in the light cone to infer that there’s at least roughly one per light cone. There can be arbitrarily many empty light cones.