From London, now living in the Santa Cruz mountains.
Paul Crowley
It is, of course, third-party visible that Eliezer-2010 *says* it’s going well. Anyone can say that, but not everyone does.
I note that nearly eight years later, the preimage was never revealed.
Actually, I have seen many hashed predictions, and I have never seen a preimage revealed. At this stage, if someone reveals a preimage to demonstrate a successful prediction, I will be about as impressed as if someone wins a lottery, noting the number of losing lottery tickets lying about.
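For concreteness, here is a minimal sketch of the commit-and-reveal pattern I'm describing, assuming SHA-256 as the hash; the prediction text and salt are purely illustrative. You publish only the digest at prediction time, and revealing the preimage later is the step that, in my experience, never seems to happen:

```python
import hashlib

# Commit: publish only this digest at prediction time.
prediction = b"By 2015, X will have happened. (salt: 7f3a9c)"
commitment = hashlib.sha256(prediction).hexdigest()
print("post publicly now:", commitment)

# Reveal: later, post the original text; anyone can re-hash it and
# check that it matches the digest published earlier.
assert hashlib.sha256(prediction).hexdigest() == commitment
print("reveal later:", prediction.decode())
```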
Half-formed thoughts towards how I think about this:
Something like Turing completeness is at work, where our intelligence gains the ability to loop in on itself and build on its former products (e.g. definitions) to reach new insights. We are at the threshold of the transition to this capability, half god and half beast, so even a small change in how far we are across that threshold makes a big difference.
As such, if you observe yourself to be in a culture that is able to reach technological maturity, you’re probably “the stupidest such culture that could get there, because if it could be done at a stupider level then it would’ve happened there first.”
Who first observed this? I say this a lot, but I’m now not sure if I first thought of it or if I’m just quoting well-understood folklore.
May I recommend spoiler markup? Just start the line with >!
Another (minor) “Top Donor” opinion. On the MIRI issue: I agree with your concerns, but will continue donating, for now. I assume they’re fully aware of the problem they’re presenting to their donors and will address it in some fashion. If they do not, I might adjust next year. The hard thing is that MIRI still seems to be the org most differentiated in approach and talent that can use funds (vs OpenAI and DeepMind and well-funded academic institutions).
I note that this is now done, as I have for so many things here. Great work, team!
Spoiler space test
Rot13's content, hidden using spoiler markup:
Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. Additionally, they already have a larger budget than any other organisation (except perhaps FHI) and a large amount of reserves.
Despite FHI producing very high quality research, GPI having a lot of promising papers in the pipeline, and both having highly qualified and value-aligned researchers, the requirement to pre-fund researchers’ entire contracts significantly increases the effective cost of funding research there. On the other hand, hiring people in the Bay Area isn’t cheap either.
This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year.
I think of CSER and GCRI as being relatively comparable organisations, as 1) they both work on a variety of existential risks and 2) both primarily produce strategy pieces. In this comparison I think GCRI looks significantly better: it is not clear that their total output, all things considered, is any less than CSER’s, yet they have produced it on a dramatically smaller budget. As such I will be donating some money to GCRI again this year.
ANU, DeepMind and OpenAI have all done good work, but I don’t think it is viable for (relatively) small individual donors to meaningfully support their work.
Ought seems like a very valuable project, and I am torn on donating, but I think their need for additional funding is slightly less than some other groups.
AI Impacts is in many ways in a similar position to GCRI, with the exception that GCRI is attempting to scale by hiring its part-time workers to full-time, while AI Impacts is scaling by hiring new people. The former is significantly lower risk, and AI Impacts seems to have enough money to try out the upsizing for 2019 anyway. As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019.
The Foundational Research Institute have done some very interesting work, but seem to be adequately funded, and I am somewhat more concerned about the danger of risky unilateral action here than with other organisations.
I haven’t had time to evaluate the Foresight Institute, which is a shame because at their small size marginal funding could be very valuable if they are in fact doing useful work. Similarly, Median and Convergence seem too new to really evaluate, though I wish them well.
The Future of Life Institute grants for this year seem more valuable to me than the previous batch, on average. However, I prefer to directly evaluate where to donate, rather than outsourcing this decision.
I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. The current situation, with a binary employed/not-employed distinction, and upfront payment for uncertain output, seems suboptimal. I also hope to significantly reduce overhead (for everyone but me) by not having an application process or any requirements for grantees beyond having produced good work. This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues.
I think the Big Rationalist Lesson is “what adjustment to my circumstances am I not making because I Should Be Able To Do Without?”
Just to get things started, here’s a proof for #1:
Proof by induction that the number of bicolor edges is odd iff the ends don’t match. Base case: a single node has matching ends and an even number (zero) of bicolor edges. Extending with a non-bicolor edge changes neither condition, and extending with a bicolor edge changes both; in both cases the induction hypothesis is preserved.
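To make the invariant concrete, here is a small exhaustive check (a sketch assuming the puzzle is about a path of nodes colored with two colors, where a bicolor edge is one whose endpoints differ in color and “the ends match” means the first and last nodes share a color; the names are mine):

```python
import itertools

def bicolor_edges(colors):
    """Count edges of the path whose two endpoint colors differ."""
    return sum(a != b for a, b in zip(colors, colors[1:]))

# Exhaustively check every two-coloring of paths up to length 8:
# the number of bicolor edges is odd exactly when the two ends differ.
for n in range(1, 9):
    for coloring in itertools.product("AB", repeat=n):
        ends_differ = coloring[0] != coloring[-1]
        assert (bicolor_edges(coloring) % 2 == 1) == ends_differ
print("parity claim verified for every path checked")
```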
From what I hear, any plan for improving the MIRI/CFAR space that involves the collaboration of the landlord is dead in the water; they just always say no to things, even when it’s “we will cover all costs to make this lasting improvement to your building”.
Of course I should have tested it before commenting! Thanks for doing so.
Spoiler markup. This post has lots of comments which use ROT13 to disguise their content. There’s a Markdown syntax for this.
I note that this is now done.
I note that this is now done.
“If you’re running an event that has rules, be explicit about what those rules are, don’t just refer to an often-misunderstood idea” seems unarguably a big improvement, no matter what you think of the other changes proposed here.
I notice your words are now larger thanks to the excellence of this comment!
Excellent, my words will finally get the prominence they deserve!
When does voting close? EDIT: “This vote will close on Sunday March 18th at midnight PST.”
I thought of a similar example to yours for big-low-status, but I couldn’t think of an example I was happy with for small-high-status. Every example I could think of was one where someone is visually small, but you already know they’re high status. So I was struck when your example also used someone we all know is high status! Is there a pose or way of looking which both looks small and communicates high status, without relying on some obvious marker like a badge or a crown?
Hey, looks like you’re still active on the site, would be interested to hear your reflections on these predictions ten years on—thanks!