Thanks!
You said
If you “withdraw from a cause area,” you would expect that an organization doing good work in multiple cause areas would still be funded for its work in the cause areas that funding wasn’t withdrawn from. However, what actually happened is that Open Phil blacklisted a number of ill-defined broad associations and affiliations, where if you are associated with a certain set of ideas, identities, or causes, then no matter how cost-effective your other work is, you cannot get funding from OP
I’m wondering if you have a list of organizations where Open Phil would have funded their other work, but decided to withdraw entirely because it withdrew funding from part of the organization.
This feels importantly different from Good Ventures choosing not to fund certain cause areas (and I think you agree, which is why you added that footnote).
what actually happened is that Open Phil blacklisted a number of ill-defined broad associations and affiliations
Is there a list of these somewhere, or details on what happened?
METR is hiring ML Research Engineers and Scientists
Thanks for writing this up! I wonder how feasible it is to just do a cycle of bulking and cutting and then do one of body recomposition and compare the results. I expect that the results will be too close to tell a difference, which I guess just means that you should do whichever is easier.
I think it would help others calibrate, though obviously it’s fairly personal.
Possibly too sensitive, but could you share how the photos performed on Photofeeler? In particular, what percentile attractiveness?
Sure, I think everyone agrees that marginal returns to labor diminish with the number of employees. John’s claim though was that returns are non-positive, and that seems empirically false.
We have Wildeford’s Third Law: “Most >10 year forecasts are technically also AI forecasts”.
We need a law like “Most statements about the value of EA are technically also AI forecasts”.
Yep that’s fair, there is some subjectivity here. I was hoping that the charges from SDNY would have a specific amount that Sam was alleged to have defrauded, but they don’t seem to.
Regarding $4B missing: adding in Anthropic gets another $4B on the EA side of the ledger, and Founders Pledge another $1B. The value produced by Anthropic is questionable, and of course maybe negative, but I think by the strict definition of “donated or built in terms of successful companies” EA comes out ahead.
(And OpenAI gets another $80B, so if you count that then I think even the most aggressive definition of how much FTX defrauded is smaller. But obviously OAI’s EA credentials are dubious.)
EA has defrauded much more money than we’ve ever donated or built in terms of successful companies
FTX is missing $1.8B. Open Phil has donated $2.8B.
I do think it’s at the top of frauds in the last decade, though that’s a narrower category.
Nikola went from a peak market cap of $66B to ~$1B today, vs. FTX which went from ~$32B to [some unknown but non-negative number].
I also think the Forex scandal counts as bigger (as one reference point, banks paid >$10B in fines), although I’m not exactly sure how one should define the “size” of fraud.[1]
I wouldn’t be surprised if there’s some precise category in which FTX is the top, but my guess is that you have to define that category fairly precisely.
[1] Wikipedia says “the monetary losses caused by manipulation of the forex market were estimated to represent $11.5 billion per year for Britain’s 20.7 million pension holders alone,” which, if anywhere close to true, would make this way bigger than FTX. But I think the methodology behind that number is just guessing that market manipulation made foreign exchange x% less efficient and then multiplying through by x%, which isn’t a terrible methodology but also isn’t super rigorous.
Oh yeah, just because it’s a reference point doesn’t mean we should copy them.
I think almost any large organization/company would have gone through a much more comprehensive fault-analysis and would have made many measurable improvements.
I claim Y Combinator is a counterexample.
(The existence of one counterexample obviously doesn’t contradict the “almost any” claim.)
IMO the EA community has had a reckoning, a post-mortem, an update, etc. far more than most social or political movements would (and do) in response to similar misbehavior from a prominent member
As a reference point: fraud seems fairly common in Y Combinator-backed companies, but I can’t find any sort of post-mortem, even about major cases like uBiome, where the founders are literally fugitives from the FBI.
It seems like you could tell a fairly compelling story that YC pushing founders to pursue risky strategies and flout rules is upstream of this level of fraudulent behavior, though I haven’t investigated closely.
My guess is that they just kind of accept that their advice to founders is just going to backfire 1-2% of the time.
Debate series: should we push for a pause on the development of AI?
Thanks for the questions!
I feel a little confused about this myself; it’s possible I’m doing something wrong. (The code I’m using is the `get_prob` function in the linked notebook; someone with LLM experience can probably say if that’s broken without understanding the context.) My best guess is that human intuition has a hard time conceptualizing just how many possibilities exist; e.g. “Female”, “female”, “F”, “f” etc. are all separate tokens which might realistically be continuations.
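For anyone who wants to sanity-check this, here is a minimal sketch (not the notebook's `get_prob`, and assuming a HuggingFace GPT-2-style model) of summing next-token probability mass over several surface forms of the same answer:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical illustration (not the notebook's get_prob): sum the next-token
# probability mass over several surface forms of the same answer.
model_name = "gpt2"  # assumption; the notebook may use a different model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def prob_of_variants(prompt: str, variants: list[str]) -> float:
    """Total next-token probability assigned to any of the listed variants."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits for the next token
    probs = torch.softmax(logits, dim=-1)
    total = 0.0
    for v in variants:
        ids = tokenizer.encode(v)
        # Only count variants that are a single token; multi-token variants
        # would need a product of conditional probabilities instead.
        if len(ids) == 1:
            total += probs[ids[0]].item()
    return total

# Probability mass spread across different spellings of the same answer
print(prob_of_variants("Ben's gender is", [" Female", " female", " F", " f"]))
```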
I haven’t noticed anything; my guess is that there probably is some effect but it would be hard to predict ex ante. The weights used to look up information about “Ben” are also the weights used to look up information about “the Eiffel Tower”, so messing with the former will also mess with the latter, though I don’t really understand how.
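One cheap way to look for this kind of spillover is to compare the model's confidence on an unrelated fact before and after the modification. A rough sketch, where `edited_model` is a stand-in for the weights after the post's gender edit (here it is just a second copy of the base model so the script runs):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Rough spillover check: does editing "Ben" change an unrelated fact?
# `edited_model` stands in for the weights after the ROME-style edit.
tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumption
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
edited_model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def next_token_prob(m, prompt: str, continuation: str) -> float:
    """Probability that `continuation` is the very next token after `prompt`."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = m(**inputs).logits[0, -1]
    token_id = tokenizer.encode(continuation)[0]
    return torch.softmax(logits, dim=-1)[token_id].item()

prompt = "The Eiffel Tower is located in the city of"
print("before edit:", next_token_prob(model, prompt, " Paris"))
print("after edit: ", next_token_prob(edited_model, prompt, " Paris"))
```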
A thing I would really like to do here is better understand “superposition”. A really cool finding would be something like: messing with the “gender” dimension of “Ben” is the same as messing with the “architected by” dimension of “the Eiffel Tower” because the model “repurposes” the gender dimension when talking about landmarks since landmarks don’t have genders. But much more research would be required here to find something like that.
My guess is that this is just randomness. It would be interesting to force the random seed to be the same before and after modification and see how much it actually changes.
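A minimal sketch of what I mean by fixing the seed (assuming the notebook samples with PyTorch); re-seeding before each generation isolates the effect of the weight change from sampling noise:

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 0) -> None:
    """Fix every RNG that sampling-based generation might touch."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# Re-seeding before each generation means the sampled tokens can only differ
# if the underlying probabilities changed, not because of sampling noise.
probs = torch.tensor([0.7, 0.2, 0.1])
set_seed(0)
a = torch.multinomial(probs, num_samples=5, replacement=True)
set_seed(0)
b = torch.multinomial(probs, num_samples=5, replacement=True)
assert torch.equal(a, b)  # identical draws under the same seed
```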
Gender Vectors in ROME’s Latent Space
Thanks! I mentioned Anthropic in the post, but would similarly find it interesting if someone did a write-up about Cohere. It could be that OAI is not representative for reasons I don’t understand.
I think the claim is that things with more exposure to AI are more expensive.