The predictions of yours that seemed (somewhat) overly paranoid were more about Anthropic than OpenPhil; the dynamics seemed similar and I didn’t check that hard while writing the comment. (Maybe some predictions about how/why the OpenAI board drama went down, which was at the intersection of all three orgs, which I don’t think have been explicitly revealed to be “too paranoid” but I’d still probably take bets against.)
(I think I agree that overall you were more like “not paranoid enough” than “too paranoid”, although I’m not very confident)
My sense is my predictions about Anthropic have also not been pessimistic enough, though we have not yet seen most of the evidence. Maybe a good time to make bets.
I kinda don’t want to litigate it right now, but, I was thinking “I can think of one particular Anthropic prediction Habryka made that seemed false and overly pessimistic to me”, which doesn’t mean I think you’re overall uncalibrated about Anthropic, and/or not pessimistic enough.
And (I think Habryka got this, but for the benefit of others), a major point of my original comment was not just “you might be overly paranoid/pessimistic in some cases”, but that ambiguity about how paranoid/pessimistic it’s appropriate to be results in a confusing, miasmic social-epistemic process (where maybe you are exactly calibrated on how pessimistic to be, but it comes across as too aggro to other people, who push back). This can be bad whether you’re somewhat-too-pessimistic, somewhat-too-optimistic, or exactly calibrated.
My recollection is that Habryka seriously considered hypotheses that involved worse and more coordinated behavior than reality, but that this is different from “this was his primary hypothesis that he gave the most probability mass to”. And then he did some empiricism and falsified the hypotheses and I’m glad those hypotheses were considered and investigated.
Here’s an example of him giving 20–25% to a hypothesis about conspiratorial behavior that I believe has turned out to be false.
Yep, that hypothesis seems mostly wrong, though I more feel like I received 1-2 bits of evidence against it. If the board had stabilized with Sam being fired, even given all I know, I would have still thought a merger with Anthropic to be like ~5%-10% likely.