I think the critique of Redwood Research made a few valid points. My own critique of Redwood would go something like:
- they hired too few support staff to keep their primary researchers well supported and happy, and thus had unnecessarily high turnover
- they hired too high a proportion of junior researchers, who were in an unsettled phase of life and unlikely to stick with any one job for long, again contributing to too much turnover and to a lack of researchers who knew what to expect from a workplace and how to maintain their work-life balance.
Not much of a critique, honestly. A reasonable mistake that a lot of start-ups led by young, inexperienced people would make, and certainly something fixable. Also, they have longer AGI timelines than I do, and thus are not acting with what I see as sufficient urgency. But I don't think it's necessarily fair for me to critique orgs for having their own well-considered opinions on this that differ from my own. I'm not even sure that adopting my timelines would improve their output at all.
This critique, on the other hand, seems entirely invalid and counterproductive. You criticize Conjecture's CEO for being… a charismatic leader who is good at selling himself and leading people? Because he's not… a senior academic with a track record of published papers? Nonsense. Expecting the CEO to be the primary technical expert seems highly misguided to me. The CEO needs to know enough about the technical aspects to hire good technical people, and then needs to coordinate and inspire those people and promote the company. I think Connor is an excellent pick for this, and your criticisms of him are entirely beside the point, and also rather rude.
Conjecture, and Connor, seem to actually be trying to do something which strikes at the heart of the problem. Something which might actually help save us three years from now, when the leading AI labs have in their possession powerful AGI after a period of recursive self-improvement by almost-but-not-quite-AGI. I expect this AGI will be too untrustworthy to make more than very limited use of. So then, looking around for ways to make use of their newfound dangerous power, what will they see? Some still-immature interpretability research. Sure. And then? Maybe they'll see the work Conjecture has started and realize that breaking down the big black magic box into smaller, more trustworthy pieces is one of the best paths forward. Then they can go knocking on Conjecture's door, collect the research so far, and finish it themselves with their abundant resources.
My criticism of their plan is primarily: you need even more staff and more funding to have a better chance of this working. Which is basically the opposite of the conclusion you come to.
As for the untrustworthiness of their centralized infohazard policy… Yeah, this would be bad if the incentives were for the central individual to betray the world for their own benefit. That's super not the case here. The incentive is very much the opposite, for much the same reason that I feel pretty trusting of the heads of DeepMind, OpenAI, and Anthropic. Their selfish incentives to not destroy themselves and everyone they love are well aligned with humanity's desire to not be destroyed. Power-seeking in this case is a good thing! Power over the world through AGI, to these clever people, clearly means learning to control that untrustworthy AGI… and thus means learning how to save the world. My threat model says that the main danger comes not from the heads of the labs, but from employees unconvinced about safety who might leave to start their own projects, or from outside people replicating the results the big labs have achieved but with far fewer safety precautions.
I think reasonable safety precautions, like not allowing unlimited unsupervised recursive self-improvement, not allowing source code or model weights to leave the lab, sandbox testing, etc., can actually be quite effective in the short term in protecting humanity from rogue AGI. I don't think surprise-FOOM-in-a-single-training-run-resulting-in-a-sandbox-escaping-superintelligence is a likely threat model. I think a far more likely threat model is foolish amateurs or bad actors tinkering with dangerous open-source code, stumbling into an algorithmic breakthrough they didn't expect and don't understand, and foolishly releasing it onto the web.
I think compute governance offers only a very limited hope. We can't govern compute for long, if at all, because the compute needed will drop hugely once more efficient training algorithms are found.
Yeah, this confused me a little too. My current job (in soil science) has a non-academic boss and a team of us boffins, and he doesn't need to be an academic, because it's not his job; he just has to know where the money comes from, and how to stop the stakeholders from running away screaming when us soil nerds turn up to a meeting and start emitting maths and graphs out of our heads. Likewise at the previous place I was at, I was the only non-PhD-haver on the technical staff (being a 'mere' postgrad), and again our boss wasn't academic at all. But he WAS a leader of men and herder of cats, and cat herding is probably a more important skill in that role than actually knowing what those cats are talking about.
And it all works fine. I don't need an academic boss, even if I think an academic boss would be nice. I need a boss who knows how to keep the payroll from derailing, and I suspect the vast majority of science workers feel the same way.
Note that we don't criticize Connor specifically, but rather the lack of a senior technical expert on the team in general (including Connor). Our primary criticisms of Connor don't have to do with his leadership skills (which we don't comment on at any point in the post).
I’m confused about the disagree votes. Can someone who disagree-voted say which of the following claims they disagreed with:
1. Omega criticized the lack of a senior technical expert on Conjecture’s team.
2. Omega's primary criticisms of Connor don't have to do with his leadership skills.
3. Omega did not comment on Connor's leadership skills at any point in the post.
Beren Millidge is not a senior technical expert?
Nathan Helm-Burger used a different notion of "leadership" (like a startup CEO) to criticise the post, and Omega responded by saying something about "management" leadership, which doesn't really respond to Nathan's comment.
Ah I see. Hmm, if I say “Yesterday I said X,” people-who-talk-like-me will interpret contextless disagreement with that claim as “Yesterday I didn’t say X” and not as “X is not true.” Perhaps this is a different communication norm from LW standards, in which case I’ll try to interpret future agree/disagree comments in that light.
I agree from quickly looking at Beren’s LinkedIn page that he seems like a technical expert (I don’t know enough about ML to have a particularly relevant inside-view about ML technical expertise).
I think the (perhaps annoying) fact is that LW readers aren’t a monolith and different people interpret disagreement votes differently.
BTW, from a comment on the EA Forum cross-post, I discovered that Beren reportedly left Conjecture very recently. That's indeed a negative update on Conjecture for me (not so much because he specifically left, but because it suggests a high turnover rate). Regardless, this doesn't support the inference Omega makes in this report, along the lines of "Conjecture's research is iffy because they don't have senior technical experts and don't know what they are doing": that wasn't true until very recently and probably still isn't true (it's overwhelmingly likely there are other technical experts still working at Conjecture), so it doesn't invalidate or stain the research that has already been done and published.