We appreciate your detailed reply outlining your concerns with the post.
Our understanding is that your key concern is that we are judging Conjecture based on their current output, whereas since they are pursuing a hits-based strategy we should expect in the median case for them to not have impressive output. In general, we are excited by hits-based approaches, but we echo Rohin’s point: how are we meant to evaluate organizations if not by their output? It seems healthy to give promising researchers sufficient runway to explore, but $10 million and a team of twenty seems on the higher end of what we would want to see supported purely on the basis of speculation. What would you suggest as the threshold at which we should start to expect to see results from organizations?
We are unsure where else you disagree with our evaluation of their output. If we understand correctly, you agree that their existing output has not been that impressive, but think that it is positive they were willing to share preliminary findings and that we have too high a bar for evaluating such output. We’ve generally not found their preliminary findings to significantly update our views, whereas we would, for example, be excited by rigorous negative results that save future researchers from going down dead ends. However, if you’ve found engaging with their output to be useful to your research then we’d certainly take that as a positive update.
Your second key concern is that we provide limited evidence for our claims regarding the VCs investing in Conjecture. Unfortunately, for confidentiality reasons, we are limited in what information we can disclose: it’s reasonable if you consequently wish to discount this view. As Rohin said, it is normal for VCs to be profit-seeking. We do not mean to imply these VCs are unusually bad for VCs, just that their primary focus will be the profitability of Conjecture, not safety impact. For example, Nat Friedman has expressed skepticism of safety (e.g. this Tweet) and is a strong open-source advocate, which seems at odds with Conjecture’s info-hazard policy.
We have heard from multiple sources that Conjecture has pitched VCs on a significantly more product-focused vision than they are pitching EAs. These sources have either spoken directly to VCs, or have spoken to Conjecture leadership who were part of negotiations with VCs. Given this, we are fairly confident that Conjecture is representing themselves differently to different groups.
We believe your third key concern is that our recommendations are overconfident. We agree there is some uncertainty, but think it is important to make actionable recommendations, and based on the information we have our sincerely held belief is that most individuals should not work at Conjecture. We would certainly encourage individuals to consider alternative perspectives (including those expressed in this comment) and to ultimately make up their own mind rather than deferring, especially to an anonymous group of individuals!
Separately, I think we might consider the opportunity cost of working at Conjecture to be higher than you do. In particular, we’d generally evaluate skill-building routes fairly highly: for example, being a research assistant or PhD student in academia, or working in an ML engineering position on an applied team at a major tech company. These are generally close to capabilities-neutral, and can make individuals vastly more productive. Given the limited information on CoEm it’s hard to assess whether or not it will work, but we think there’s ample evidence that there are better places to develop skills than Conjecture.
We wholeheartedly agree that it is important to maintain high epistemic standards in a critique. We have tried hard to differentiate between well-established facts, our observations from sources, and the opinions we formed from those. For example, the About Conjecture section focuses on facts; the Criticisms and Suggestions section includes our observations and opinions; and the Our Views on Conjecture section is more strongly focused on our opinions. We’d welcome feedback on any areas where you feel we over-claimed.
Meta: Thanks for taking the time to respond. I think your questions are in good faith and address my concerns; I do not understand why the comment has been downvoted so much by other people.
1. Obviously output is a relevant factor, among others, for judging an organization. However, especially in hits-based approaches, the ultimate thing we want to judge is the process that generates the outputs, in order to estimate the chance of finding a hit. For example, a cynic might say “what has ARC theory achieved so far? They wrote some nice framings of the problem, e.g. with ELK and heuristic arguments, but what have they ACTUALLY achieved?” To which my answer would be: I believe in them because I think the process they are following makes sense and there is a chance that they will find a really big-if-true result in the future. In the limit, process and results converge, but especially early on they might diverge. And I personally think that Conjecture did respond reasonably to their early results by iterating faster and looking for hits.
2. I actually think their output is better than you make it look. The entire simulators framing made a huge difference for lots of people, and writing up things that are already “known” among a handful of LLM experts is still an important contribution, though I would argue most LLM experts did not think about the details as much as Janus did. I also think that their preliminary research outputs are pretty valuable. The stuff on SVDs and sparse coding actually influenced a number of independent researchers I know (so much so that they changed their research direction to it), and I thus think it was a valuable contribution. I’d still say it was less influential than e.g. toy models of superposition or causal scrubbing, but neither of those was done by something like 3 people in two weeks.
3. (copied from my response to Rohin) Of course, VCs are interested in making money. However, especially if they are angel investors rather than institutional VCs, ideological considerations often play a large role in their investments. In this case, the VCs I’m aware of (not all of which are mentioned in the post, and I’m not sure I can share them) actually seem fairly aligned by VC standards to me. Furthermore, the way I read the critique is something like “Connor didn’t tell the VCs about the alignment plans, or neglected them in conversation”. However, my impression from conversations with (ex-)staff was that Connor was very direct about their motives to reduce x-risk. I think it’s clear that products are part of their way to address alignment, but to the best of my knowledge every VC who invested was very aware of what they were getting into. At this point, it’s really hard for me to judge, because I think that a) on priors, VCs are profit-seeking, and b) different sources said different things, some of which are mutually exclusive. I don’t have enough insight to confidently say who is right here. I’m mainly saying that your confidence surprised me given my previous discussions with staff.
4. Regarding confidence: for example, I think saying “We think there are better places to work at than Conjecture” would feel much more appropriate than “we advise against...”. Maybe that’s just me. I just felt like many statements are presented with a lot of confidence given the amount of insight you seem to have, and I would have wanted them to be a bit more hedged and less confident.
5. Sure, for many people other opportunities might be a better fit. But I’m not sure I would e.g. support the statement that a general ML engineer would learn more in general industry than at Conjecture. I also don’t know a lot about CoEm, but that would lead me to make weaker statements than recommending against it.
Thanks for engaging with my arguments. I personally think many of your criticisms hit relevant points, and a more hedged and less confident version of your post would actually have had more impact on me if I were still looking for a job. As it is currently written, it loses some persuasive force with me because I feel like you’re making overly broad, unqualified statements, which intuitively made me a bit skeptical of your true intentions. Most of me thinks that you’re trying to point out important criticisms, but there is a nagging feeling that it is a hit piece. Intuitively, I’m very averse to anything that looks like a click-bait hit piece by a journalist with a clear agenda. I’m not saying you should only consider me as your audience; I just want to describe the impression I got from the piece.
We appreciate you sharing your impression of the post. It’s definitely valuable for us to understand how the post was received, and we’ll be reflecting on it for future write-ups.
1) We agree it’s worth taking into account aspects of an organization other than their output. Part of our skepticism towards Conjecture – and we should have made this more explicit in our original post (and will be updating it) – is the limited research track record of their staff, including their leadership. By contrast, even if we accept for the sake of argument that ARC has produced limited output, Paul Christiano has a clear track record of producing useful conceptual insights (e.g. Iterated Distillation and Amplification) as well as practical advances (e.g. Deep RL From Human Preferences) prior to starting work at ARC. We’re not aware of any equally significant advances from Connor or other key staff members at Conjecture; we’d be interested to hear if you have examples of their pre-Conjecture output you find impressive.
We’re not particularly impressed by Conjecture’s process, although it’s possible we’d change our mind if we knew more about it. Maintaining high velocity in research is certainly a useful component, but hardly sufficient. The Builder/Breaker method proposed by ARC feels closer to a complete methodology. But this doesn’t feel like the crux for us: if Conjecture copied ARC’s process entirely, we’d still be much more excited about ARC (per capita). Research productivity is a product of a large number of factors, and explicit process is an important but far from decisive one.
In terms of the explicit comparison with ARC, we would like to note that ARC Theory’s team size is an order of magnitude smaller than Conjecture’s. Based on ARC’s recent hiring post, our understanding is that the theory team consists of just three individuals: Paul Christiano, Mark Xu and Jacob Hilton. If ARC had a team ten times larger and had spent close to $10 million, then we would indeed be disappointed if there were not more concrete wins.
2) Thanks for the concrete examples; this really helps tease apart our disagreement.
We are overall glad that the Simulators post was written. Our view is that it could have been much stronger had it been clearer about which claims were empirically supported and which were hypotheses. Continuing the comparison with ARC, we found ELK to be substantially clearer and a deeper insight. Admittedly, ELK is one of the outputs people in the TAIS community are most excited by, so this is a high bar.
The stuff on SVDs and sparse coding [...] was a valuable contribution. I’d still say it was less influential than e.g. toy models of superposition or causal scrubbing but neither of these were done by like 3 people in two weeks.
This sounds similar to our internal evaluation. We’re a bit confused by why “3 people in two weeks” is the relevant reference class. We’d argue the costs of Conjecture’s “misses” need to be accounted for, not just their “hits”. Redwood’s team size and budget are comparable to those of Conjecture, so if you think that causal scrubbing is more impressive than Conjecture’s other outputs, then it sounds like you agree with us that Redwood was more impressive than Conjecture (unless you think the Simulators post is head and shoulders above Redwood’s other output)?
Thanks for sharing the data point that this influenced independent researchers. That’s useful to know, and updates us positively. Are you excited by those independent researchers’ new directions? Is there any output from those researchers you’d suggest we review?
3) We remain confident in our sources regarding Conjecture’s discussions with VCs, although it’s certainly conceivable that Conjecture was more open with some VCs than others. To clarify, we are not claiming that Connor or others at Conjecture did not mention anything about their alignment plans or interest in x-risk to VCs (indeed, this would be a barely tenable position for them given their public discussion of these plans), simply that their pitch gave the impression that Conjecture was primarily focused on developing products. It’s reasonable for you to be skeptical of this if your sources at Conjecture disagree; we would be interested to know how close those staff were to the negotiations, although we understand this may not be something you can share.
4) We think your point is reasonable. We plan to reflect on this recommendation and will reply here when we have an update.
5) This certainly depends on what “general industry” refers to: a research engineer at Conjecture might well be better for ML skill-building than, say, being a software engineer at Walmart. But we would expect ML teams at top tech companies, or working with relevant professors, to be significantly better for skill-building. Generally, we expect quality of mentorship to be one of the most important components of individuals developing as researchers and engineers. The Conjecture team is stretched thin as a result of rapid scaling, and had few experienced researchers or engineers on staff in the first place. By contrast, ML teams at top tech companies will typically have a much higher fraction of senior researchers and engineers, and professors at leading universities are some of the best researchers in the field. We’d be curious to hear your case for Conjecture as a place for skill-building; without that it’s hard to identify where our main disagreement lies.