I don’t think it’s unreasonable to distrust doom arguments for exactly this reason?
I agree that dunking on OS communities has apparently not been helpful in this regard. It seems kind of orthogonal to being power-seeking, though. Overall, I think part of the issue with AI safety is that the established actors (e.g. wide parts of CS academia) have opted out of taking a responsible stance, especially compared to how the life sciences have handled recent developments such as RNA editing. One could partially blame this on them not wanting to identify too closely with, or grant legitimacy to, the existing AI safety community at the time. However, a priori, it seems more likely that it is simply due to the different culture in CS vs. the life sciences, with the former lacking a deep culture of responsibility for its research (in particular insofar as it's connected to e.g. Silicon Valley startup culture).
The casual boosting of Sam Altman here makes me quite uncomfortable, and there are probably better examples: one could argue that his job isn't "paying" him so much as he's "taking" things through unilateral action and being a less than trustworthy actor. Other than that, this was an interesting read!
I found Ezra Vogel’s biography of Deng Xiaoping to be on a comparable level.
On a brief reading, I found this to strike a refreshingly neutral and factual tone. I think it could be quite useful as a reference point.
You mean specifically that an LLM solved it? Otherwise DeepMind's work will give you many examples. (Although there have been surprisingly few breakthroughs in math yet.)
Note that LLMs, while general, are still very weak in many important senses.
Also, it's not necessary to assume that LLMs are lying in wait to turn treacherous. Another possibility is that trained LLMs lack the mental slack to even seriously entertain the possibility of bad behavior, but that this may well change with more capable AIs.
I agree with the first sentence. I agree with the second sentence with the caveat that it’s not strong absolute evidence, but mostly applies to the given setting (which is exactly what I’m saying).
People aren’t fixed entities and the quality of their contributions can vary over time and depend on context.
That said, it also appears to me that Eliezer is probably not the most careful reasoner and indeed often comes across as (perhaps egregiously) overconfident. That doesn't mean one should begrudge people finding value in the Sequences, although it is certainly not ideal if people take them as mantras rather than as useful pointers and explainers for basic things (I didn't read them, so I might have an incorrect view here). There does appear to be some tendency to just link to a point made in the Sequences as though it were airtight, although I haven't found it too pervasive recently.
You’re describing a situational character flaw which doesn’t really have any bearing on being able to reason carefully overall.
I’m echoing other commenters somewhat, but—personally—I do not see people being down-voted simply for having different viewpoints. I’m very sympathetic to people trying to genuinely argue against “prevailing” attitudes or simply trying to foster a better general understanding. (E.g. I appreciate Matthew Barnett’s presence, even though I very much disagree with his conclusions and find him overconfident). Now, of course, the fact that I don’t notice the kind of posts you say are being down-voted may be because they are sufficiently filtered out, which indeed would be undesirable from my perspective and good to know.
When you have a role in policy or safety, it may usually be a good idea not to voice strong opinions on any given company. If you nevertheless feel compelled to do so by circumstances, it’s a big deal if you have personal incentives against that—especially if they’re not disclosed.
Might be good to estimate the date of the recommendation—as the interview where Carmack mentioned this was in 2023, a rough guess might be 2021/22?
It might not be legal reasons specifically, but some hard-to-specify mix of legal reasons/intimidation/bullying. While it’s useful to discuss specific ideas, it should be kept in mind that Altman doesn’t need to restrict his actions to any specific avenue that could be neatly classified.
I'd like to listen to something like this in principle, but the timing is really unfortunate given the further information that has since been revealed, which makes it somewhat less exciting. It would be interesting to hear how, or whether, the participants' beliefs have changed.
Have you ever written anything about why you hate the AI safety movement? I’d be quite curious to hear your perspective.
I think the best bet is to vote for a generally reasonable party. Despite their many flaws, it seems like the Green Party or the SPD are the best choices right now. (The CDU seems to be too influenced by business interests, and the current FDP is even worse.)
The alternative would be to vote for a small party with a good agenda to help signal-boost them, but I don’t know who’s around these days.
It’s not an entirely unfair characterization.
Half a year ago, I'd have guessed that OpenAI's leadership, while likely misguided, was essentially well-meaning and driven by a genuine desire to confront a difficult situation. The recent series of events has made me update significantly against the trustworthiness and general epistemic reliability of Altman and his circle. While my overall view of OpenAI's strategy hasn't really changed, the likelihood I assign to them possibly "knowing better" has gone down dramatically.
Thanks for clarifying. I do agree with the broader point that one should have a sort of radical uncertainty about (e.g.) a post-AGI world. I'm not sure I agree it's a big issue to leave that out of any given discussion, though, since it just shifts probability mass from any particular describable outcome to the big "anything can happen" area. (This might be what people mean by "Knightian uncertainty"?)