I’ve found that use of the term catastrophe/catastrophic in discussions of SB 1047 makes it harder for me to think about the issue. The scale of the harms captured by SB 1047 has a much, much lower floor than what EAs/AIS people usually term catastrophic risk, like $0.5bn+ vs $100bn+. My view on the necessity of pre-harm enforcement, to take the lens of the Anthropic letter, is very different in each case. Similarly, while the Anthropic letter talks about the bill as focused on catastrophic risk, it also talks about “skeptics of catastrophic risk”. Surely this is about eg not buying that AI will be used to start a major pandemic, rather than about whether eg there’ll be an increase in the number of hospital systems subject to ransomware attacks because of AI.
Perhaps when you share the post with friends you could quote some of the bits focused on progressive concerns?
a dramatic hardware shift like that likely means a significant portion of the progress made on topics like interpretability and alignment up until that shift goes out the window.
Why is this the case?
The weights could be stolen as soon as the model is trained though
unless the nondisparagement provision was mutual
This could be true for most cases though
That seems like a valuable argument. It might be worth updating the wording under premise 2 to clarify this? To me it reads as saying that the configuration, rather than the aim, of OpenAI was the major red flag.
My impression is that post-board drama, they’ve de-emphasised the non-profit messaging. Also in a more recent interview Sam said basically ‘well I guess it turns out the board can’t fire me’ and that in the long term there should be democratic governance of the company. So I don’t think it’s true that #8-10 are (still) being pushed simultaneously with the others.
I also haven’t seen anything that struck me as communicating #3 or #11, though I agree it would be in OpenAI’s interest to say those things. Can you say more about where you are seeing that?
So the argument is that Open Phil should only give large sums of money to (democratic) governments? That seems too overpowered for the OpenAI case.
In that case OP’s argument would be saying that donors shouldn’t give large sums of money to any sort of group of people, which is a much bolder claim
I was more focused on the ‘company’ part. To my knowledge there is no such thing as a non-profit company?
Noting that while Sam describes the provision as being about “potential equity cancellation”, the actual wording says ‘shall be cancelled’ not ‘may be cancelled’, as per this tweet from Kelsey Piper: https://x.com/KelseyTuoc/status/1791584341669396560
Instances in history in which private companies (or any individual humans) have intentionally turned down huge profits and power are the exception, not the rule.
OpenAI wasn’t a private company (ie for-profit) at the time of the OP grant though.
Is that not what Altman is referring to when he talks about vested equity? My understanding was employees had no other form of equity besides PPUs, in which case he’s talking non-misleadingly about the non-narrow case of vested PPUs, ie the thing people were alarmed about, right?
What do you mean by pseudo-equity?
Did OpenAI have the for-profit element at that time?
Sure, but you weren’t providing reasons to not believe the argument, or reasons why your interpretation is at least as plausible
Zvi has already addressed this, arguing that if (D) were equivalent to ‘has a similar cost to >=$500m in harm’, then there would be no need for (B) and (C) detailing specific harms; you could just have a version of (D) that mentions the $500m. That indicates the $500m alone isn’t a sufficient condition. I find that fairly persuasive, though it would be good to hear a lawyer’s perspective.
Why does that mean you shouldn’t post it?
I think calling this a strategic meaning is not that helpful. I would say the issue is that “isolated” is underspecified. It’s not like there was a fully fleshed-out account that was then backtracked on; it’s more like: what was the isolation? Were they isolated from literally everyone who wasn’t Kat, Emerson or Drew, or were they isolated/pushed to isolate more than is healthy from people they didn’t need to have their ‘career face’ on for? We now know the latter was meant, but either was plausible.
Being homeless sucks; it’s pretty legitimate to want to avoid that