I thought Superalignment was a positive bet by OpenAI, and I was happy when they committed to putting 20% of their current compute (at the time) towards it. I stopped thinking about that kind of approach because OAI already had competent people working on it. Several of them are now gone.
It seems increasingly likely that the entire effort will dissolve. If so, OAI has now made the business decision to invest its capital in keeping its moat in the AGI race rather than in basic safety science. This is bad and likely another early sign of what’s to come.
I think the research that was done by the Superalignment team should continue to happen outside of OpenAI and, if governments have a lot of capital to allocate, they should figure out a way to provide compute to continue those efforts. Or maybe there’s a better way forward. But I think it would be pretty bad if all the talent that went into the project never gets truly leveraged into something impactful.
Strongly agree; I’ve been thinking for a while that something like a public-private partnership involving at least the US government and the top US AI labs might be a better way to go about this. Unfortunately, recent events suggest it isn’t ideal to rely only on labs for AI safety research, and the potential scalability of automating that research should make government involvement even more promising. [Strongly] oversimplified: the labs could provide a lot of the in-house expertise, and the government could provide the incentives, public legitimacy (related: I think of a solution to aligning superintelligence as a public good), and significant financial resources.
It’s going to have to.
Ilya is brilliant and seems to really see the horizon of the tech, but maybe isn’t the best on the business side at seeing how to sell it.
But this is often the curse of the ethically pragmatic. The participants focus so much on the ethics part that the business side only sees that conversation and misses the rather extreme pragmatism.
As an example, would superaligned CEOs in the oil industry fifty years ago have still kept their eye only on quarterly share prices, or would they have considered the long-term costs of their choices? The world has taken on trillions in damages as liabilities that could have been avoided with adequate foresight and patience.
If the market ends up with two AIs, one that will burn down the house to save on this month’s heating bill and one that will care whether the house is still there to heat next month, there’s a huge selling point for the one that doesn’t burn down the house, as long as “not burning down the house” can be explained as “long-term net yield” or some other BS business language. If instead it’s presented to executives as “save on this month’s heating bill” vs. “don’t unhouse my cats,” leadership is going to burn the neighborhood to the ground.
(Source: Explained new technology to C-suite decision makers at F500s for years.)
The good news is that I think the pragmatism of Ilya’s vision on superalignment is going to become clear over the next iteration or two of models, and that’s going to be before the question of models truly being uncontrollable crops up. I just hope that whatever he’s going to be keeping busy with will allow him to still help execute on superalignment when the market finally realizes “we should do this” for pragmatic reasons, and not just for amorphous ethical reasons execs just kind of ignore. And in the meantime, I think, given the present pace, Anthropic is going to continue to lay a lot of the groundwork on what’s needed for alignment on the way to superalignment anyway.