Ilya is brilliant and seems to really see the horizon of the tech, but maybe isn’t the best on the business side at seeing how to sell it.
But this is often the curse of the ethically pragmatic. The participants focus so much on the ethics that the business side only sees that conversation and misses the rather extreme pragmatism underneath it.
As an example, would superaligned CEOs in the oil industry fifty years ago have still kept their eyes only on quarterly share prices, or would they have considered the long-term costs of their choices? There are going to be trillions in damages that the world has taken on as liabilities that could have been avoided with adequate foresight and patience.
If the market ends up with two AIs, one that will burn down the house to save on this month’s heating bill and one that will care whether the house is still there to heat next month, there’s a huge selling point for the one that doesn’t burn down the house, as long as “not burning down the house” can be explained as “long-term net yield” or some other BS business language. If instead it’s presented to executives as “save on this month’s heating bill” versus “don’t unhouse my cats,” leadership is going to burn the neighborhood to the ground.
(Source: Explained new technology to C-suite decision makers at F500s for years.)
The good news is that I think the pragmatism of Ilya’s vision on superalignment is going to become clear over the next iteration or two of models, and that’s going to be before the question of models truly being uncontrollable crops up. I just hope that whatever he’s going to be keeping busy with will still allow him to help execute on superalignment when the market finally realizes “we should do this” for pragmatic reasons and not just the amorphous ethical reasons execs tend to ignore. And in the meantime, given the present pace, I think Anthropic is going to continue to lay a lot of the groundwork for what’s needed for alignment on the way to superalignment anyway.
It’s going to have to.