Hello, I have a question. I hope someone with more knowledge can help me answer it.
There is evidence suggesting that building an AGI requires plenty of computational power (at least early on) and plenty of smart engineers/scientists. The companies with the most computational power are Google, Facebook, Microsoft and Amazon. These same companies also have some of the best engineers and scientists working for them. A recent paper by Yann LeCun titled "A Path Towards Autonomous Machine Intelligence" suggests that these companies have a vested interest in actually building an AGI. Given that these companies want to create an AGI, and given that they have the scarce resources necessary to do so, I conclude that one of these companies is likely to build an AGI.
If we agree that one of these companies is likely to build an AGI, then my question is this: is it most pragmatic for the best alignment researchers to join these companies and work on the alignment problem from the inside? They could work alongside people like LeCun and demonstrate to them that alignment is a serious problem and that solving it is in the company's long-term interest.
Assume that an independent alignment firm like Redwood or Anthropic actually succeeds in building an "alignment framework". Getting such a framework into Facebook and persuading Facebook to actually use it remains an unaddressed challenge. The fact that people like Chris Olah used to work at Google but left tells me that something crucial is missing from my model. Could someone please enlighten me?
Don't have any relevant knowledge, but isn't it a tradeoff between having some influence and actually doing alignment research? It's better for persuasion to have an alignment framework, especially if the only advantage you have as a safety-team employee is being present at the meetings where everyone discusses biases in AI systems. It would be better if the situation were just "Anthropic, but everyone listens to them", but changing things to be like that spends time you could spend solving alignment.