I read the beginning of the debate and skimmed the end, so I might have missed something, but it really feels like it’s missing a discussion about the economic incentives of AI developers.
Like, the debaters talk about saving the world, and that’s cool, but… Let’s assume “saving the world” (defined as “halting aging and world hunger, having reliable cryogenics, colonizing other planets, and having enough food and resources to accommodate the growing pool of humans”) takes 200 years. After that, you get a post-scarcity society, but in the meantime (or at least for the first 100 years), you still have a capitalist society with competing interests driven by profit and self-preservation.
During those 100 years, what are the incentives for AI companies to use the methods discussed here instead of “whatever the hell gives the best results fastest for the cheapest price” (what the debaters call “the default path”)?
Especially since the differences between training methods are extremely technical and nuanced, any regulating entity (governments, the EU, even Google’s safety team) would have a hard time establishing specific rules.
Both Scott and I are aware of the existence of these incentives, and probably roughly agree on their strength, so I didn’t bring them up.
(The purpose of this debate was for Scott and me to resolve our disagreements, not to present the relevant considerations / argue for our particular sides / explain things for an audience.)
I’d imagine Scott would say something along the lines of “yeah, the incentives are there and hard to overcome, but we are doomed-by-default so any plan is going to involve some hard step”.