TL;DR: We must enlist and educate professional politicians, reporters, and policymakers to talk about alignment.
This interaction between Peter Doocy and Karine Jean-Pierre (YouTube link below) is representative of how EY's Time article has been received in many circles.
I see a few broad fronts of concern around LLMs:
Privacy and the training feedback loop
Attribution, copyright, and monetization/payment considerations
Economic displacement (job losses, job gains, etc.) and the policy responses needed
Alignment
Of these, alignment is likely the most important (at least in my opinion; on this point opinions seem to genuinely vary) and has a very long intellectual effort behind it, and yet recent discourse has often framed it as unserious and hyperbolic (e.g., "doomerism").
One objection that often comes up is that ChatGPT and Bing are aligned with our values and that anyone can try them for themselves. Another objection I keep coming across is that alignment folks aren't experts in AI implementation and thus lack the insights an implementer gains from working on alignment directly as part of building these systems.
The way I see it, these and other objections don't have to be true, or even well formed, to land effectively in the public sphere of discourse; they just have to seem coherent, logical, and truthy to a lay audience.
ChatGPT's appeal is that anyone can experience it immediately and take part in judging its value to our collective lives. AI is no longer an esoteric concept (even if most people still don't think of social media feeds explicitly as AI).
Discussions for and against AI, and about the policy questions it raises, now take place in a manner accessible to all, and whether we like it or not, such discussions must comport with public norms of discourse and persuasion. That is why I think we must enlist and educate professional politicians, reporters, and policymakers to do this work.