Hello—I’m a physician and writer working on a book about evidence in medicine. I’ve been thinking a lot about the alignment of LLMs, and I came across this article:
https://www.wired.com/story/ai-safety-institute-new-directive-america-first/
What do you all think the impact of these changes could be on the reliability of LLMs? Are any of you directly involved? Feel free to respond here or reach out to me directly at alex.morozov@evivapartners.org; if you have confidential tips, my Signal handle is alex.5757.
Thank you!
Alex