Hello, I came across this forum while reading an AI research paper where the authors quoted from Yudkowsky’s “Hidden Complexity of Wishes.” The linked source brought me here, and I’ve been reading some really exceptional articles ever since.
By way of introduction, I’m working on the third edition of my book “Inside Cyber Warfare,” and I’ve spent the last few months buried in AI research, specifically in the areas of safety and security. I view AGI as a serious threat to our future for two reasons. One, neither safety nor security has ever been prioritized over profits by corporations, dating all the way back to the start of the Industrial Revolution. And two, regulation has only ever come to an industry after a catastrophe or a significant loss of life has occurred, not before.
I look forward to reading more of the content here, and engaging in what I hope will be many fruitful and enriching discussions with LessWrong’s members.
Hi Jeffrey! Glad to see more cybersecurity people taking the issue seriously.
Just so you know, if you want to introduce someone to AGI risk, the best way I know of is to have them read Scott Alexander’s Superintelligence FAQ. This will come in handy down the line.
Thanks, Trevor. I’ve bookmarked that link. Just yesterday I started creating a short list of terms for my readers, so it will come in handy.
@Raemon, is the Superintelligence FAQ helpful as a short list of terms for Caruso’s readers?
Welcome! Hope you have a good time!