Motivation: I’m asking this question because one thing I notice is that there’s an unstated assumption that AGI/ASI will be a huge deal, and how big a deal it turns out to be would change virtually everything about how LW works, depending on the answer. I’d really like to know why LWers hold that AGI/ASI will be a big deal.
This is confusing to me.
I’ve read lots of posts on here about why AGI/AI would be a huge deal, and the ones I’m remembering seemed to do a good job at unpacking their assumptions (or at least a better job than I would do by default). It seems to me like those assumptions have been stated and explored at great length, and I’m wondering how we’ve ended up looking at the same site and getting such different impressions.
(Holden’s posts seem pretty good at laying out a bunch of things and explicitly tagging the assumptions as assumptions, as an example.)
Although that… doesn’t feel fair on my part?
I’ve spent some time at the AI Risk for Computer Scientists workshops, and I might have things I learned from those and things I’ve learned from LessWrong mixed up in my brain. Or maybe they prepared me to understand and engage with the LW content in ways that I otherwise wouldn’t have stumbled onto?
There are a lot of words on this site—and some really long posts. I’ve been browsing them pretty regularly for 4+ years now, and that doesn’t seem like a burden I’d want to place on someone in order to listen to them. I’m sure I’m missing stuff that the longer-term folks have soaked into their bones.
Maybe there’s something like a “y’all should put more effort into collation and summary of your points if you want people to engage” point that falls out of this? Or something about “have y’all created an in-group, and to what extent is that intentional/helpful-in-cases vs accidental?”
Maybe there’s something like a “y’all should put more effort into collation and summary of your points if you want people to engage” point that falls out of this?
Yes, this might be useful.