I’m 22 (±0.35) years old and have been getting seriously involved with AI safety over the last few months. However, I first chanced upon LW via SSC a few years ago (directed to SSC by Guzey), when I was 19.

The generational shift concerns me because, as we lose people who’ve accumulated decades of knowledge (of which only a small fraction is available to read or watch), a lot of time may be wasted re-deriving ideas along routes that have already been explored. Of course, there’s real value in building ideas from the ground up, but there comes a point where you accept an existing framework of well-established claims and build on it. Whether or not timelines are shorter than we expect, this is a cause for concern.