I’m quite curious about what benefits you experienced from your two-week visit… anything you can share, or is it all secret and mysterious?
Perhaps the most publicly noticeable result was that I had the opportunity to write this post (and also this wiki entry) in an environment where writing Less Wrong posts was socially reinforced as a worthwhile use of one’s time.
Then, of course, there are the benefits discussed above—those that one would automatically get from spending time living in a high-IQ environment. In some ways, in fact, it was indeed like a two-week-long Less Wrong meetup.
I had the opportunity to learn specific information about subjects relating to artificial intelligence and existential risk (and about what certain people believe on those subjects), which led me to update some of my own beliefs; I also had the opportunity to participate in rationality training exercises.
It was also nice to become personally acquainted with some of the “important people” on LW, such as Anna Salamon, Kaj Sotala, Nick Tarleton, Mike Blume, and Alicorn (who did indeed go by that name around SIAI!); as well as a number of other folks at SIAI who do very important work but don’t post as much here.
Conversations were frequent and very stimulating. (Kaj Sotala wasn’t lying about Michael Vassar.)
As a result of having done this, I am now “in the network”, which will tend to facilitate any specific contributions to existential risk reduction that I might be able to make apart from my basic strategy of “become as high-status/high-value as possible in the field(s) I most enjoy working in, and transfer some of that value via money to existential risk reduction”.
Not that I am considering applying. If I were, I would have to refrain from telling Eliezer (and probably Alicorn) whenever they were being silly.
Eliezer is uninvolved with the Visiting Fellows program, and I doubt he even had any idea that I was there. Nor, as I understand it, is Alicorn currently there.