I’m quite curious about what benefits you experienced from your two week visit… anything you can share or is it all secret and mysterious?
Not that I am considering applying. If I were, I would have to refrain from telling Eliezer (and probably Alicorn) whenever they are being silly. The freedom to speak one's mind without the need to secure approval is just too attractive to pass up! :)
Neither of these should stop you. Alicorn lives on the other side of the country from the house, and Eliezer is pretty lax about criticism (and isn’t around much, anyway).
I’m quite curious about what benefits you experienced from your two week visit… anything you can share or is it all secret and mysterious?
Perhaps the most publicly noticeable result was that I had the opportunity to write this post (and also this wiki entry) in an environment where writing Less Wrong posts was socially reinforced as a worthwhile use of one’s time.
Then, of course, there are the benefits discussed above—those that one would automatically get from spending time living in a high-IQ environment. In some ways, in fact, it was indeed like a two-week-long Less Wrong meetup.
I had the opportunity to learn specific information about subjects relating to artificial intelligence and existential risk (and the beliefs of certain people about these subjects), which resulted in some updating of my beliefs about these subjects; as well as the opportunity to participate in rationality training exercises.
It was also nice to become personally acquainted with some of the “important people” on LW, such as Anna Salamon, Kaj Sotala, Nick Tarleton, Mike Blume, and Alicorn (who did indeed go by that name around SIAI!); as well as a number of other folks at SIAI who do very important work but don’t post as much here.
Conversations were frequent and very stimulating. (Kaj Sotala wasn’t lying about Michael Vassar.)
As a result of having done this, I am now “in the network”, which will tend to facilitate any specific contributions to existential risk reduction that I might be able to make apart from my basic strategy of “become as high-status/high-value as possible in the field(s) I most enjoy working in, and transfer some of that value via money to existential risk reduction”.
Not that I am considering applying. If I were, I would have to refrain from telling Eliezer (and probably Alicorn) whenever they are being silly.
Eliezer is uninvolved with the Visiting Fellows program, and I doubt he even had any idea that I was there. Nor is Alicorn currently there, as I understand.
I hear that the secret to becoming a fellow is to show rigorously that the probability that one of them is being silly is greater than 1/2. Just a silly math test.
Neither of these should stop you. Alicorn lives on the other side of the country from the house, and Eliezer is pretty lax about criticism (and isn’t around much, anyway).
Oh, there’s the thing with being on the other side of the world too. ;)
They pay for airfare, you know...
Damn you and your shooting down all my excuses! ;)
Not that I’d let them pay for my airfare anyway. I would only do it if I could pay them for the experience.
Fortunately, you appear to be able to rationalize more of them quite easily. ;)