I have read around 25% of The Sequences, most of HPMOR, and a lot of LessWrong posts, along with some Daniel Kahneman. I have familiarized myself with logical fallacies and have begun learning about research methodology. I've been checking my own reasoning and beliefs for flaws and working on self-improvement for years. I have also attended LessWrong and Effective Altruism meetups, as well as a CFAR workshop afterparty.
Like many of you, I am an IT person and an atheist. I have a strong interest in effective altruism, research, self-improvement, and technology in general, and a lesser interest in other topics the community enjoys, like artificial intelligence and transhumanism. I believe it's fairly likely that the Singularity will happen within my lifetime, but I do not believe the first AGIs are likely to be friendly.
Aisarka is intended to be more of a neat handle than a claim, but I hope to live up to it.