Location: Washington DC, USA
Education: BS math (writing minor), PhD comp sci/artificial intelligence (cog sci/linguistics minors), MS bioinformatics
Jobs held (chronological): robot programmer in a failed startup, cryptologist, AI TA, lecturer, virtual robot programmer in a failed startup, distributed simulation project manager, AI research project manager, computer network security research, patent examiner, founder of failed AIish startup, computational linguist, bioinformatics engineer
I was a serious fundamentalist evangelical until about age 20. Factors that led me to deconvert included Bible study, successful simulations of evolution, and observation of radical cognitive biases in other Christians.
I was active on the Extropian mailing list, and published a couple of things in Extropy, about 1991-1995.
Like EY, I think AI is inevitable and is the most important problem facing us. I have a lot of reservations about his plans, to the point of seeing his FAI as UFAI (don’t ask in this thread). I think the most difficult problem isn’t developing AI, or even making it friendly, but figuring out what kinds of possible universes we should aim for; and we have only a limited window in which we have large leverage over the future.
I prioritize slowing aging over work on AI. I expect that partial cures for aging will be developed 10-20 years before they are approved in the US, and so I want to be in a position to take published research and apply it to myself when the time comes.
I believe that rationality is instrumental, and I repeatedly dissent when people on LW make what I see as ideological claims about rationality, such as defining it as that which wins, or presenting it as a value-system or a lifestyle. There’s room for that too; I mainly want people to recognize that being rational doesn’t require any of that.