Well, since I’m procrastinating on important things, I might as well use this time to introduce myself. Structured procrastination for the win!
Hello everyone. I have been poking around on Less Wrong, Slate Star Codex, and related places for around three to four years now, but mostly lurking. I have gradually become more and more taken with the risks of artificial intelligence orders of magnitude smarter than us Homo sapiens. In that respect, I’m glad that the topic of superintelligent AI has taken off in mainstream media and academia. EY isn’t the lonely crank with no real academic affiliation anymore, a nerdy Cassandra of his time spewing nonsense on the internet. From what I gather, status games are such a cliché here that it’s not cool to dwell on them, but with endorsements by people like Hawking and Gates, people can’t easily dismiss these ideas anymore. I feel this is a massively good thing, because with these ideas up in the air, so to speak, even intelligent AI researchers who disagree on these topics will probably not accidentally build an AI that turns us all into paper clips to maximize happiness. That is not to say that numerous other failure pathways don’t exist. Maybe someday notions such as I. J. Good’s idea of a self-improving intelligence feedback loop will make their way into standard AI textbooks. You don’t have to join the LW sub-community to understand the risks, nor do you have to read through the Sequences and all that. IMO, the greatest good Less Wrong has done for the world so far is to propagate and legitimize these concerns. I’m aware of the other key ideas in the memespace of Less Wrong (rationality and all that), but it’s hard enough to get the general public and other academics and researchers to take superintelligent AI seriously as an existential risk without all sorts of other ideas outside their inference bubble.
Intellectually, my background is in physics (currently studying, along with the requisite math you pick up from physics). I have been reading philosophy for a ridiculously long time (around seven years now), though only as a part-time hobby. Probably like most people here, I have an incurable addiction to the internet. I also read a lot, across varied intellectual fields, and I read a lot of fiction, anything from Milton to YA books. Science fiction and fantasy are probably responsible for why I find transhumanist notions so easy to swallow: read enough Peter F. Hamilton and Greg Egan, and things like living forever and superintelligent machines are downright tame in comparison. I like every academic subject (gender studies doesn’t count). Neuroscience, economics, computer science... you name it. Even “fluffy” stuff like sociology, psychology, and literature. I am doomed to be caught between the two cultures (C. P. Snow).
As for rationality and cognitive biases: while the scientific evidence wasn’t in until fairly recently, Hume anticipated much of it centuries ago. Now, I know Less Wrong isn’t very impressed with a priori armchair philosophizing without a scrap of evidence, but I have to disagree somewhat: correct theories are much easier to build off empirical data, and deducing the correct theory of a natural phenomenon without any experimental evidence is much, much harder. Hume had a huge possibility space to search, while modern psychologists and cognitive scientists have a much smaller one. Let’s not forget Hume’s most famous quote: “If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.” I honestly can’t say I was surprised by the framework presented in the Sequences the way most people were, but it sure is nice to find a community thinking along the same lines I do! A lot of the tactics for applying these ideas so I can overcome these heuristics were very welcome. My favorite aspect of LW has to be that people have an agreed-upon framework for discussing things, so in theory we can come to agreement. Debating is one of my favorite things to do, and frankly, most people are not worth arguing with; it’s a waste of time.
I’m interested in contributing to the study of Friendly AI and have some ideas about it, so I might post things I’m thinking about here in the future. Please, please feel free to criticize such posts to your heart’s content. I appreciate feedback much more than I care about slights or insults, so feel free to be rude. My ideas are probably old or wrong anyway; I haven’t had time to look through all the literature presented here or elsewhere.
Lastly, I should mention I have been active in the Less Wrong IRC channel; if you want to find me, I’m there. Also, if lukeprog sees this: I really liked the literature summaries you post sometimes. They have been a huge help and saved me a ton of time in my own exploration of the scientific literature.