Hm. I think modern academic philosophy is a raging shitshow, but I thought philosophy on LW was quite good. I wasn't a regular LW user until a couple of years ago, and the philosophical takes here, particularly Eliezer's, converge with my own conclusions after half a lifetime of looking at philosophical questions through the lens of science, particularly neuroscience and psychology.
So: what do you see as the limitations in LW/Yudkowskian philosophy? Perhaps I’ve overlooked them.
I am currently skeptical that we need better philosophy for good AGI outcomes, as opposed to better practical work on technical AGI alignment (a category that barely exists) and PR work to put the likely personal-intent-aligned AGI into the hands of people who give half a crap about understanding or implementing ethics. Deciding on the long-term future will be a matter of long contemplation if we get AGI into good hands. We should decide whether that logic is right, and if so, plan the victory party after we've won the war.
I did read your metaphilosophy post and remain unconvinced that there’s something big the rest of us are missing.
I'm happy to be corrected (I love becoming less wrong, and I'm aware of many of my biases that might prevent it).
Here's how it currently looks to me: ethics is ultimately a matter of preference; the rest is game theory and science (including the science of human preferences). Philosophical questions boil down to scientific questions in most cases, so epistemology is, for the most part, metaphilosophy.
Change my mind! Seriously, I'll listen. It's been years since I've thought hard about philosophy.