But some of the sources you just listed are from mainstream philosophy...
Again, locating those sources was not (and even if it had been, might well not have been) a product of scholarship in mainstream philosophy. That lowers the expected usefulness of reading new, unknown material, which is an altogether different enterprise from reading material that’s already known to be useful.
Also, I’m working on some stuff regarding machine ethics for superintelligence, so I’ll be curious to find out whether you find that useful as well.
Do you mean, would I find your survey papers/book useful?
Probably not for me, but maybe useful as material for new people to study, since it’s focused on this particular problem and so could collect the best relevant material you find, depending on your standard of quality and relevance in selecting what to discuss. From what I saw of your first drafts and other articles, it will probably read more like a broad, eclectic survey than like lecture notes useful for study, which counts against that use case (but who knows).
Could catalyze conversation in academia or elsewhere, though, or work as a standard reference node for when you’re in a hurry and don’t want to dereference it (that is, cite it rather than restate its arguments).
(Compare with Chalmers’ paper, which is fine in general outline, creates a citation node, makes it possible to introduce people from a particular background to the motivation for AGI-risk-related discussion, and has already initiated discussion in academia. But it’s not useful as study material, given the available alternatives, nor does it say anything new.)
Again, locating those sources was not… a product of scholarship in mainstream philosophy...
I think we agree on this, so I’ll drop it. My original post claimed that mainstream philosophy makes useful contributions and should not be ignored, and you agree. We also agree that poring through the resources of mainstream philosophy is not the best use of pretty much anyone’s time.
As for my forthcoming work on machine ethics for superintelligence...
maybe useful as material for new people to study
Yep. I want to write short, broad, well-cited overviews of the subjects relevant to Friendly AI, something that mostly has not yet been done.
Could catalyze conversation in academia or elsewhere
Yes.
[could] work as a standard reference node for when you’re in a hurry
Right.
You’ve hit on most of the immediate goals of such work, though eventually my intention is to contribute to more of the cutting-edge work on Friendly AI, for example, on how reflective equilibrium could be programmatically implemented in CEV. But that’s getting ahead of myself. Also, it’s doubtful that such work will actually materialize, because of the whole ‘not being independently wealthy’ problem I have. Research takes time, and I’ve got rent to pay.
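As a purely illustrative aside, here is a toy sketch of what ‘programmatically implementing reflective equilibrium’ might mean: intuitive judgments about cases and a candidate principle are adjusted toward each other until neither moves. Every detail below (the linear ‘principle’, the feature encoding of cases, the trust parameter) is a hypothetical simplification invented for this sketch; none of it comes from the CEV proposal or the reflective-equilibrium literature.

```python
# Toy sketch: reflective equilibrium as an iterative fixed-point search.
# Everything here is a hypothetical simplification for illustration only:
# a "judgment" is a verdict score for a case, a case is a feature vector,
# and the "principle" is a single linear weight vector fit to judgments.

import numpy as np

def reflective_equilibrium(features, intuitions, trust=0.5, steps=1000, tol=1e-9):
    """Alternate between (a) fitting a principle to the current judgments
    and (b) revising judgments toward the principle's verdicts, while an
    anchor to the original intuitions (weight `trust`) keeps them from
    being revised away entirely. Stops at a (numerical) fixed point."""
    judgments = intuitions.astype(float).copy()
    principle = np.zeros(features.shape[1])
    for _ in range(steps):
        # (a) Fit the principle to the current judgments (least squares).
        principle, *_ = np.linalg.lstsq(features, judgments, rcond=None)
        verdicts = features @ principle
        # (b) Revise: compromise between original intuitions and verdicts.
        revised = trust * intuitions + (1 - trust) * verdicts
        if np.max(np.abs(revised - judgments)) < tol:
            break
        judgments = revised
    return principle, judgments

# Example: three cases described by two features, with initial intuitions,
# one of which cannot be captured by any linear principle ("incoherent").
cases = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
intuitions = np.array([1.0, -1.0, 0.5])
principle, revised = reflective_equilibrium(cases, intuitions)
print("principle weights:", principle)
print("revised judgments:", revised)
```

At the fixed point, the part of the initial intuitions that the principle can account for survives intact, while the incoherent residue is shrunk by the trust factor; trust=0 surrenders the intuitions to the principle entirely, and trust=1 refuses to revise them at all. That trade-off, trivial in this toy, is the philosophically loaded part.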