I expect to be quite busy with my Master’s thesis, which is on a completely unrelated subject, but I would be interested in at least discussing, and possibly also co-authoring papers on, at least the following topics:
The social feasibility of reduced impact AI and Oracle AI. In the subsections of section 5.1 of Responses to Catastrophic AGI Risk, we argued that there would be a constant and strong incentive for anyone with an Oracle AI to turn it into an active one and give it more power, so such systems would be unlikely to be voluntarily reined in. If that argument is correct, then we would need regulation to do the reining in, but that has its own challenges.
The future of surveillance. I’m generally rather concerned with the negative sides of surveillance, but I also acknowledge that the current trend is a continual increase in the amount of surveillance, and we might just be forced to make the best of it. There is a lot of bad stuff that currently goes on undetected that could potentially be avoided with more comprehensive surveillance, though it would also probably kill a lot of completely harmless stuff. There is also the potential of using surveillance to stop x-risks and the like. It seems like a comprehensive analysis of the exact function and (positive and negative) effects of privacy is something we’d need in order to properly evaluate various surveillance scenarios.
The Turing test. I worked on a draft of a summary paper on measures of general intelligence last year, but we never got around to finishing it. I e-mailed you a link to it.
The problems with CEV / problems with total utilitarianism. I think that one of the most important questions relating to both CEV and utilitarianism is figuring out what exactly human preferences are. And it looks like psychology and neuroscience are starting to produce some very interesting hints about that. Luke posted about the possibility of humans having a hidden utility function, and in my review of Hirstein’s Mindmelding book, I mentioned that
As the above discussion hopefully shows, some of our preferences are implicit in our automatic habits (the things that we show we value with our daily routines), some in the preprocessing of sensory data that our brains carry out (the things and ideas that are "painted with" positive associations or feelings), and some in the configuration of our executive processes (the actions we actually end up doing in response to novel or conflicting situations). [...] This kind of a breakdown seems like very promising material for some neuroscience-aware philosopher to tackle in an attempt to figure out just what exactly preferences are; maybe someone has already done so.
Will contact you in a few days for more details.