Persuade Grantmaking Organizations that Certain Lines of Research are Dangerous
I’ve never been involved in academia, but my vague idea of how things work is that researchers apply for grants from organizations that sponsor their research. If these organizations could be persuaded to change the criteria they use to assign grants, the progress of AI research could be shaped.
Assuming my model of how academia works is correct (can someone comment on this?), persuading grantmakers could be a better use of time than trying to persuade researchers directly for a few reasons:
There are probably many researchers for each grantmaker, so personal communication with individual grantmakers gives greater leverage.
Grantmakers probably have less personal investment in the research they judge, which means less motivated cognition pushing them to approve it.
Grantmakers are more likely to be interested in what will be beneficial for society as a whole, whereas individual researchers may be more motivated by gaining status or by solving problems for their own sake.
For example, consider research into (a) how to make an AI relate its computational structure to its physical substrate (AIXI does not, and consequently fails to self-preserve), (b) how to prevent wireheading in an AI that does relate its computational structure to its substrate, and (c) how to define real-world goals for an AI to pursue. Currently, AIs are just mathematics that makes some abstract variables satisfy abstract properties; those variables may be described in real-world terms in a paper’s annotations, but the programs themselves implement no correspondence to the real world (a sketch below illustrates this).
Such research is clearly dangerous, and also unnecessary for the creation of practically useful AIs (so it is not done at large; perhaps it is only done by SI, in which case persuading grantmaking organizations not to give any money to SI may do the trick).
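To make point (c) concrete, here is a minimal sketch (my own illustration; the variable names and the paperclip framing are hypothetical, not taken from any actual paper). The “AI” is pure mathematics: it drives an abstract variable toward an abstract property, and the claim that the variable means anything in the real world lives only in a comment.

```python
# Minimal sketch of point (c): an "AI" as pure mathematics.
# The claim that x refers to anything in the real world exists only
# in the annotation below; the program has no sensors or actuators
# that would implement that correspondence.

def utility(x: float) -> float:
    # Annotation (hypothetical): "x is the number of paperclips produced."
    # Nothing in the code ties x to actual paperclips.
    return -(x - 100.0) ** 2

def optimize(steps: int = 1000, lr: float = 0.01) -> float:
    """Gradient ascent on the abstract variable x."""
    x = 0.0
    for _ in range(steps):
        grad = -2.0 * (x - 100.0)  # d(utility)/dx
        x += lr * grad
    return x

if __name__ == "__main__":
    # Converges to ~100.0: an abstract variable now satisfies an
    # abstract property, and nothing in the world has changed.
    print(optimize())
```

Replacing that comment with actual sensing and actuation is precisely what research of type (c) would supply, which is why it is the dangerous part.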