[Question] How can a layman contribute to AI Alignment efforts, given shorter timeline/doomier scenarios?
Versions of this question have probably been asked before, but MIRI at least is getting more pessimistic and I’ve seen Eliezer express multiple times that he doesn’t know what actually useful advice to give people who aren’t in the field yet.
I don’t want the world to end, but I’m only a decently intelligent person of mediocre competence. I could try to read and grok alignment research until I can have productive thoughts, but I don’t anticipate that helping (though I will probably start doing more reading anyway; I’m tentatively planning to try reading Jaynes). Should I go into some kind of advocacy? I don’t know how I would do productive work there either.
I would guess there are others in a situation roughly similar to mine. Does anyone have ideas?