If we could model humans as having well-defined values but being irrational in predictable ways (e.g., due to computational constraints or a limited repertoire of heuristics), then some variant of CIRL might be sufficient (along with solving certain other technical problems such as corrigibility and preventing bugs) for creating aligned AIs. I was (and still am) worried that some researchers think this is actually true, or that, by not mentioning further difficulties, they give the wrong impression to policymakers and other researchers.
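To make that first clause concrete, here is a minimal sketch of what "irrational in predictable ways" could mean in a value-learning setting: a Boltzmann-rational human model, a common assumption in the CIRL-adjacent literature rather than anything from this post. Every name and number below (`boltzmann_policy`, `beta`, the toy reward hypotheses) is an illustrative assumption.

```python
import numpy as np

# Illustrative sketch (not from the post): if human irrationality is
# predictable, here modeled as Boltzmann-rational choice with a known
# temperature beta, an observer can still recover the underlying
# values by Bayesian inference over reward hypotheses.

rng = np.random.default_rng(0)

actions = np.arange(3)                 # three options the human can choose
reward_hypotheses = np.array([         # candidate "true value" vectors
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
true_reward = reward_hypotheses[1]     # the human actually values option 1
beta = 2.0                             # known degree of (ir)rationality

def boltzmann_policy(reward, beta):
    """P(action) proportional to exp(beta * reward): noisy but predictable."""
    logits = beta * reward
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Observe noisy human choices and update a posterior over reward hypotheses.
posterior = np.ones(len(reward_hypotheses)) / len(reward_hypotheses)
for _ in range(50):
    a = rng.choice(actions, p=boltzmann_policy(true_reward, beta))
    likelihoods = np.array(
        [boltzmann_policy(r, beta)[a] for r in reward_hypotheses]
    )
    posterior *= likelihoods
    posterior /= posterior.sum()

print(posterior)  # concentrates on hypothesis 1: the values are recovered
```

Note that the inference succeeds only because a well-defined `true_reward` exists and the deviation from optimality is fully specified in advance; the worry above is that neither assumption may hold for actual humans.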
If you are already aware of the philosophical/metaphilosophical problems mentioned here, and have an approach that you think can work despite them, then it's not my intention to dampen your enthusiasm. We may differ on how much expected value we think your approach can deliver, but I don't know of another approach on which you could spend your time more productively.