We aren’t committed to using IDA to solve this long-run problem; IDA is free to admit ignorance or delegate to a different process.
This still requires IDA to have enough metaphilosophical competence to realize that it should admit ignorance or know which process to delegate to. (Or for the user to have enough metaphilosophical competence to realize that they should override IDA via corrigibility.)
I’d be happy to more explicitly flag that IDA is aiming at solving what I call alignment, and so we might end up with a world where, e.g., moral progress has slowed relative to other kinds of progress because we are not able to automate it, or where potentially-dangerous misunderstandings are more common.
Yes, I think it would be helpful to make it clearer what the strategic landscape will look like, under the assumption that IDA works out more or less the way you hope. That wasn’t very clear to me, hence my line of thinking/questioning in this thread.
I’m just aiming for the world where our AI is able to sit down with us and have a coherent conversation about this risk, to take reasonable actions in light of its limited abilities, and so on.
Hmm, this supposes that IDA knows the limits of its own abilities, but it’s not clear how an overseer who improvises a value learning scheme, for example, is supposed to know what its limits are, given the lack of theory behind it.
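To make this worry concrete, here is a minimal sketch (Python, purely illustrative, with hypothetical names) of the “admit ignorance or delegate” policy under discussion. Everything hinges on `estimate_competence`, and the objection above is that an overseer improvising a value learning scheme has no theory telling it how reliable that self-assessment actually is.

```python
# Illustrative sketch only: the "admit ignorance or delegate" policy discussed above.
# `estimate_competence`, `solve`, and `delegate` are hypothetical stand-ins.

COMPETENCE_THRESHOLD = 0.9  # arbitrary cutoff, assumed for illustration

def answer_or_delegate(question, solve, estimate_competence, delegate):
    """Answer directly only when self-assessed competence is high enough."""
    confidence = estimate_competence(question)  # may itself be miscalibrated
    if confidence >= COMPETENCE_THRESHOLD:
        return solve(question)
    # Otherwise admit ignorance and hand the question to a different process.
    return delegate(question)
```

If `estimate_competence` is systematically too optimistic in some domain, this policy silently degrades into answering questions it should have delegated.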
If we count these as two independent risks then it seems like there are thousands of independent risks—one for every important human ability that might fail to be translated to ML.
I guess it’s not one independent risk per human ability, but one per AI substitute for human ability. For example I think the abilities on your list (and probably most other human abilities) can be substituted by either consequentialism, applying metaphilosophy, or learning from historical data, so the independent risks are that large-scale consequentialism doesn’t work well, metaphilosophy doesn’t work well, and learning from historical data doesn’t work well. For example if large-scale consequentialism works well then that would solve making laws, governing, and anticipating problems 5 years out, so those aren’t really independent risks.
Value learning and metaphilosophy are distinguished as human abilities since they each need their own AI substitutes (and therefore constitute independent risks), and also they’re necessary for two of the main AI substitutes (namely consequentialism and applying metaphilosophy) to work, so the impact of not being competent in them seems especially high.
(The above two paragraphs may be unclear/confusing/wrong since they are fresh thinking prompted by your question. Also I’m not sure I addressed what you’re asking about because I’m not sure what your motivation for the question was.)
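As a toy restatement of the claim in the two paragraphs above (the particular mapping is my own illustration, not a settled taxonomy): the number of independent risks tracks the number of distinct AI substitutes, not the number of human abilities.

```python
# Toy restatement: risks track AI substitutes, not individual human abilities.
# The mapping below is purely illustrative and surely incomplete.

ability_to_substitute = {
    "making laws": "large-scale consequentialism",
    "governing": "large-scale consequentialism",
    "anticipating problems 5 years out": "large-scale consequentialism",
    "value learning": "AI substitute for value learning",  # needs its own substitute
    "metaphilosophy": "applying metaphilosophy",            # likewise
}

# Each distinct substitute that might fail to work well is one independent risk.
independent_risks = sorted(set(ability_to_substitute.values()))
print(independent_risks)
# ['AI substitute for value learning', 'applying metaphilosophy', 'large-scale consequentialism']
```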
For example I think the abilities on your list (and probably most other human abilities) can be substituted by either consequentialism, applying metaphilosophy, or learning from historical data, so the independent risks are that large-scale consequentialism doesn’t work well, metaphilosophy doesn’t work well, and learning from historical data doesn’t work well.
I don’t see why this is the case. Humans use lots of heuristics to make decisions in each of these domains. If AI systems don’t use those heuristics then they may do those tasks worse or take longer, even if they could rederive the same heuristics in the limit (this seems like the same situation as with your short-term concern with value learning).
This still requires IDA to have enough metaphilosophical competence to realize that it should admit ignorance or know which process to delegate to. (Or for the user to have enough metaphilosophical competence to realize that they should override IDA via corrigibility.)
I agree that “recognizing when you are wrong” may itself be a hard problem. But I don’t think you should predict a simple systematic error like being overconfident. I’m not quite sure what long-term error you have in mind, but overall it seems like if the short-term behavior works out then the long-term behavior isn’t that concerning (since reasonable short-term behavior needs to be sophisticated enough to e.g. avoid catastrophic overconfidence).
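For what it’s worth, one operational reading of avoiding catastrophic overconfidence is an ordinary calibration check, sketched below with made-up numbers; nothing here is specific to IDA.

```python
# Illustrative calibration check: compare stated confidence against empirical
# accuracy on questions whose answers later become checkable. Data is made up.

predictions = [
    # (stated confidence, turned out correct?)
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False),
]

by_bucket = {}
for conf, correct in predictions:
    by_bucket.setdefault(conf, []).append(correct)

for conf, outcomes in sorted(by_bucket.items()):
    accuracy = sum(outcomes) / len(outcomes)
    flag = "  <- overconfident" if accuracy + 0.2 < conf else ""
    print(f"stated {conf:.0%}: empirical {accuracy:.0%}{flag}")
```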
Humans use lots of heuristics to make decisions in each of these domains. If AI systems don’t use those heuristics then they may do those tasks worse or take longer, even if they could rederive the same heuristics in the limit (this seems like the same situation as with your short-term concern with value learning).
By “work well” I meant that the AI doesn’t take too long to rederive human heuristics (or equally good ones) compared to the speed of other intellectual progress. That seems hopeful because, for a lot of those abilities, there’s no reason to expect that human evolution would have optimized for them extra hard relative to other abilities (e.g., making laws for a large society is not something that would have been useful in the ancestral environment). To the extent that’s not true (perhaps for deal making, for example), that does seem like an independent risk.
I also think that, with value learning, the improvised scheme may not converge to what a human would do (or to what a human would/should converge to), so it’s also not the same situation in that regard.
I’m not quite sure what long-term error you have in mind
For example, the AI makes changes to its value learning scheme that worsen it over time, or fails to find improvements that it can be confident in, or makes the value learning better but too slowly (relative to other intellectual progress), or fails to converge to the right value learning algorithm, and it fails to realize that it’s doing these things or doesn’t know how to correct them.
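To illustrate the kind of trajectory this describes, here is a purely toy simulation (made-up numbers, no claim that real systems behave this way): each self-modification slightly worsens the scheme’s true quality, while the agent’s own estimate, the only signal it can act on, drifts upward, so the degradation never gets noticed or corrected.

```python
# Toy model of the failure mode above: true quality of the value learning
# scheme declines while the agent's self-assessment creeps upward.
import random

random.seed(0)
true_quality = 0.8       # how well the scheme actually tracks the user's values
believed_quality = 0.8   # the agent's own estimate, with no theory to ground it

for step in range(20):
    true_quality += random.uniform(-0.03, 0.01)     # changes are worse on average
    believed_quality += random.uniform(0.0, 0.01)   # self-assessment creeps upward

# The agent only ever sees `believed_quality`; the gap is invisible from the inside.
print(f"believed: {believed_quality:.2f}  actual: {true_quality:.2f}")
```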