We don’t need to solve all of philosophy and morality; it would be sufficient for the AI system to leave us in control and respect our preferences where they are clear.
I agree that we don’t need to solve philosophy/morality if we could at least pin down things like corrigibility. But humans may understand “leaving humans in control” and “respecting human preferences” poorly enough that optimizing for our abstractions of these concepts could be unsafe. (This belief isn’t strongly held; I’m just considering some exotic scenarios where humans are technically ‘in control’ according to the specification we wrote down, yet the consequences are negative nonetheless, the normal Goodharting failure mode.)
Which of the two (or the innumerable other possibilities) happens?
Depending on the work you’re asking the AI(s) to do (e.g. automating large parts of open-ended software projects, or automating large portions of STEM work), I’d say the world-takeover / power-seeking / recursive-self-improvement type of scenarios happen, since these tasks incentivize the development of unbounded behaviors. Open-ended, project-based work doesn’t have clear deadlines, may require multiple retries, and carries a lot of uncertainty, so I can imagine unbounded behaviors like “gain more resources because that’s broadly useful under uncertainty” being strongly selected for.