But if, when Eliezer finishes 1), someone else is also finishing 2), the two may be combinable to some extent.
If someone (let's say Eliezer, having been convinced by the above post to change tack) finishes 2) and no one has done 1), then an unfriendly AGI becomes far more likely.
I’m not convinced by the singularity concept, but if it’s true, Friendliness is orders of magnitude more important than just making an AGI. The difference between friendly AI and no AI is big, but the difference between unfriendly AI and friendly AI dwarfs it.
And if it’s false? Well, if it’s false, making an AGI is orders of magnitude less important than that.
This cooperation thing sounds hugely important. What we want is for the AGI community to move in a direction where the best research is FAI-compatible. How can this be accomplished?