I finally noticed your anti-doom post. Mostly you seem to be skeptical about the specific idea of the single superintelligence that rapidly bootstraps its way to control of the world. The complexity and uncertainty of real life means that a competitive pluralism will be maintained.
But even if that’s so, I don’t see anything in your outlook which implies that such a world will be friendly to human beings. If people are fighting for their lives under conditions of AI-empowered social Darwinism, or cowering under the umbrella of AI superpowers that are constantly chipping away at each other, I doubt many people are going to be saying, oh those foolish rationalists of the 2010s who thought it was all going to be over in an instant.
Any scenario in which AIs have autonomy, general intelligence, and a need to compete, just seems highly unstable from the perspective of all-natural unaugmented human beings remaining relevant.
Doom is doom, dystopia is dystopia.
I guess I will break my recently self-imposed rule of not talking about this anymore.
I can certainly envision a future where multiple powerful AGIs fight against each other and are used as weapons; some might be rogue AGIs, while others might be in the service of human-controlled institutions (such as nation states). To put it more clearly: I have trouble imagining a future where something along these lines does NOT end up happening.
But this is NOT what Eliezer is saying. Eliezer is saying:
The alignment problem has to be solved ON THE FIRST TRY, because once you create this AGI we are dead in a matter of days (maybe weeks or months; it does not matter). If someone thinks that Eliezer is saying something else, I think they are not listening properly. Eliezer may have many flaws, but lack of clarity is not one of them.
In general, I think this is a textbook example of the motte-and-bailey fallacy. The motte is: AGI can be dangerous, AGI will kill people, AGI will be very powerful. The bailey is: AGI creation means the imminent destruction of all human life, and therefore we need to stop all development now.
I never disputed the motte. I do agree with that.
I would certainly appreciate knowing the reason for the downvotes.
FYI, I upvoted your most recent comment, but downvoted your previous few in this thread. Your most recent comment did a good job of spelling out your position and gesturing at your crux. My guess is that other people were just tired of the discussion and downvoted partly to make the whole discussion go away.