I find it implausible that it is harder to build an AI that doesn’t kill or enslave everybody, than to build an AI that does enslave everybody, in a way that wiser beings than us would agree was beneficial.
Why?
The SIAI claims it wants to build an AI that asks what wiser beings than us would want (where "wiser" is anchored to our values as they stand right before the AI gains the ability to alter our brains). They say it would look at you just as much as it looks at Eliezer in defining "wise". And we don't actually know it would "enslave everybody". You think it would because you think a superhumanly bright AI that cares only about 'wisdom' so defined would do so, and that seems unwise to you. What do you mean by "wiser" that makes this position logically coherent?
Those considerations obviously ignore the risk of bugs or errors in execution. But to this layman, bugs seem far more likely to kill us or simply break the AI than to hit that sweet spot (sour spot?) which keeps us alive in a way we don't want. That may or may not address your actual point, but it certainly addresses the quote.