MIRI should’ve been an attempt to keep AGI out of the hands of the state
Eliezer several times expressed the view that it’s a mistake to focus too much on whether “good” or “bad” people are in charge of AGI development. Good people with a mistaken methodology can still produce a “bad” AI, and a sufficiently robust methodology (e.g., one that aligns the AI with an idealized abstract human rather than with a concrete individual) would still produce a “good” AI from otherwise unpromising circumstances.
You write:
Eliezer several times expressed the view that it’s a mistake to focus too much on whether “good” or “bad” people are in charge of AGI development. Good people with a mistaken methodology can still produce a “bad” AI, and a sufficiently robust methodology (e.g., one that aligns the AI with an idealized abstract human rather than with a concrete individual) would still produce a “good” AI from otherwise unpromising circumstances.
Can you link to three examples?
An unequivocal example from 2015: “You can’t take for granted that good people build good AIs and bad people build bad AIs.”
A position paper from 2004 (Coherent Extrapolated Volition). See the whole section “Avoid creating a motive for modern-day humans to fight over the initial dynamic.”
Tweets from 2020.
That’s an artificially narrow framing. You can have any of four combinations:
a good person with good methodology
a good person with bad methodology
a bad person with good methodology
a bad person with bad methodology
A question to ask: when someone aligns an AGI with some approximation of “good values,” whose approximation is being used?