It’s true that if humans were reliably very ambitious, consequentialist, and power-seeking, then this would be stronger evidence that superintelligent AI tends to be ambitious and power-seeking. So, by conservation of expected evidence, the absence of that pattern in humans has to count as evidence against “superintelligent AI tends to be ambitious and power-seeking”, even if it’s not a big weight in the scales.
Current ML work is on track to produce things that are, in the ways that matter, more like “randomly sampled plans” than like “the sorts of plans a civilization of human von Neumanns would produce”, and it will do so before we’re anywhere near being able to produce the latter.[9]
We’re building “AI” in the sense of powerful general search processes (and search processes for search processes), not “AI” in the sense of friendly, roughly-human minds implemented in silicon.
Mainly from the second paragraph, I got the impression that “randomly sampled plans” referred to, or at least included, what the goal is, not just how hard you optimize for it. Anyway, I think I’m losing the thread of the discussion, so whatever.