An agent will aim its capabilities toward its current goals, including by reshaping itself and its context to make itself better targeted at those goals. This creates a virtuous cycle wherein increased capabilities lock in and robustify initial alignment, so long as that initial alignment fell within a "basin of attraction", so to speak.
Yeah, I think if you nail initial alignment and have a system that has developed the instrumental drive for goal-content integrity, you're in a really good position. That's what I mean by "getting alignment to generalize in a robust manner": getting your AI system to the point where it "really *wants* to help you keep it aligned with you in a deep way".
I think a key question of inner alignment difficulty is to what extent there is a "basin of attraction", where Yudkowsky argues there's no easy basin to find, and you basically have to balance precariously on some hill in the value landscape.
I wrote a little about my confusions about when goal-content integrity might develop here.
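To make the basin-vs-knife-edge picture above a bit more concrete, here's a minimal toy sketch of the intuition (my own illustration, not anything from the discussion; the `step` and `simulate` functions and the specific dynamics are made up purely for this example): treat the system's values as a point `v` being nudged around by training and self-modification. In the "basin of attraction" picture, the aligned configuration behaves like an attractor, so small perturbations get corrected; in the "precarious balance" picture it behaves like an unstable equilibrium, so small perturbations grow.

```python
# Toy 1-D dynamical system over a "value parameter" v, with the aligned
# configuration at v = 0. Whether v = 0 is an attractor (basin) or a
# repeller (knife edge) depends on the sign of `slope`.

def step(v, slope, dt=0.1):
    """One update of v under dv/dt = slope * (0 - v)."""
    return v + dt * slope * (0.0 - v)

def simulate(v0, slope, steps=100):
    """Start from a perturbed value v0 and run the dynamics forward."""
    v = v0
    for _ in range(steps):
        v = step(v, slope)
    return v

# slope > 0: aligned values are an attractor -> perturbations shrink (basin).
print(simulate(v0=0.5, slope=1.0))   # ~1e-5: drifts back toward v = 0

# slope < 0: aligned values are a repeller -> perturbations grow (knife edge).
print(simulate(v0=0.5, slope=-1.0))  # ~7e3: drifts ever further away
```

The disagreement, on this toy framing, is roughly about whether realistic training dynamics look more like the first case (so "good enough" initial alignment self-corrects) or the second (so you have to land on the aligned configuration almost exactly).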