I feel like the difference between your Alpha and Beta examples and my examples comes down to the fact that in your examples Alpha has basically no control over Beta's data at all, whereas in my examples we have far more control over what data the AI learns.
I think the key crux is whether we have much more control over an AI's data sources than evolution had over ours.
If I agreed with you that we would have essentially no control over what data the AI sees, I'd be a lot more worried, but I don't think this is true. I expect future AIs, including AGIs, to be a lot more built than grown, with much of their training data very carefully controlled via synthetic data. That will happen for simple capabilities reasons, but the same control can also be used for alignment strategies.
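To make "very carefully controlled via synthetic data" concrete, here is a purely illustrative sketch (not anything from the original comment) of one way such control could look: every generated example is passed through explicit filters before it ever reaches the training set, so the designer, unlike evolution, gets a veto over each datapoint. All names below (generate_candidates, passes_quality_check, passes_values_check, and the classifier/generator objects) are hypothetical placeholders, not a real API.

```python
# Hypothetical sketch of a curated synthetic-data pipeline.
# The generator and classifier objects are assumed to expose
# .sample(prompt) and .score(example) methods; these are placeholders.

def generate_candidates(generator_model, prompt, n=8):
    """Sample n candidate training examples from a generator model."""
    return [generator_model.sample(prompt) for _ in range(n)]

def passes_quality_check(example, quality_classifier, threshold=0.9):
    """Keep only examples the quality classifier scores highly."""
    return quality_classifier.score(example) >= threshold

def passes_values_check(example, values_classifier, threshold=0.95):
    """Keep only examples consistent with the intended values/behaviour."""
    return values_classifier.score(example) >= threshold

def build_training_set(generator_model, prompts, quality_clf, values_clf):
    """Assemble a training set where every datapoint was explicitly admitted.

    The point of the sketch: the designer controls the entire data
    distribution, so nothing enters training by accident.
    """
    dataset = []
    for prompt in prompts:
        for example in generate_candidates(generator_model, prompt):
            if (passes_quality_check(example, quality_clf)
                    and passes_values_check(example, values_clf)):
                dataset.append(example)
    return dataset
```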
I think another disagreement is that I basically don't buy the evolution analogy for deep learning, and I think there are some deep disanalogies (the big one, again, is how much more control we have over data sources than evolution did, and this is only set to increase with synthetic data).
So I basically don’t expect this to happen:
I instead strongly expect that the story would just repeat. The training process (or whatever process spits out the AGI) would end up creating some extremely specific conditions in which the AGI is learning the values. Its values would then necessarily be some complicated functions over weird mixes of the abstractions-natural-to-the-dataset-it’s-trained-on, with their specifics being highly contingent on some invisible-to-us details of that process.
Pretty much all of your examples rely on Alpha being unable to control the data learned by Beta; if that assumption doesn't hold, the examples break down.