I don’t think “random” AI goals are something that will ever happen.
I think it’s much more likely that, if there are Aimability failures, they will be highly nonrandom and will push AI toward various attractors (much like how the behavior of dictators is surprisingly consistent across time, space, and ideology).