Ah, I may not have gotten all the context.
If you design an AI on X-like principles, it will probably be X-like, unless something goes wrong.
And if “something goes wrong” happens with high probability, the result will probably not be X-like.
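In probabilistic terms (a sketch using my own notation, where $F$ stands for “something goes wrong” during the design):

$$P(\text{X-like}) = P(\text{X-like}\mid \neg F)\,P(\neg F) + P(\text{X-like}\mid F)\,P(F)$$

If the design works when nothing goes wrong ($P(\text{X-like}\mid \neg F)\approx 1$) and a failure means the result is not X-like ($P(\text{X-like}\mid F)\approx 0$), this collapses to $P(\text{X-like})\approx 1 - P(F)$: a high failure probability directly implies a low chance of an X-like AI.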