From my beginner's understanding, the two things you are comparing are not mutually exclusive.
There is currently work being done on both inner alignment and outer alignment. Inner alignment is more focused on making sure an AI doesn't coincidentally optimize humanity out of existence because it misinterprets, or we fail to clearly specify, the goals we give it; outer alignment is more focused on making sure the goals we do teach it are actually aligned with human values in the first place.
Different big names focus on different parts/subparts of the above (with crossover as well).