Reducing, through reducing the population beforehand.
I am sorely tempted to adjust its goal. Since this is a narrow AI, it shouldn’t be smart enough to be friendly; we can’t encode the real utility function into it, even if we knew what it is. I wonder if that means it can’t be made safe, or just that we need to be careful?
Reducing, through reducing the population beforehand.
I am sorely tempted to adjust its goal. Since this is a narrow AI, it shouldn’t be smart enough to be friendly; we can’t encode the real utility function into it, even if we knew what it is. I wonder if that means it can’t be made safe, or just that we need to be careful?
Best I can tell, the lesson is to be very careful with how you code the objective.