Also, the developers might let the AIs proceed with the research anyway even though they don’t trust that the AIs are aligned, or they might erroneously trust that the AIs are aligned due to deception. If this sounds irresponsible to you, well, welcome to Earth.