If one's goal is to minimize the harm per animal conditional on it existing, and one believes that ASI is within reach, the correct focus would seem to be to ignore alignment and focus on capabilities.
IMO aligned AI reduces suffering even more than unaligned AI, because it can pay alien civilizations (e.g., the Babyeaters) not to do things we'd consider large-scale suffering (in exchange for some of our lightcone). So even people closer to the negative-utilitarian side should want to solve alignment.
Agreed overall.