The part where David Silver said it's okay to have AGIs with different goals worried me, because it sounded like he wasn't thinking at all about the risk from misaligned AI. It seemed like he was saying we should create general intelligence regardless of its goals and values, just because it's intelligent.