As someone who has disagreed quite a bit with Habryka in the past, endorsed.
They are absolutely trying to solve a frankly pretty difficult problem, one where there's a lot of selection both for more conflict than is optimal and for more paranoia than is optimal, because they have to figure out whether a company or person in the AI space is being shady or outright lying, which unfortunately has a reasonable probability, while there's also a reasonable probability that they're honest but failing to communicate well.
I agree with Raemon that you can’t have your conflict theory detectors set to 0 in the AI space.
Some of those concerns are, indeed, overly paranoid, but, like, it wasn't actually reasonable to calibrate the wariness/conflict-theory-detector to zero; you have to make guesses.