That was actually the subtext of my comment, so thanks for making it explicit.
To be tediously clear, I think this is a good idea, but only because I take imminent AI extinction doom super-seriously and think just about anything that could plausibly help might as well be tried.
As a general policy, though, I do feel that handing all the money you can lay your hands on to the bad guys, in proportion to how bad they've historically been, is not "establishing correct incentives going forward". (Am I supposed to add "according to my model" here? I'm not terribly hip to the crazy rhythm of the current streets.)
Not such an issue if there is no "going forward", of course. An interesting test case for Logical Decision Theory, perhaps?