Oh, that’s so unfair! I’ve been avoiding working in AI for years, on account of worrying that I might have a good enough idea to bring the apocalypse forward a day. One day out of 8 billion lives makes me Worse Than Hitler(TM) by at least an order of magnitude.
And I’ve even been avoiding blogging about basic reinforcement learning for fear of inspiring people!
Where’s my million?!
Also hurry up, I hardly have any use for money, so it will take ages to spend a million dollars even if I get creative, and it doesn’t look like we have very long.
This actually brings up an important consideration. It would be bad to incentivize AGI research by paying the people who work in it huge sums, in the same way that paying slaveowners to free their slaves is a poor abolitionist strategy. In reality people aren’t Homo Economicus, so I think we could work around it, but we’d need to be careful.
? This was literally what the UK did to free their slaves, and (iiuc) historians considered it more successful than (e.g.) the US strategy, which led to abolition a generation later and also involved a civil war.
That was after the abolition of slavery by law, and long after the Royal Navy’s interdiction of the transatlantic slave trade. I wonder whether it would have been such a good strategy if those compensated could have re-invested their money.
This is a really good point! Though I thought the subtext of the original comment was about the incentives rather than the causal benefits/harms after receiving the money.
That was actually the subtext of my comment, so thanks for making it explicit.
To be tediously clear, I think this is a good idea, but only because I take imminent AI extinction doom super-seriously and think just about anything that could plausibly help might as well be tried.
As a general policy, I do feel that giving all the money you can lay your hands on to the bad guys, in proportion to how bad they’ve historically been, is not “establishing correct incentives going forward”. (Am I supposed to add “according to my model” here? I’m not terribly hip to the crazy rhythm of the current streets.)
Not such an issue if there is no going forward. An interesting test case for Logical Decision Theory, perhaps?
Well, Russia did pay to free the serfs, and it would have worked if they hadn’t designed it so that the former serfs had to pay back the debt themselves. Similarly, Ireland paid off the big landholders in a series of acts from 1870 to 1909, which created a new class of small landholders. In fact, this sort of thing historically seems to work when it concerns purchasing property rights.
The analogy is imperfect for knowledge workers, though. They don’t own the knowledge in the same way. Perhaps the best way to do this is, as previously mentioned, to get a sizable portion of currently accomplished researchers to spend time on the problem by purchasing their time.