But this is just as true. My point is that you shouldn’t waste hope on lost causes. If you know how to make a given AGI Friendly, that’s a design for FAI. It is not the same as performing a Friendliness ritual on an AGI and hoping that the situation will somehow work out for the best. It’s basic research in a near-dead field; it’s not as though there are 50K teams with any clue. But even then it would be a better bet than the Friendliness lottery. If you convince the winner of the race that the danger is real, and get them to let your team work on Friendliness, you’ve just converted that AGI project into an FAI project, taking it out of the race. If you only get a month to think about improvements to a given AGI and haven’t figured out a workable plan by the deadline, there is no reason to call your activity “maximizing the chances of Friendliness”.