The AI Alignment Prize was a contest run from 2017 to 2019, meant to encourage better work in AI alignment research.
Over its four rounds, the alignment prize paid out $60k in total ($15k in the first round, $15k in the second, $10k in the third, and $20k in the fourth and last).
The successive rounds received 40, 37, 12 and 10 entries, and awarded prizes to 6, 5, 2 and 4 people, respectively. Participation declined toward the end of the prize's existence, as the organizers note and reflect on in the body and comments of the last post. They identified two bottlenecks to the prize's participation and impact: a) its lack of prestige, and b) the time it demanded of the organizers.
Since then, other prizes have been hosted on LessWrong, some of which can be found in the bounties and prizes tag, though older articles might not be tagged. One related prize is the ELK prize, which attracted 197 entries, selected 32 winners, and gave out $274,000 in prizes.