Another possible source is the Computer Language Shootout. It doesn't publish clear historical measures (though perhaps some of the existing research on the shootout does?), so a workaround might be to recover the submission times of individual programs/versions and use them to construct a historical time-series. This would usefully cover both the improving performance of various programming languages and the ability of programmers in each language to keep improving their programs against the fixed target of a benchmark.
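As a minimal sketch of how such a time-series could be assembled, assuming the submission dates could be scraped into a hypothetical CSV (`shootout_submissions.csv` with columns `language`, `benchmark`, `submitted`, `runtime_s`; none of these names come from the shootout itself), one could track the running-best runtime per language/benchmark pair over time:

```python
import csv
from collections import defaultdict
from datetime import datetime

# Hypothetical input format: one row per submitted program version, e.g.
# language,benchmark,submitted,runtime_s
# C,nbody,2006-03-14,9.82
# Haskell,nbody,2007-11-02,21.40
def best_runtime_series(path):
    """Per (language, benchmark), build a time-series of the best
    (lowest) runtime achieved as of each submission date."""
    rows = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            rows.append((
                row["language"],
                row["benchmark"],
                datetime.strptime(row["submitted"], "%Y-%m-%d"),
                float(row["runtime_s"]),
            ))
    rows.sort(key=lambda r: r[2])  # chronological order

    best = {}                      # running minimum per (language, benchmark)
    series = defaultdict(list)     # (language, benchmark) -> [(date, best-so-far)]
    for lang, bench, date, runtime in rows:
        key = (lang, bench)
        if key not in best or runtime < best[key]:
            best[key] = runtime
        series[key].append((date, best[key]))
    return series

if __name__ == "__main__":
    for (lang, bench), points in best_runtime_series("shootout_submissions.csv").items():
        print(lang, bench, points[-1])  # latest best runtime per pair
```

Plotting each pair's series would then show software progress against a fixed benchmark target, independent of hardware improvements.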
(But it turns out that http://pcdb.santafe.edu/process_view.php only measures hardware progress.)