I very much agree with human flourishing as the main value I most want AI technologies to pursue and be used to pursue.
In that framing, my key claim is that in practice no area of purely technical AI research — including “safety” and/or “alignment” research — can be adequately checked for whether it will help or hinder human flourishing, without a social model of how the resulting technologies will be used by individuals, businesses, governments, etc.
And we don’t have good social models of technology for really any technology, even retrospectively. So AI is certainly one we are not going to align with human flourishing in advance. When it comes to human flourishing, the humanizing of technologies takes a lot of time. Eventually we will get there, but it’s a process that requires a lot of individual actors making choices and “feature requests” from the world — features that promote human flourishing.
Are you referring to a Science of Technological Progress à la https://www.theatlantic.com/science/archive/2019/07/we-need-new-science-progress/594946 ?
What is your take on the processes for humanizing technologies, and what sources/research are available on such phenomena?
I would not be surprised if lurking in the background of my thought is Tyler Cowen. He’s a huge influence on me. But I was thinking of specific examples. I don’t know of a good general history of “humanizing”.
What I had explicitly in mind was the historical development of automobile safety: seatbelts and airbags. The history of their invention, innovation, deployment, and legal mandating is long and varied.
How long did it take between the discovery of damaging chlorofluorocarbons and their demise? Or for asbestos and its abatement—how much does society pay for this process? What’s the delta between climate change research and renewables investment?
Essentially, many an externality can be internalized once it is named, attention is drawn to it, and its costs are realized.