The main benefits of the project are presumably known to the engineer engaging in it. What I wanted to examine more closely was the harm of the project, specifically the harm arising from how it accelerates AI timelines, which the engineer was skeptical was significant. The question was whether that harm is large enough to call into question whether engaging in the project is good for the world.
Given my finding that a 400-hour ML project (I stipulated the project takes 0.2 years of FTE work) would, via its effect of shortening AI timelines, shorten the lives of existing people by around 17 years in expectation, this harm is not only non-trivial but likely dominates the expected value of engaging in the project. It works out to shortening people's lives by around 370 hours for every hour worked on the project.
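The per-hour figure follows directly from the stipulated numbers; a minimal sanity check, assuming the 400-hour project size and the ~17-year estimate from the analysis:

```python
# Sanity check of the per-hour harm figure, using the stipulated inputs:
# a 400-hour project (0.2 years of FTE work) that shortens the lives of
# existing people by ~17 years in expectation.
HOURS_PER_YEAR = 365.25 * 24      # ~8766 calendar hours in a year

project_hours = 400               # stipulated project size
life_years_lost = 17              # estimated total life-shortening, in years

hours_lost_per_hour_worked = life_years_lost * HOURS_PER_YEAR / project_hours
print(round(hours_lost_per_hour_worked))  # ~373, i.e. "around 370 hours"
```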
If someone thinks the known benefits of working on the project are being drastically underestimated as well, I'd be interested in seeing an analysis of the expected value of those benefits, and in particular am curious which benefits that person thinks are surprisingly huge. Given the lack of a safety angle to the project, I don't see what other benefit (or harm) would come close in magnitude to the harm caused via accelerating AI timelines and increasing extinction risk, but of course I would love to hear if you have any ideas.
Why didn’t you also compute the expectation this project contributes towards human flourishing?
If you only count the negative contributions, you will find that the expectation value of everything is negative.