I missed this being compiled and posted here when it came out! I typed up a summary [of the Twitter thread] and posted it to Substack. I’ll post it here.
“It’s easier to build foomy agent-type-things than nonfoomy ones. If you don’t trust in the logical arguments for this [foomy agents are the computationally cheapest utility satisficers for most conceivable nontrivial local-utility-satisfaction tasks], the evidence for this is all around us, in the form of America-shaped-things, technology, and ‘greed’ having eaten the world despite not starting off very high-prevalence in humanity’s cultural repertoire.
WITH THE TWIST that: while America-shaped-things, technology, and ‘greed’ have worked out great for us and work out great in textbook economics, textbook economics fails to account for the physical contingency of weaker economic participants [such as horses in 1920 and Native Americans in 1492] on the benevolence of stronger economic participants, who found their raw resources more valuable than their labor.”
As I say on Substack, this post goes hard. I now have something better than List of Lethalities to link people who genuinely aren’t yet convinced that the alignment problem is hard.