In humans, there’s a lot of #2 behind our cooperative ability (even if the result looks a lot like #1). I don’t know how universal that will be, but it seems likely to be computationally cheaper at some margin to encode #2 than to calculate and prove #1.
In my view, “code for cooperation” will very often have a base assumption that cooperation in satisfying others’ goals is more effective (which feels like “more pleasant” or “more natural” from inside the algorithm) than contractual resource exchanges.
Mostly #1. Is there a reason to build AIs that inherently care about the well-being of paperclippers, etc.?
But EA should be mostly #2?