This post is a collection of claims about acausal trade, some of which I find more compelling and some less. Overall, I think it’s a good contribution to the discussion.
Claims that I mostly agree with include:
Acausal trade in practice is usually not accomplished by literal simulation (the latter is mostly important as a convenient toy model) but by abstract reasoning.
It is likely to be useful to think of the “acausal economy” as a whole, rather than just about each individual trade separately.
Claims that I have some quibbles with include:
The claim that there is a strong relation between the prevalent acausal norms and human moral philosophy. I agree that there are likely to be some parallels: both processes are to some degree motivated by articulating mutually beneficial norms. However, human moral philosophy is likely to contain biases specific to humans and to human circumstances on Earth. Conversely, acausal norms are likely to be shaped by metacosmological circumstances that we don’t even know about yet. For example, maybe there is some reason why most civilizations in the multiverse really hate logarithmic spirals. In that case, there would be a norm against logarithmic spirals to which we are currently completely oblivious.
The claim that the concept of “boundaries” is likely to play a key role in acausal norms. I find this somewhat plausible but far from clear. AFAIK, Critch has so far produced little in the way of compelling mathematical models to support the “boundaries” idea.
It seems to be implicit in the post that an acausal-norm-following paperclip-maximizer would be “nice” to humans to some degree. (But Critch warns us that the paperclip-maximizer might easily fail to be acausal-norm-following.) While I grant that this is possible, I think it’s far from clear. The usual trade-y argument for being nice to others is so that others will be nice to you. However, (i) some agents are a priori less threatened by others and hence find the argument less compelling, and (ii) it is unclear who exactly the relevant “others” are. For example, it might be that humans are in some ways not “advanced” enough to be considered. Conversely, it’s possible that human treatment of animals has already condemned us to the status of defectors (who can be defected against in turn).
The technical notion that logical proofs and Löb/Payor are ultimately the right mathematical model of acausal trade. I am very much unconvinced, e.g. because proof search is intractable, and because we don’t know how to naturally generalize these arguments far beyond the toy setting of Fair Bots in the Prisoner’s Dilemma. On the other hand, I do expect there to exist some mathematical justification of superrationality, just along other lines.
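For reference, here is the toy result I have in mind, stated in standard provability-logic notation as I understand it (this is the usual robust-cooperation argument for Fair Bots, not anything specific to the post):

```latex
% Standard statements from the provability-logic / robust-cooperation literature.
\text{Löb's theorem: } \vdash \Box(\Box P \to P) \to \Box P
  \quad \text{(equivalently: if } \vdash \Box P \to P \text{, then } \vdash P \text{)}. \\
\text{Fair Bot: } \mathrm{FB}(X) = C \;\iff\; \vdash \ulcorner X(\mathrm{FB}) = C \urcorner . \\
\text{With } P := \ulcorner \mathrm{FB}(\mathrm{FB}) = C \urcorner \text{, we get } \vdash \Box P \to P
  \text{ (a proof of } P \text{ would be found by FB's search, making FB cooperate),} \\
\text{hence } \vdash P \text{ by Löb's theorem: two Fair Bots provably cooperate.} \\
\text{Payor's lemma (the other result alluded to): if } \vdash \Box(\Box x \to x) \to x \text{, then } \vdash x .
```

Both complaints are visible here: the argument hinges on unbounded proof search, and it leans on the specific self-referential structure of this setup.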