Not OK in what sense—as in morally wrong to kill sapient beings or as terrifying as getting killed?
The first one—they’re just a close relative :)
I don’t quite get this part—can you elaborate?
TDT says to model the world as a causal diagram whose input is your decision algorithm and whose outputs include (among other things) whether you're a copy (at least when your decision changes how many copies of you there are). So you should literally evaluate the choices as if your action controlled whether or not you are a copy.
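To make that concrete, here's a toy sketch (my own illustration, not from any TDT write-up; the payoffs and the "press" scenario are made up): the decision algorithm's output is plugged into every node that depends on it, including the node that determines how many copies of you exist, and the agent picks the output with the best total payoff.

```python
def copies_if(action):
    # Hypothetical setup: choosing "press" causes a second copy of you
    # to be instantiated; "refrain" leaves just the one.
    return 2 if action == "press" else 1

def utility(action):
    # Hypothetical payoffs: each copy gains 3 from pressing vs 1 from
    # refraining, and each extra copy costs 1 (say, risk of deletion).
    per_copy = 3 if action == "press" else 1
    copy_penalty = copies_if(action) - 1
    return copies_if(action) * per_copy - copy_penalty

def tdt_choice(actions):
    # Evaluate each possible *output of the algorithm*, accounting for
    # the fact that the output also controls how many copies run it.
    return max(actions, key=utility)

print(tdt_choice(["press", "refrain"]))  # "press": 2*3 - 1 = 5 beats 1
```

The point of the sketch is only that `utility` scores the action as if it determined the copy count, which is what "evaluate the choices as if your action controlled whether you are a copy" cashes out to.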
As to erasing memories—yeah I’m not sure either, but I’m leaning towards it being somewhere between “almost a causal descendant” and “about as bad as being killed and a copy from earlier being saved.”
OK, I’ll have to read deeper into TDT to understand why that happens, currently that seems counterintuitive as heck.