I think you’re interpreting far too literally the names of the simulation scenarios I jotted down. Your ability to trade is compromised if there’s no one left to trade with, for instance. But none of that matters much, really, as those are meant to be illustrative only.
Aren’t you arguing that AI will be aligned by default?
No. I’m really arguing that we don’t know whether or not it’ll be aligned by default.
As there is no particular reason to expect that it’s the case,
I also don’t see any particular reason to expect that the opposite would be the case, which is why I maintain that we don’t know. But as I understand it, you seem to think there is indeed reason to expect the opposite, because:
Sadly for us, survival of humanity is a very specific thing. This is just the whole premise of the alignment problem once again.
I think the problem here is that you’re using the word “specific” with a different meaning than people normally use in this context. Survival of humanity sure is a “specific” thing in the sense that it’ll require specific planning on the part of the ASI. It is, however, not “specific” in the sense that it’s hard to do if the ASI wants it done; the difficulty is that we don’t know how to make it want that. Abstract considerations about simulations might just do the trick automatically.