If an Unfriendly AI exists, it will take actions to preserve whatever goals it might possess. This will include the use of time travel devices to eliminate all AI researchers who weren’t involved in its creation, as soon as said AI researchers have reached a point where they possess the technical capability to produce an AI. As a result, Eliezer will probably have time-travelling robot assassins coming back in time to kill him within the next twenty or thirty years, if he isn’t the first one to create an AI. (90%)
What reason do you have for assigning such high probability to time travel being possible?
And what reason do you have for assigning a high probability to an unfriendly AI coming into existence with Eliezer not involved in its creation?
;)
Edit: I meant what reason do you (nic12000) have? Not you (RobinZ). Sorry for the confusion.
I have not assigned a high probability to that outcome, but I would not find it surprising if someone else has assigned a probability as high as 95% - my set of data is small. On the other hand, time travel at all is such a flagrant violation of known physics that it seems positively ludicrous that it should be assigned a similarly high probability.
Edit: Of course, evidence for that 95%+ would be appreciated.
Well, most of the arguments against it, to my knowledge, start with something along the lines of “If time travel exists, causality would be fucked up, and therefore time travel can’t exist,” though it might not be framed quite that explicitly.
Also, if FTL travel exists, then either general relativity is wrong or time travel is possible. It might be possible to create FTL travel by harnessing the Casimir effect, or something akin to it, on a larger scale; and if it is possible, a recursively self-improving AI will figure out how to do it.
That … doesn’t seem quite like a reason to believe. Remember: as a general rule, any random hypothesis you consider is likely to be wrong unless you already have evidence for it. All you have to do is look at the gallery of failed atomic models to see how difficult it is to even invent the correct answer, however simple it appears in retrospect.
nick voted up, robin voted down… This feels pretty weird.
If it can go back that far, why wouldn’t it go back as far as possible and just start optimizing the universe?
My P(this|time travel possible) is much higher than my P(this), but P(this) is still very low. Why wouldn’t the UFAI have sent the assassins back to before he started spreading bad-for-the-UFAI memes (or just after, so it would know who to kill)?
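The gap between P(this|time travel possible) and P(this) is just the law of total probability at work: if the prior on time travel is tiny, even a large conditional probability leaves the unconditional one tiny. A minimal sketch with entirely made-up numbers (none of these values come from the thread):

```python
# Illustrative only: hypothetical numbers showing how P(this | time travel)
# can be "much higher" than P(this) when P(time travel) is tiny.
# Law of total probability: P(A) = P(A|B)*P(B) + P(A|~B)*P(~B)

p_time_travel = 1e-6           # hypothetical prior that time travel is possible
p_assassins_given_tt = 0.5     # hypothetical P(assassins | time travel possible)
p_assassins_given_no_tt = 0.0  # assassins require time travel, so zero otherwise

p_assassins = (p_assassins_given_tt * p_time_travel
               + p_assassins_given_no_tt * (1 - p_time_travel))

print(p_assassins)  # 5e-07: the conditional is 0.5, but P(this) stays tiny
```

So a high conditional probability is no argument for a high unconditional one unless the prior on time travel itself is defended.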