There was an attempt to create such math in “Law without law”
For any BB, there is another BB somewhere which looks as if it is causally affected by the first BB. As a result, there are chains of BBs which look like causal chains of minds.
There is almost no difference between them and the real world.
I think there is a subtle difference between DA and Laplace:
Laplace predicts a “minimal probability”: there is at least a 4999/5000 chance that humanity will not go extinct next year (by analogy with the Sun rising tomorrow). DA predicts a definite risk: there is a 1 in 5000 chance that humanity will go extinct next year. So Laplace supports a reverse Doomsday argument: the end can’t be very nigh.
But Laplace doesn’t predict that humanity will almost certainly go extinct by the time it is 10 times older than it is now. DA, by contrast, predicts that the chance of surviving to such an age is only 10 per cent.
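To make the contrast concrete, here is a rough sketch of the two standard formulas (my own framing, with the usual assumptions of each argument). Laplace’s rule of succession says that after n years of observed survival,

$$P(\text{survive one more year} \mid n \text{ years survived}) = \frac{n+1}{n+2},$$

a figure that only climbs toward 1 as n grows. A Gott-style DA instead assumes we occupy a uniformly random point of humanity’s total lifetime T, so

$$P(T > k \cdot t_{\text{now}}) = \frac{1}{k},$$

which with k = 10 gives the 10 per cent chance of surviving to ten times the current age.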
I find this a rather infohazardous idea.
Sitting at a long table (or at the bar itself) is a signal that you are open to connecting with other people.
Does it require the assumption of qualia realism, i.e. that different qualia of pain really do exist?
Option 3: A benevolent AI cares about the values and immortality of all people who have ever lived.
Of course I am against currently living people being annihilated. If a superintelligent AI is created but doesn’t provide immortality and resurrection for ALL people who have ever lived, it is a misaligned AI in my opinion.
I asked Sonnet to ELI5 your comment, and it said:
Option 1: A small group of people controls a very powerful AI that does what they want. This AI might give those people immortality (living forever), but it might also destroy or control everyone else.

Option 2: No super-powerful AI gets built at all, so people just live and die naturally like we do now.
Both outcomes are bad in my opinion.
My point was that if I assume that aging and death are bad, then I personally strive to live indefinitely long, and I wish the same for other people. In that case, longtermism becomes a personal issue unrelated to future generations: I can only live billions of years if civilization exists for billions of years.
In other words, if there is no aging and death, there are no “future generations” in the sense of generations that exist after my death.
Moreover, if AI risk is real, then AI is powerful enough to solve the problem of aging and death. Anyone who survives until AI arrives will be either instantly dead or practically immortal. In that case, “future generations after my death” is not applicable.
None of this will happen if AI gets stuck halfway to superintelligence. There will be no immortality, but a lot of drone warfare. In other words, to be a mundane risk, AI has to have a mundane capability limit. For now, we don’t know whether it will.
What do you mean by “symmetry in qualia”?
How does it work for zombies of the second kind: the ones with an inverted spectrum? Imagine there is a parallel universe exactly the same as ours, where everyone is conscious, but the quale of green is replaced with the quale of red for everyone.
It looks like a myopic “too aligned” failure mode of AI – the AI tries to please a person’s current desires instead of taking into account her long-term interests.
This reminds me of the nested time machines discussed by gwern: https://gwern.net/review/timecrimes
Precommitments play the role of time loops, and they can propagate almost infinitely in time and space. For example, anyone who is going to become a mayor can pre-pre-pre-commit to never open any video from a mafia boss, etc.
Yes, they can generate a list of comments on a post, using the correct names of prominent LessWrongers and the typical style and topics of each commenter.
Thanks, that was actually what EY said in his quote, which I put just below my model—that we should change the bit each time. I somehow missed it (“send back a ‘0’ if a ‘1’ is recorded as having been received, or vice versa—unless some goal state is achieved”).
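A minimal toy sketch of why that bit-flipping rule works (my own Python illustration, assuming the usual “only self-consistent histories can occur” reading of time loops): flipping the received bit unless the goal state is achieved makes every non-goal history self-contradictory, so only goal-achieving timelines remain.

```python
# Toy model (my illustration, not from the post): a time-loop history is only
# allowed if the bit sent back equals the bit that was received.
def is_consistent(received_bit: int, goal_achieved: bool) -> bool:
    # The rule from the quote: send back the opposite of the received bit,
    # unless some goal state is achieved (then send back the same bit).
    sent_bit = received_bit if goal_achieved else 1 - received_bit
    return sent_bit == received_bit

for goal_achieved in (False, True):
    for bit in (0, 1):
        print(f"goal={goal_achieved}, bit={bit}, "
              f"consistent={is_consistent(bit, goal_achieved)}")
# Only histories with goal_achieved=True are self-consistent, so the loop
# effectively selects timelines in which the goal state occurs.
```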
As I stated in the epistemic status, this article is just a preliminary write-up. I hope more knowledgeable people will write much better models of x-risks from time machines and will be able to point out where avturchin was wrong and explain what the real situation is.
I am going to post about biouploading soon – where the uploading happens into (or via) a distributed net of my own biological neurons. This combines the good things about uploading – immortality, the ability to be copied, ease of repair – with the good things about being a biological human – preserving infinite complexity, exact sameness of the person, and a guarantee that the bioupload will have human qualia and any other important hidden properties we might otherwise miss.
Thanks! Fantastic read. It occurred to me that sending code or AI back in time, rather than a person, is more likely since sending data to the past could be done serially and probably requires less energy than sending a physical body.
Some loops could be organized by sending a short list of instructions to the past to an appropriate actor – whether human or AI.
Additionally, some loops might not require sending any data at all: Roko’s Basilisk is an example of such acausal data transmission to the past. Could there be an outer loop for Roko’s Basilisk? For example, a precommitment not to be acausally blackmailed.
Also (though I’m not certain about this), loops like the ones you described require that the non-cancellation principle be false – meaning that events which have happened can be turned into non-existence. To prevent this, we would need to travel to the past and compensate for any undesirable changes, thus creating loops. This assumption motivated the character in Timecrimes to try to recreate all events exactly as they happened.
However, if the non-cancellation principle is false, we face a much more serious risk than nested loops (which are annoying, but most people would live normal lives, especially those who aren’t looped and would continue through loops unaffected). The risk is that a one-time time machine could send a small probe into the remote past and prevent humanity from appearing at all.
We can also hypothesize that an explosion of nested loops and time machines might be initiated by aliens somewhere in the multiverse – perhaps in the remote future or another galaxy. Moreover, what we observe as UAPs might be absurd artifacts of this time machine explosion.
The main claim of the article does not depend on the exact mechanism of time travel, which I have chosen not to discuss in detail. The claim is that we should devote some thought to possible existential risks related to time travel.
The argument about presentism is that the past does not ontologically exist, so “travel” into it is impossible. Even if one travels to what appears to be the past, it would not have any causal effects along the timeline.
I was referring to something like eternal return—where all of existence happens again and again, but without new memories being formed. The only effect of such a loop is anthropic—it has a higher measure than a non-looped timeline. This implies that we are more likely to exist in such a loop and in a universe where this is possible.
I think that AI will also preserve humans for utilitarian reasons, such as for trade with possible aliens, simulation owners, or even its own future versions – to demonstrate trustworthiness.