Agreed. I would add that the 90% also makes trying to look for alternative paths to a positive singularity truly worth it (whole brain emulation, containment protocols for unfriendly AI, intelligence enhancement, others?)
Agreed. I would add that the 90% also makes trying to look for alternative paths to a positive singularity truly worth it (whole brain emulation, containment protocols for unfriendly AI, intelligence enhancement, others?)
Worth investigating as a possibility. In some cases I suggest that may lead us to actively work to thwart searches that would create a negative singularity.
Your confidence in a simulation universe has shaded many of your responses in this thread. You’ve stated you’re unwilling to expend the time to elaborate on your certainty, so instead I’ll ask: does your certainty affect decisions in your actual life?
Your confidence in a simulation universe has shaded many of your responses in this thread. You’ve stated you’re unwilling to expend the time to elaborate on your certainty,
I’m honestly confused. Are you mistaking me for someone else? I know Will and at least one other guy have mentioned such predictions. I don’t have confidence in a simulation universe, and I most likely would be willing to expend time discussing it.
so instead I’ll ask: does your certainty affect decisions in your actual life?
I’ll consider the question as a counterfactual and suppose that I would let it affect my decisions somewhat. I would obviously consider whether or not it was worth expending resources to hack the matrix, so to speak, possibly including hacking the simulators if that is the most plausible vulnerability. But I suspect I would end up making similar decisions to the ones I make now.
The fact that there is something on the outside of the sim doesn’t change what is inside it, so most of life goes on. As for the possibility of influencing the external reality, that is probably best exploited by creating an FAI to do it for me.
When it comes to toy problems, such as when dealing with superintelligences that say they can simulate me, I always act according to whatever action will most benefit the ‘me’ that I care about (usually the non-simmed me, if there is one). This gives some insight into my position.
Downvoted for agreement. (But 90% makes trying to do the impossible well and truly worth it.)
Sorry! My comment was intended for Will_Newsome. Thank you for answering it anyway though, instead of just calling me an idiot =D