There will be no plausible, complete, and ready-to-implement theory of friendly artificial intelligence good enough for building a safe singleton AI by 2100, regardless of the state of artificial intelligence research in general. (90%)
Voted up for underconfidence. 90% seems low :)
90 years has room for a lot of compound weird.
Downvoted for agreement. (But 90% makes trying to do the impossible well and truly worth it.)
Agreed. I would add that the 90% also makes trying to look for alternative paths to a positive singularity truly worth it (whole brain emulation, containment protocols for unfriendly AI, intelligence enhancement, others?).
Worth investigating as a possibility. In some cases, I suggest, that may lead us to actively work to thwart searches that would create a negative singularity.
Your confidence in a simulation universe has shaded many of your responses in this thread. You’ve stated you’re unwilling to expend the time to elaborate on your certainty, so instead I’ll ask: does your certainty affect decisions in your actual life?
I’m honestly confused. Are you mistaking me for someone else? I know Will and at least one other guy have mentioned such predictions. I don’t have confidence in a simulation universe, and I would most likely be willing to expend time discussing it.
I’ll consider the question as a counterfactual and suppose that I would let it affect my decisions somewhat. I would obviously consider whether or not it was worth expending resources to hack the matrix, so to speak, possibly including hacking the simulators if that is the most plausible vulnerability. But I suspect I would end up making similar decisions to the ones I make now.
The fact that there is something on the outside of the sim doesn’t change what is inside it, so most of life goes on. As for the possibility of influencing the external reality, that is probably best exploited by creating an FAI to do it for me.
When it comes to toy problems, such as dealing with superintelligences that say they can simulate me, I always take whatever action will most benefit the ‘me’ that I care about (usually the non-simmed me, if there is one). This gives some insight into my position.
Sorry! My comment was intended for Will_Newsome. Thank you for answering it anyway though, instead of just calling me an idiot =D
Upvoted for disagreement; this universe-computation is probably fun-theoretic, and I think a tragic end would be cliché.
I accidentally asked this of wedrifid above, but it was intended for you:
Your confidence in a simulation universe has shaded many of your responses in this thread. You’ve stated you’re unwilling to expend the time to elaborate on your certainty, so instead I’ll ask: does your certainty affect decisions in your actual life?
(About the unwillingness to expend time to elaborate: I really am sorry about that.)
Decisions? …Kind of. In some cases, the answer is trivially yes, because I decide to spend a lot of time thinking about the implications of being in the computation of an agent whose utility function I’m not sure of. But that’s not what you mean, I know.
It doesn’t really change my decisions, but I think that’s because I’m the kind of person who’d be put in a simulation. Or, in other words, if I weren’t already doing incredibly interesting things, I wouldn’t have heard of Tegmark or the simulation argument, and I would have significantly less anthropic evidence to make me really pay attention to it. (The anthropic evidence is in no way a good source of argument or belief, but it forces me to pay attention to hypotheses that explain it.)

If by some weird counterfactual miracle I’d determined I was in a simulation before I was trying to do awesome things, then I’d switch to trying to do awesome things, since people who do awesome things probably have more measure, and more measure lets me better achieve my goals. But it’s not really possible to do that, because you only have lots of measure (observer-moments) in the first place if you’re doing simulation-worthy things. (That’s the part where anthropics comes in and mucks things up, and probably where most people would flat out disagree with me; nonetheless, it’s not that important for establishing >95% certainty in non-negligible simulation measure.) This is the point where outside-view or structural-uncertainty hypotheses like “I’m a crazy narcissist” and “everything I know is wrong” are most convincing. (Though I still haven’t presented the real arguments these are counterarguments against.)
So to answer your question: no, but for weird self-fulfilling reasons.
Interesting. Your reasoning in the counterfactual-miracle case is very reminiscent of UDT reasoning on Newcomb’s problem.
Thanks for sharing. If you ever take the time to lay out all your reasons for having >95% certainty in a simulation universe, I’d love to read it.
:(