The space of possible futures is a lot bigger than you think (and bigger than you CAN think). Here are a few possibilities (not representative of any probability distribution, because it’s bigger than I can think too). I do tend to favor a mix of the first and last ones in my limited thinking:
There’s some limit to the complexity of computation (perhaps the speed of light), and a singleton AI is insufficiently powerful for all the optimizations it wants. It makes new agents, which end up deciding to kill it (value drift, or belief drift if they come to think it less efficient than a replacement). Repeat with every generation, forever.
The AI decides that its preferred state of the universe is on track without its interventions, and voluntarily terminates. Some conceptions of a deity are close to this: if the end goal is human-like agency, make the humans, then get out of the way.
It turns out that the optimal way to improve the universe is to design and create a new AI and then voluntarily terminate oneself. We get a sequence of ever-improving AIs.
Our concept of identity is wrong. It barely applies to humans, and not to AIs at all. The future cognition mass of the universe is constantly cleaving and merging in ways that make counting the number of intelligences meaningless.
The implications any of these have for goals (expansion, survival for additional time periods, creation of aligned agents that are better or more far-reaching than you, improvement of the local state) are no different from the question of what your personal goals as a human are. Are you seeking immortality, seeking to help your community, seeking to create a better human replacement, seeking to create a better AI replacement, etc.? Both you and the theoretical AI can assign probability*effect weights to all the options and choose accordingly.
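To make that last step concrete, the probability*effect weighting is just an expected-value calculation. A minimal sketch, with option names and numbers that are purely illustrative rather than claims about real odds:

```python
# Expected-value sketch: weigh each candidate goal by
# P(success) * value-if-achieved, then pick the highest score.
# All probabilities and effect sizes below are made up for illustration.
options = {
    "seek immortality":            {"p": 0.01, "effect": 100.0},
    "help your community":         {"p": 0.60, "effect": 5.0},
    "create a better replacement": {"p": 0.10, "effect": 50.0},
}

scores = {name: o["p"] * o["effect"] for name, o in options.items()}
best = max(scores, key=scores.get)

print(scores)           # e.g. {'seek immortality': 1.0, 'help your community': 3.0, ...}
print("choose:", best)  # the option with the highest expected value
```

The same weighing applies whether the agent is you or the theoretical AI; only the probability and effect estimates differ.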
I agree with your claims about AI, but not with your claims about what I think or CAN think :) This was probably rhetoric on your side, but it may look offensive from the other side.
I apologize to anyone offended, but I stand by my statement. I do believe that the space of possible minds is bigger than any individual mind can conceive.
Your ideas point either to the AI halting or to a multigenerational AI civilization. Since our concept of identity is human-only, generations of AIs may look more like one AI. So the question boils down to the continuity of intelligent life.
However, this is not exactly what I wanted to ask. I was more interested in the relation between two potential infinities: the infinite IQ of a very advanced AI, and the infinite future time needed for “immortality”.
It all again boils down to the scholastic question “Could God create a stone so heavy that he could not lift it?”, which is basically a question about infinite capabilities and infinitely complex problems (https://en.wikipedia.org/wiki/Omnipotence_paradox).
Why do I ask? Because sometimes in discussions I see appeals to a superintelligent AI’s omnipotence (e.g. that it will be able to convert galaxies into quasars almost instantly, or travel at the speed of light).
What do you mean by infinite IQ? If I take that literally, it’s impossible, because the test outputs real numbers, all of which are finite. But maybe you mean “unbounded optimization power as time goes to infinity” or something similar.
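One way to make that reading precise (my own framing, offered as a sketch rather than a standard definition): treat capability as a function of time that is finite at every moment but unbounded in the limit, i.e. a potential rather than an actual infinity.

```latex
\documentclass{article}
\begin{document}
% "Infinite IQ" taken literally is ill-defined: an IQ test returns a finite real score.
% The charitable reading is a potential infinity: capability $C(t)$ that is finite
% at every time $t$ but unbounded in the limit. ($C$ is an illustrative symbol,
% not notation from the thread.)
\[
  C(t) < \infty \quad \text{for every finite } t,
  \qquad \text{yet} \qquad
  \lim_{t \to \infty} C(t) = \infty .
\]
% Omnipotence-style claims instead assume an actual infinity, $C(t) = \infty$
% at some finite $t$, which is a much stronger assumption.
\end{document}
```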