It may seem like a bad thing from our perspective now, but it's a bit of a strawman to insist it will certainly be something we don't want when the time comes that it is actually possible.
This is absolutely what I am afraid of. Values themselves will be subject to selection, and I don't want my values to be ground entirely to dust. Who's to say that I will want to exist under a different value system, even as part of some larger consciousness? What if consciousness is a waste of resources?
Every day we wake up as a slightly different version of the consciousness that went to sleep. In this way the entirety of our conscious existence is undergoing small changes. Each day we wouldn't even think to ask ourselves whether we are the same person as yesterday, but if we could isolate the me of today and talk to the me of ten years ago, we would notice the difference clearly.
It is a fact of life that we take on changes day by day. If that's where we end up, I don't think the you of today has anything to complain about, because the you of every day in between gradually made the choices that led there.
The you of today would have to contend with the you of every single day between now and the state you dislike (lack of consciousness or whatever) before being able to hold a complaint about it.
So? I don't think you're really getting my point here. Just because consciousness is fluid or imperfect doesn't mean it is worthless.
Yeah, I don't think I was getting your point.
Also, I am not sure that you were getting my point. If in the future the choice to do away with consciousness is made, it will be made by future entities with much more information and clearer reasons for doing so. Without that future information and reasoning at our disposal, we can't really criticize the decision. I can confidently say that my consciousness, based on what I know, does not want to be gotten rid of right now. If overpoweringly convincing reasons come along to change my mind, then I will make that decision at that time with the best information available then.
My point was that the decision-making process is up to the future self and is dependent on future information. The future self will not be making worse decisions. It will not make decisions that do not benefit itself (based on a version of the values you hold right now, just slightly different).
Does that make sense? Or should I try to explain it again?
You're definitely missing the point of the whole thing. Suppose that the optimal design for gaining knowledge is something like this: a vast supercomputer without the slightest bit of awareness or emotion.
I think that is very unlikely. Even in the worst-case scenarios, I can't imagine that a superintelligence wouldn't inherit some sort of values.
I don't see the problem with that being the eventual outcome. The death of the world as we know it, yes, but also the existence of a new entity. That's the way the cookie crumbles.
Are you expecting these things to happen within your lifetime?
Probably not within my own natural lifetime, no.