If I do not want my preferences to be altered in the kind of way you mention, then a Friendly (to me) AI doesn’t alter them.
I just don’t see how that is possible without the AI becoming a primary attractor and therefore fundamentally altering the trajectory of your preferences. I’d favor the way Kurzweil portrays a technological Singularity here, where humans themselves become the Gods. I do not want to live in a universe where I’m just a puppet of the seed I once sowed. That is, I want to implement my own volition without the oversight of a caretaker God. As long as there is a being vastly superior to me that takes interest in my own matters, even the mere observer effect will alter my preferences since I’d have to take this being into account in everything conceivable.
The whole idea of friendly AI, even if it was created to suit only my personal volition, reminds me of the promises of the old religions: this horribly boring universe where nothing bad can happen to you and everything is already figured out by this one being. Sure, it wouldn’t figure it out if it knew I wanted to do that myself. But that would be pretty dumb, as it could if I wanted it to. And that’s just the case with my personal friendly AI. One based on the extrapolated volition of humanity would very likely not be friendly towards me and would ultimately dictate what I can and cannot do.
Really, the only favorable possibility here is to merge with the AI. But that would mean instant annihilation for me, as I would add nothing to a being that vast. So I still hope that AI going foom is wrong and that we see a slow development over many centuries instead, without any singularity-type event.
And I’m aware that big government and other environmental influences are altering and steering my preferences as well. But they are much fuzzier, whereas a friendly AI is very specific. The more specific the influence, the less free will I have. That is, the higher the ratio of the influence and control I exert over my environment to that which the environment exerts over me, the freer I am to implement what I want to do rather than what others want me to do.
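Put as a rough formula (just an illustrative restatement of the ratio described above; the symbols are made-up shorthand, nothing more):

```latex
% A loose sketch of the ratio described above; symbols are illustrative only.
%   F           : how free I am to implement my own volition
%   C_{me->env} : influence and control I exert over the environment
%   C_{env->me} : influence and control the environment (e.g. an FAI) exerts over me
\[
  F \;\propto\; \frac{C_{\mathrm{me}\to\mathrm{env}}}{C_{\mathrm{env}\to\mathrm{me}}}
\]
```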
I’d favor the way Kurzweil portrays a technological Singularity here, where humans themselves become the Gods.
The problem with having a pantheon of Gods… they tend to bicker. With metaphorical lightning bolts. ;)
I don’t think that outcome would be incompatible with an FAI (which may be necessary to do the research to get you your godlike powers). Apart from the initial enabling the FAI would provide, the new ‘Gods’ could choose by mutual agreement to create some form of power structure that prevented them from messing each other over and burning the cosmic commons in competition.
So I still hope that AI going foom is wrong and that we see a slow development over many centuries instead, without any singularity-type event.
You talked about the downside of mere observation. That would be utterly trivial and benign compared to the effects of Malthusian competition. Humans are not in a stable equilibrium now. We rely on intuitions created in a different time and under different circumstances to prevent us from rapidly rushing to a miserable equilibrium of subsistence living.
The longer we go before putting a check on evolutionary pressure towards the maximum securing of resources, the more we will lose of that which we value as ‘human’. Yes, everything we value except existence itself. Even consciousness in the form that we experience it.
The longer we go before putting a check on evolutionary pressure towards the maximum securing of resources, the more we will lose of that which we value as ‘human’. Yes, everything we value except existence itself. Even consciousness in the form that we experience it.
I don’t think I emphasised this enough. Unless the ultimate cooperation problem is solved, we will devolve into something that is less human than Clippy. Clippy at least has a goal that he seeks to maximise and which motivates his quest for power. Competition would weed out even that much personality.