No AI is friendly. That’s a naive idea. Is a FAI friendly towards superintelligent, highly conscious uFAI? No, it’s not. It will kill it. Same as it will kill all other entities who’ll try to do what they want with the universe. Friendliness is subjective and cannot be guaranteed.
Is a FAI friendly towards superintelligent, highly conscious uFAI? No, it’s not. It will kill it.
Are you sure? Random alternative possibilities:
Hack it and make it friendly
Assimilate it
Externally constrain its actions
Toss it into another universe where humanity doesn’t exist
Unless you’re one yourself, it’s rather difficult to predict what other options a superintelligence might come up with that you never even considered.
Friendliness is subjective and cannot be guaranteed.
Yes to subjective, no to guaranteed.
How do you want to guarantee friendliness? If there are post-Singularity aliens out there, then their CEV might be opposed to that of humanity, which would ultimately mean either our extinction or theirs. Obviously any CEV acted on by some friendly AI is a dictatorship that regards any disruptive elements, such as unfriendly AIs and aliens, as an existential risk. You might call this friendly; I don’t. It’s simply one way to shape the universe that is favored by the SIAI, a bunch of human beings who want to imprint the universe with an anthropocentric version of CEV. Therefore, as I said above, friendliness is subjective and cannot be guaranteed. I don’t even think that it can be guaranteed subjectively, as any personal CEV would ultimately be a feedback process favoring certain constants between you and the friendly AI trying to suit your preferences. If you like sex, the AI will provide you with better sex which in turn will make you like sex even more and so on. Seen from any position that the CEV does not take into account, any CEV is prone to look like a paperclip maximizer. That’s not friendliness, it’s just a convoluted way of shaping the universe according to your will.
That’s not friendliness, it’s just a convoluted way of shaping the universe according to your will.
Yes, it’s a “Friendly to me” AI that I want. (Where replacing ‘me’ with other individuals or groups with acceptable values would be better than nothing.) I don’t necessarily want it to be friendly in the general colloquial sense. I don’t particularly mind if you call it something less ‘nice’ sounding than Friendly.
I don’t even think that it can be guaranteed subjectively, as any personal CEV would ultimately be a feedback process favoring certain constants between you and the friendly AI trying to suit your preferences. If you like sex, the AI will provide you with better sex which in turn will make you like sex even more and so on.
Here we disagree on a matter of real substance. If I do not want my preferences to be altered in the kind of way you mention, then a Friendly (to me) AI doesn’t make those alterations. This is tautological. Creating a system and guaranteeing that it works as specified is then ‘just’ a matter of engineering and mathematics. (Where ‘just’ means ‘harder than anything humans have ever done’.)
If I do not want my preferences to be altered in the kind of way you mention, then a Friendly (to me) AI doesn’t make those alterations.
I just don’t see how that is possible without the AI becoming a primary attractor and therefore fundamentally altering the trajectory of your preferences. I’d favor the way Kurzweil portrays a technological Singularity here, where humans themselves become the Gods. I do not want to live in a universe where I’m just a puppet of the seed I once sowed. That is, I want to implement my own volition without the oversight of a caretaker God. As long as there is a being vastly superior to me that takes an interest in my affairs, even the mere observer effect will alter my preferences, since I’d have to take this being into account in everything conceivable.
The whole idea of friendly AI, even if it were created to suit only my personal volition, reminds me of the promises of the old religions: a horribly boring universe where nothing bad can happen to you and everything is already figured out by this one being. Sure, it wouldn’t figure things out if it knew I wanted to do so myself. But that would be pretty dumb, as it could if I wanted it to. And that’s just the case with my personal friendly AI. One based on the extrapolated volition of humanity would very likely not be friendly towards me and would ultimately dictate what I can and cannot do.
Really the only favorable possibility here is to merge with the AI. But that would mean instant annihilation for me, as I would add nothing to a being that vast. So I still hope that AI going foom is wrong and that we see a slow development over many centuries instead, without any Singularity-type event.
And I’m aware that big government and other environmental influences are altering and steering my preferences as well. But they are much more fuzzy, whereas a friendly AI is very specific. The more specific the influence, the less free will I have. That is, the higher the ratio of the influence and control I exert over my environment to that which the environment exerts over me, the freer I am to implement what I want to do rather than what others want me to do.
I’d favor the way Kurzweil portrays a technological Singularity here, where humans themselves become the Gods.
The problem with having a pantheon of Gods… they tend to bicker. With metaphorical lightning bolts. ;)
I don’t think that outcome would be incompatible with a FAI (which may be necessary to do the research to get you your godlike powers). Apart from the initial enabling that the FAI would provide, the new ‘Gods’ could choose by mutual agreement to create some form of power structure that prevents them from messing each other over and burning the cosmic commons in competition.
So I still hope that AI going foom is wrong and that we see a slow development over many centuries instead, without any Singularity-type event.
You talked about the downside to mere observation. That would be utterly trivial and benign compared to the effects of Malthusian competition. Humans are not in a stable equilibrium now. We rely on intuitions created in a different time and different circumstances to prevent us from rapidly rushing to a miserable equilibrium of subsistence living.
The longer we go before putting a check on the evolutionary pressure towards maximal resource acquisition, the more we will lose that which we value as ‘human’. Yes, everything we value except existence itself. Even consciousness in the form that we experience it.
The longer we go before putting a check on the evolutionary pressure towards maximal resource acquisition, the more we will lose that which we value as ‘human’. Yes, everything we value except existence itself. Even consciousness in the form that we experience it.
I don’t think I emphasised this enough. Unless the ultimate cooperation problem is solved, we will devolve into something that is less human than Clippy. Clippy at least has a goal that he seeks to maximise and which motivates his quest for power. Competition would weed out even that much personality.