Saying that no FAI exists in design space that could satisfy us is equivalent to saying that nothing can satisfy us. In other words, if you are correct, then the AI isn’t the problem and humanity would be “straitjacketed” anyway.
I don’t see how not being fully satisfied is a straitjacket. I’m saying that our (mankind’s) maximum satisfaction may be reached when straitjacketed, because mankind isn’t sane (and if there isn’t any truly sane morality system, the insanity can’t be cured. edit: to clarify, if there is a truly sane morality system, then mankind can be cured of insanity).
I was using the term “satisfied” to include all human preferences, including the desire to not be “straitjacketed”.
If human preferences are inconsistent, humans still can’t do any better than an AI, for there is an AI in design space that does nothing in our world but would make similar worlds look exactly like ours.
You assume that the utility of two different worlds cannot be exactly equal. edit: or maybe you don’t. In any case, this AI which does absolutely nothing in our world is no more useful than an AI that does nothing in all possible worlds, or just a brick.
Also, the desire for mankind (and life) not to be straitjacketed is my view; I’m not sure it is coherently shared by mankind, and in fact I’m not even sure I like the way it is going if it is not straitjacketed in some way. edit: to clarify, I like the heuristic of maximizing the future choices available to me. It is part of my values that I don’t want removed. I don’t like [the consequences of] this heuristic for mankind. Mankind is a meta-organism that is dumb and potentially self-destructive.
edit: To clarify, what I am saying is that there’s a conflict between two values whose product matters: survival vs. freedom. Survival without freedom is bad; freedom without survival is nonsense.
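To spell out the “product” intuition with a toy formula (my own illustration, not something the comment commits to): if overall value is modelled as the product of survival and freedom, zeroing either factor zeroes the whole thing, which an additive trade-off would not capture.

```latex
% Toy model, purely illustrative: survival S and freedom F, each in [0, 1].
% With a product, neither value can compensate for the total absence of the other;
% with a sum, it can.
\[
  U(S, F) = S \cdot F \quad\Rightarrow\quad U(S, 0) = U(0, F) = 0 ,
\]
\[
  U'(S, F) = S + F \quad\Rightarrow\quad U'(0, 1) = 1 .
\]
```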
this AI which does absolutely nothing in our world is no more useful than an AI that does nothing in all possible worlds, or just a brick.
Sorry, I wasn’t being clear. The point was that saying no AI can do better than humanity implies that our world is optimal out of all similar worlds. (I believe there are much stronger arguments than this against what you are saying, but this one should suffice.)
That only follows if your AI is totally omniscient.
edit: Anyhow, I can of course think of an AI that can do better than humanity: the AI sits inside Jupiter and nudges away any incoming comets and asteroids, and that’s it (then, as the sun burns up and then burns out, it moves Earth around). The problem starts when you make the AI discriminate between very similar worlds. edit: and even that asteroid-stopping AI may be a straitjacket to intelligent life, as it may be that mankind is a wrong thing entirely and should be permitted to kill itself, and then the meteorite impacts should be allowed so that ants get a chance.
as it may be that mankind is a wrong thing entirely and should be permitted to kill itself, and then the meteorite impacts should be allowed so that ants get a chance.
I don’t know much about my own extrapolated preferences, but I can reason that, as my preferences are the product of noise in the evolutionary process, reality is unlikely to align with them naturally. It’s possible that my preferences consider “mankind a wrong thing entirely”; but that they would align with whatever the universe happens to produce next on Earth (assuming the rise of another dominant species is even plausible) is incredibly unlikely. Anything that happens without a causal line of descent from human values is unlikely to align with human values.
Anything that happens without a causal line of descent from human values is unlikely to align with human values.
Unlikely to align how exactly? There are also common causes, you know; A and B can be correlated when A causes B, when B causes A, or when C causes both A and B.
It seems to me that you can require an arbitrary degree of alignment to arrive at an arbitrary unlikelihood, but some alignment via a common cause is nonetheless probable.
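As a toy illustration of the common-cause case (my own sketch, not anything from the thread, with made-up numbers): two variables that never influence each other still come out correlated when both are driven by a shared cause.

```python
# Toy sketch, illustrative only: A and B never influence each other,
# but both are driven by a common cause C, so they end up correlated.
import random

random.seed(0)
c = [random.gauss(0, 1) for _ in range(10_000)]   # common cause C
a = [ci + random.gauss(0, 1) for ci in c]         # A = C + independent noise
b = [ci + random.gauss(0, 1) for ci in c]         # B = C + independent noise

def corr(xs, ys):
    """Pearson correlation, computed by hand to keep the sketch dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

print(corr(a, b))   # around 0.5, despite no direct causal link between A and B
```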
Well yes, but I would assume you would want more alignment, not less.
There’s such a thing as over-fitting… if you have some noisy data, the theory that fits the data perfectly is just a table of the data (e.g. heights and falling times); a useful theory doesn’t fit the data exactly in practice. If we make the AI fit perfectly to what mankind does, we could just as well make a brick and proclaim it an omnipotent, omniscient, mankind-friendly AI that will never stop mankind from doing something mankind wants (including taking the extinction risks).
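A minimal sketch of the over-fitting point, using the heights-and-falling-times example (the heights and the noise level are made up by me): a lookup table of past measurements fits the training data exactly, while the crude law t = sqrt(2h/g) fits it only approximately but does better on fresh measurements, because the table has memorized the noise.

```python
# Toy sketch, illustrative only: noisy height -> fall-time data.
# The "theory that is just a table" fits the training measurements exactly;
# the simple law t = sqrt(2h/g) fits them only approximately but generalizes.
import math
import random

random.seed(0)
G = 9.8
HEIGHTS = list(range(1, 51))        # drop heights in metres

def measure(h):
    """One noisy stopwatch measurement of the fall time from height h."""
    return math.sqrt(2 * h / G) + random.gauss(0, 0.1)

def law(h):
    """The simple physical model, ignoring noise and air resistance."""
    return math.sqrt(2 * h / G)

train = {h: measure(h) for h in HEIGHTS}   # the data table
test = {h: measure(h) for h in HEIGHTS}    # fresh measurements of the same drops

def rmse(predict, data):
    return math.sqrt(sum((predict(h) - t) ** 2 for h, t in data.items()) / len(data))

print("table on training data:", rmse(lambda h: train[h], train))  # exactly 0
print("table on fresh data:   ", rmse(lambda h: train[h], test))   # roughly 0.14
print("law on fresh data:     ", rmse(law, test))                  # roughly 0.10
```

On this reading, the brick is the lookup-table end of the spectrum: a perfect fit to what mankind already does, adding no predictive or protective value of its own.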