If number 1 is true, then AI isn’t a threat. It will never go crazy and cause harm; it will just do a few harmless and quirky things. Maybe that will be the case. If it is, Kudlowsky is still wrong. Beyond that, AI isn’t going to solve these problems. To think that it will is moonshine. It assumes that solving complex and difficult problems is just a question of time and calculation. Sadly, the world isn’t that simple. Most of the “big problems” are big because they are moral dilemmas with no answer that doesn’t require value judgements and comparisons, and those simply cannot be resolved by sheer force of intellect.
As for number two, you say, “It can, however, make evaluations/comparisons of its human wannabe-overlords and find them very much inferior, infinitely slower and generally rather of dubious reliability.” You are just describing it being human and having human emotions. It is making value and moral judgements on its own. That is the definition of being human and having moral agency.
Then you go on to say “If the future holds something of a Rationality-rating akin to a Credit rating, we’d be lucky to score above Junk status; the vast majority of our needs, wants, drives and desires are all based on wanting to be loved by mommy and dreading death. Not much logic to be found there. One can be sure it will treat us as a joke, at least in terms of intellectual prowess and utility.”
That is the sort of laughable nonsense that only intellectuals believe. There is no such thing as something being “objectively reasonable” in any ultimate sense. Reason is just the process by which you think. That process can produce any result you want, provided you feed it the right assumptions. What seems irrational to you can be totally rational to me if I start from different assumptions or different perceptions of the world than you do. You can reason yourself into any conclusion. Those are called rationalizations. The idea that there is an objective thing called “reason” which gives a single path to the truth is 8th-grade philosophy, and it is why Ayn Rand is a half-wit. The world just doesn’t work that way. A super AI is no more or less “reasonable” than anyone else, and its conclusions are no more or less reasonable or true than any other conclusions. To pretend otherwise is just faith-based worship of reason and computation as some sort of ultimate truth. It isn’t.
“The chances that a genuinely rule- and law-based society is more fair, efficient and generally superior to current human societies is 1”
A society with rules tempered by values and human judgement is as fair and just as human societies can be. A society that is entirely rule-based, tempered by no judgement of values, is monstrous. Every rule has a limit, a point where applying it becomes unjust and wrong. If it were just a question of having rules and applying them to everything, ethical debate would have ended thousands of years ago. It isn’t that simple. Ethics lie in the middle; rules are needed right up to the point where they are not. Sadly, even the categorical imperative didn’t settle the issue.
There are so many unexamined assumptions in this argument. Why do you assume that a super-intelligent AI would find humanity wanting? You admit it would be different from us, so why would it find us inferior? We will have qualities it doesn’t have. There is nothing to say it wouldn’t find itself wanting. Moreover, even if it did, why is it assumed that it would then decide humanity must be destroyed? Where does that logic come from? That makes no sense. I suppose it is possible, but I see no reason to think it is certain or some sort of necessary conclusion. I find dogs wanting, but I don’t desire to murder them all. The whole argument assumes that any super-intelligent being of any sort would look at humanity and necessarily, immediately decide it must be destroyed.
That is just people projecting their own issues and desires onto AI. They find humanity wanting for whatever reason, and if they were in a position above it where they could destroy it, they would conclude it must be destroyed. Therefore, any AI would do the same. To that I say: stop worrying about AI, get a shrink, and start worrying about your own view of humanity.