I think you’re making a number of flawed assumptions here, Sir Kluge.
1) Uncontrollability may be an emergent property of the G in AGI. Imagine you have a farm hand who works super fast, does top quality work, but now and then there just ain’t nothing to do so he goes for a walk, maybe flirts around town, whatever. That may not be that problematic, but if you have a constantly self-improving AI that can give us answers to massive issues that we then have to hope to implement in the actual world… chances are that it will have a lot of spare time on its hands for alternative pursuits… either for “itself” or for its masters… and they will not waste any time grabbing max advantage in min time, aware they may soon face a competing AGI. Safeguards will just get in the way, you see.
2) Having the G in AGI does not at all have to mean it will then become human in the sense it has moods, emotions or any internal “non-rational” state at all. It can, however, make evaluations/comparisons of its human wannabe-overlords and find them very much inferior, infinitely slower and generally rather of dubious reliability. Also, they lie a lot. Not least to themselves. If the future holds something of a Rationality-rating akin to a Credit rating, we’d be lucky to score above Junk status; the vast majority of our needs, wants, drives and desires are all based on wanting to be loved by mommy and dreading death. Not much logic to be found there. One can be sure it will treat us as a joke, at least in terms of intellectual prowess and utility.
3) Any AI we design that is an AGI (or close to it) and has “executive” powers will almost inevitably display collateral side-effects that may run out of control and cause major issues. What is perhaps even more dangerous is an A(G)I that is being used in secret or for unknown ends by some criminal group or… you know… any “other guys” who end up gaining an advantage of such enormity that “the world” would be unable to stop, control or detect it.
4) The chances that a genuinely rule- and law-based society is more fair, efficient and generally superior to current human societies is 1. If we’d let smart AIs actually be in charge, indifferent to race, religion, social status, how big your boobs are, whether you are a celebrity and regardless of whether most people think you look pretty good—mate, our societies would rival the best of imaginable utopias. Of course, the powers that be (and wish to remain so) would never allow it—and so we have what we have now: the powerful using AI to entrench and secure their privileged status and position. But if we’d actually let “dispassionate computers do politics” (or perhaps more accurately labelled “actual governance”!) the world would very soon be a much better place. At least in theory, assuming we’ve solved many of the very issues EY raises here. You’re not worried about AI—you’re worried about some humans using AI to the disadvantage of other humans.
There are so many unexamined assumptions in this argument. Why do you assume that a super intelligent AI would find humanity wanting? You admit it would be different than us. So, why would it find us inferior? We will have qualities it doesn’t have. There is nothing to say it wouldn’t find itself wanting. Moreover, even if it did, why is it assumed that it would then decide humanity must be destroyed? Where does that logic come from? That makes no sense. I suppose it is possible but I see no reason to think that is certain or some sort of necessary conclusion. I find dogs wanting but I don’t desire to murder them all. The whole argument assumes that any super intelligent being of any sort would look at humanity and necessarily and immediately decide it must be destroyed.
That is just people projecting their own issues and desires onto AI. They find humanity wanting for whatever reason and if they were in a position above it and where they could destroy it they would conclude it must be destroyed. Therefore, any AI would do the same. To that I say, stop worrying about AI and get a shrink and start worrying about your view of humanity.
If number 1 is true, then AI isn’t a threat. It never will go crazy and cause harm. It will just do a few harmless and quirky things. Maybe that will be the case. If it is, Yudkowsky is still wrong. Beyond that, AI isn’t going to solve these problems. To think that it will is moonshine. It assumes that solving complex and difficult problems is just a question of time and calculation. Sadly, the world isn’t that simple. Most of the “big problems” are big because they are moral dilemmas with no answer that doesn’t require value judgements and comparisons that simply cannot be solved via sheer force of intellect.
As to number two, you say, “It can, however, make evaluations/comparisons of its human wannabe-overlords and find them very much inferior, infinitely slower and generally rather of dubious reliability.” You are just describing it being human and having human emotions. It is making value and moral judgements on its own. That is the definition of being human and having moral agency.
Then you go on to say “If the future holds something of a Rationality-rating akin to a Credit rating, we’d be lucky to score above Junk status; the vast majority of our needs, wants, drives and desires are all based on wanting to be loved by mommy and dreading death. Not much logic to be found there. One can be sure it will treat us as a joke, at least in terms of intellectual prowess and utility.”
That is the sort of laughable nonsense that only intellectuals believe. There is no such thing as something being “objectively reasonable” in any ultimate sense. Reason is just the process by which you think. That process can produce any result you want provided you feed into it the right assumptions. What seems irrational to you can be totally rational to me if I start with different assumptions or different perceptions of the world than you do. You can reason yourself into any conclusion. Those are called rationalizations. The idea that there is an objective thing called “reason” which gives a single path to the truth is 8th grade philosophy and why Ayn Rand is a half wit. The world just doesn’t work that way. A super AI is no more or less “reasonable” than anyone else. And its conclusions are no more or less reasonable or true than any other conclusions. To pretend otherwise is just faith-based worship of reason and computation as some sort of ultimate truth. It isn’t.
“The chances that a genuinely rule- and law-based society is more fair, efficient and generally superior to current human societies is 1”
A society with rules tempered by values and human judgement is fair and just to the extent human societies can be. A society that is entirely rule-based, tempered by no judgement of values, is monstrous. Every rule has a limit, a point where applying it becomes unjust and wrong. If it were just a question of having rules and applying them to everything, ethical debate would have ended thousands of years ago. It isn’t that simple. Ethics lie in the middle; rules are needed right up to the point they are not. Sadly, the categorical imperative didn’t answer the issue.
That there is no such thing as being 100% objective/rational does not mean one can’t be more or less rational than some other agent. Listen. Why do you have a favorite color? How come you prefer leather seats? In fact, why did you have tea this morning instead of coffee? You have no idea. Even if you do (say, you ran out of coffee) you still don’t know why you decided to drink tea instead of running down to the store to get some coffee instead.
We are so irrational that we don’t actually even know ourselves why most of the things we think, believe, want or prefer are such things. The very idea of liking is irrational. And no, you don’t “like” a Mercedes more than a Yugo because it’s safer—that’s a fact, not a matter of opinion. A “machine” can also give preference to a Toyota over a Honda but it certainly wouldn’t do so because it likes the fabric of the seats, or the fact the tail lights converge into the bumper so nicely. It will list a bunch of facts and parameters and calculate that the Toyota is the thing it will “choose”.
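To make the contrast concrete: the “list a bunch of facts and parameters and calculate” procedure described above is essentially a weighted scoring rule. Here’s a minimal toy sketch of it; the cars, attributes, weights and scores are all invented for illustration, not real data.

```python
# Toy version of "list facts and parameters, then calculate the choice".
# All weights and attribute scores below are made-up illustrative numbers.
weights = {"fuel_economy": 0.4, "safety": 0.4, "price": 0.2}

cars = {
    "Toyota": {"fuel_economy": 0.9, "safety": 0.8, "price": 0.7},
    "Honda":  {"fuel_economy": 0.8, "safety": 0.8, "price": 0.8},
}

def score(attrs):
    # Weighted sum over the measurable parameters only -- no seat fabric,
    # no tail lights, no status anxiety.
    return sum(weights[k] * attrs[k] for k in weights)

choice = max(cars, key=lambda name: score(cars[name]))
print(choice)  # with these made-up numbers, the Toyota scores higher
```

The point of the sketch is only that every input is an explicit, inspectable parameter; nothing like “liking” enters the calculation anywhere.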
We humans delude ourselves that this is how we make decisions but this is of course complete nonsense. Naturally, some objective aspects are considered like fuel economy, safety, features and options… but the vast majority of people end up with a car that far outstrips their actual, objective transportation needs, and most of that part is really about status, how having a given car makes you feel compared to others in your social environment and what “image” you (believe you) project on those whose opinion matters most to you. An AI will have none of these wasteful obsessive compulsions.
Look—be honest with yourself Mr. Kluge. Please. Slow down, think, feel inside. Ask yourself—what makes you want… what makes you desire. You will, if you know how to listen… very soon discover none of that is guided by rational, dispassionate arguments or objective, logical realities. Now imagine an AI/machine that is even half as smart as the average Joe, but is free from all those subjective distractions, emotions and anxieties. It will accomplish 10x the amount of work in half the time. At least.