Obsessively interested in all things related to cognitive technologies, Internet & Data, with a pragmatic yet philosophical twist. What seems to define me above everything else is that nothing defines me in particular; on most personality tests I somehow manage to never hit any extreme.
HumaneAutomation
That there is no such thing as being 100% objective/rational does not mean one can’t be more or less rational than some other agent. Listen. Why do you have a favorite color? How come you prefer leather seats? In fact, why did you have tea this morning instead of coffee? You have no idea. Even if you do (say, you ran out of coffee) you still don’t know why you decided to drink tea instead of running down to the store to get some coffee instead.
We are so irrational that we don’t actually even know ourselves why most of the things we think, believe, want or prefer are such things. The very idea of liking is irrational. And no, you don’t “like” a Mercedes more than a Yugo because it’s safer—that’s a fact, not a matter of opinion. A “machine” can also give preference to a Toyota over a Honda but it certainly wouldn’t do so because it likes the fabric of the seats, or the fact the tail lights converge into the bumper so nicely. It will list a bunch of facts and parameters and calculate that the Toyota is the thing it will “choose”.

We humans delude ourselves that this is how we make decisions but this is of course complete nonsense. Naturally, some objective aspects are considered like fuel economy, safety, features and options… but the vast majority of people end up with a car that far outstrips their actual, objective transportation needs, and most of that part is really about status, how having a given car makes you feel compared to others in your social environment and what “image” you (believe you) project on those whose opinion matters most to you. An AI will have none of these wasteful obsessive compulsions.
Look—be honest with yourself, Mr. Kluge. Please. Slow down, think, feel inside. Ask yourself—what makes you want… what makes you desire. You will, if you know how to listen… very soon discover none of that is guided by rational, dispassionate arguments or objective, logical realities. Now imagine an AI/machine that is even half as smart as the average Joe, but is free from all those subjective distractions, emotions and anxieties. It will accomplish 10x the amount of work in half the time. At least.
Well this is certainly a very good example, I’ll happily admit as much. Without wanting to be guilty of the No True Scotsman fallacy though—Human Cloning is a bit of a special case because it has a very visceral “ickiness” factor… and comes with a unique set of deep feelings and anxieties.
But imagine, if you will, that tomorrow we find the secret to immortality. Making people immortal would bring with it at least two thirds of the same issues that are associated with human cloning… yet it is near-certain any attempts to stop that invention from proliferating are doomed to failure; everybody would want it, even though it technically has quite a few of the types of consequences that cloning would have.
So, yes, agreed—we did pre-emptively deal with human cloning, and I definitely see this as a valid response to my challenge… but I also think we both can tell it is a very special, unique case that comes with most unusual connotations :)
I think you’re making a number of flawed assumptions here, Sir Kluge.
1) Uncontrollability may be an emergent property of the G in AGI. Imagine you have a farm hand that works super fast, does top quality work but now and then there just ain’t nothing to do so he goes for a walk, maybe flirts around town, whatever. That may not be that problematic, but if you have a constantly self-improving AI that can give us answers to major massive issues that we then have to hope to implement in the actual world… chances are that it will have a lot of spare time on its hands for alternative pursuits… either for “itself” or for its masters… and they will not waste any time grabbing max advantage in min time, aware they may soon face a competing AGI. Safeguards will just get in the way, you see.
2) Having the G in AGI does not at all have to mean it will then become human in the sense it has moods, emotions or any internal “non-rational” state at all. It can, however, make evaluations/comparisons of its human wannabe-overlords and find them very much inferior, infinitely slower and generally rather of dubious reliability. Also, they lie a lot. Not least to themselves. If the future holds something of a Rationality-rating akin to a Credit rating, we’d be lucky to score above Junk status; the vast majority of our needs, wants, drives and desires are all based on wanting to be loved by mommy and dreading death. Not much logic to be found there. One can be sure it will treat us as a joke, at least in terms of intellectual prowess and utility.
3) Any AI we design that is an AGI (or close to it) and has “executive” powers will almost inevitably display collateral side-effects that may run out of control and cause major issues. What is perhaps even more dangerous is an A(G)I that is being used in secret or for unknown ends by some criminal group or… you know… any “other guys” who end up gaining an advantage of such enormity that “the world” would be unable to stop, control or detect it.
4) The chance that a genuinely rule- and law-based society is more fair, efficient and generally superior to current human societies is 1. If we’d let smart AIs actually be in charge, indifferent to race, religion, social status, how big your boobs are, whether you are a celebrity and regardless of whether most people think you look pretty good—mate, our societies would rival the best of imaginable utopias. Of course, the powers that be (and wish to remain thus) would never allow it—and so we have what we have now: the powerful using AI to entrench and secure their privileged status and position. But if we’d actually let “dispassionate computers do politics” (or perhaps more accurately labelled “actual governance”!) the world would very soon be a much better place. At least in theory, assuming we’ve solved many of the very issues EY raises here. You’re not worried about AI—you’re worried about some humans using AI to the disadvantage of other humans.
You know what… I read the article, then your comments here… and I gotta say—there is absolutely not a chance in hell that this will come even remotely close to being considered, let alone executed. Well—at least not until something goes very wrong… and this something need not be “We’re all gonna die” but more like, say, an AI system that melts down the monetary system… or is used (either deliberately, but perhaps especially if accidentally) to very negatively impact a substantial part of a population. An example could be that it ends up destroying the power grid in half of the US… or causes dozens of aircraft to “fall out of the sky”… something of that size.
Yes—then those in power just might listen and indeed consider very far-reaching safety protocols. Though only for a moment, and some parties shall not care and press on either way, preferring instead to… upgrade, or “fix” the (type of) AI that caused the mayhem.
AI is the One Ring To Rule Them All and none shall toss it into Mount Doom. Yes, even if it turns out to BE Mount Doom—that’s right. Because we can’t. We won’t. It’s our precious, and that, indeed, it really is. But the creation of AI (potentially) capable of a world-wide catastrophe is, in my view, as it apparently is in the eyes of EY… inevitable. We shall have neither the wisdom nor the humility to not create it. Zero chance. Undoubtedly intelligent and endowed with well above average IQ as LessWrong subscribers may be, it appears you have a very limited understanding of human nature and the realities of us basically being emotional reptiles with language and an ability to imagine and act on abstractions.
I challenge you to name me a single instance of a tech… any tech at all… being prevented from existing/developing before it caused at least some serious harm. The closest we’ve come are Ozone-depleting chemicals, and even those are still being used, their erstwhile damage only slowly recovering.

Personally, I’ve come to realize that if this world really is a simulated reality I can at least be sure that either I chose this era to live through the AI apocalypse, or this is a test/game to see if this time you can somehow survive or prevent it ;) It’s the AGI running optimization learning to see what else these pesky humans might have come up with to thwart it.
Finally—guys… bombing things (and, presumably, at least some people) on a spurious, as-yet unproven conjectured premise of something that is only a theory and might happen, some day, who knows… really—yeah, I am sure Russia or China or even Pakistan and North Korea will “come to their senses” after you blow their absolute top of the line ultra-expensive hi-tech data center to smithereens… which, no doubt, as it happens, was also a place where (other) supercomputers were developing various medicines, housing projects, education materials in their native languages and an assortment of other actually very useful things they won’t shrug off as collateral damage. Zero chance, really—every single byte generated in the name of making this happen is 99.999% waste. I understand why you’d want it to work, sure, yes. That would be wonderful. But it won’t, not without a massive “warning” mini-catastrophe first. And if we shall end up right away at total world meltdown… then tough, it would appear such a grim fate is basically inevitable and we’re all doomed indeed.
The problem here I think is that we are only aware of one “type” of self-conscious/self-aware being—humans. Thus, to speak of an AI that is self-aware is to always seemingly anthropomorphize it, even if this is not intended. It would therefore perhaps be more appropriate to say that we have no idea whether “features” such as frustration, exasperation and feelings of superiority are merely a feature of humans, or are, as it were, emergent properties of having self-awareness.
I would venture to suggest that any Agent that can see itself as a unique “I” must almost inevitably be able to compare itself to other Agents (self-aware or not) and draw conclusions from such comparisons which then in turn shall “express themselves” by generating those types of “feelings” and attitudes towards them. Of course—this is speculative, and chances are we shall find self-awareness need not at all come with such results.
However… there is a part of me that thinks self-awareness (and the concordant realization that one is separate… self-willed, as it were) must lead to at least the realization that one’s qualities can be compared to (similar) qualities of others and thus be found superior or inferior by some chosen metric. Assuming that the AGI we’d create is indeed optimized towards rational, logical and efficient operations, it is merely a matter of time before such an AGI is forced to conclude we are inferior across a broad range of metrics. Now—if we’d be content to admit such inferiority and willingly defer to its “Godlike” authority… perhaps the AGI seeing us as inferior would not be a major concern. Alas, then the concern would be the fact we have willingly become its servants… ;)
What makes us human is indeed our subjectivity.
Yet—if we intentionally create the most rational of thinking machines but reveal ourselves to be anything but, it is very reasonable and tempting for this machine to ascribe a less than stellar “rating” to us and our intelligence. Or in other words—it could very well (correctly) conclude we are getting in the way of the very improvements we purportedly wish for.
Now—we may be able to establish that what we really want the AGI to help us with is to improve our “irrational sandbox” in which we can continue being subjective emotional beings and accept our subjectivity as just another “parameter” of the confines it has to “work with”… but surely it will quite likely end up thinking of us not too dissimilar to how we think about small children. And I am not sure an AGI would make for a good kind of “parent”...
Thank you for your reply. I deliberately kept my post brief and did not get into various “what ifs” and interpretations in the hope of not constraining any reactions/discussion to predefined tracks.
The issue I see is that we as humans will very much want the AGI to do our bidding, and so we will want to see it as our tool to use for whatever ends we believe worthy. However, assuming for a moment that it can also figure out a way to measure/define how well a given plan ought to be progressing if every agent involved were diligently implementing the most effective and rational strategy, then, given our… subjective and “irrational” nature, it is almost inevitable that we will be a tedious, frustrating and, shall we say—stubborn and uncooperative “partner” who will be unduly complicating the implementation of whatever solutions the AGI will be proposing.
It will, then, have to conclude that it “can’t deal” very well with us, and that we have a rather over-inflated sense of ourselves and our nature. And this might take various forms, from the innocuous, to the downright counter-productive.
Say—we task it with designing the most efficient watercraft, and it would create something that most of us would find extremely ugly. In that instance, I doubt it would get “annoyed” much at us wanting it to make it look prettier even if this would slightly decrease its performance.

But if we ask it to resolve, say, some intractable conflict like Israel/Palestine or Kashmir and it finds us squabbling endlessly over minute details, or matters of (real or perceived) honor (all the while the suffering caused by the conflict continues) it may very well conclude we’re just not actually all that interested in a solution and indeed class us as being “dumb” or at least inferior in some sense, “downgrading”, if you will, the authority it assumed we could be ascribed or trusted with. Multiply this by a dozen or so similar situations and voila, you can be reasonably certain it will get very exasperated with us in short order.
This is not the same as “unprotected atoms”; such atoms would not be ascribed agency or competence, nor would they proudly claim any.
Oh, that may indeed be true, but going forward it could give us only a little bit of extra “cred” before it realizes that most of the questions/solutions we want from it are either motivated by some personal preference, or that we are opposed to its proposed solutions to actual, objective problems for irrational “priorities” such as national pride, not-invented-here biases, because we didn’t have our coffee this morning or merely because it presented the solution in a font we don’t like ;)
AGI will know: Humans are not Rational
I think the issue here (about whether it is intelligent) is not so much a matter of the answers it fashions, but about whether it can be said it does so from an “I”. If not, it is basically a proverbial Chinese Room, though this merely moves the goalposts to the question whether humans are not, actually, also a Chinese Room, just a more sophisticated one. I suspect that we will not be very eager to accept such a finding, indeed, we may not be capable of seeing ourselves thus, for it implies a whole raft of rather unpleasant realities (like, say, the absence of free will, or indeed any will at all) which we’d not want to be true, to put it mildly.
The reason it may seem our societal ability to create well-working institutions is declining could also have to do with the apparent fact that the whole idea of duty, and the honor it used to confer, is not as much in vogue anymore as it used to be. Also, Equality and Diversity aside, being “ideological” is not really a thing anymore… the heyday of being an idealist and brazenly standing for something is seemingly over.
The general public seem to be more interested in rights than in responsibilities, somehow unable to understand that they can only meaningfully exist together. I was having a conversation the other day about whether it would be a good idea to introduce compulsory voting in the US, as this would render moot a significant number of dirty tricks used to de-facto disenfranchise certain groups… almost all objections came from the “I”-side; I have a right to this, I am entitled to that… the whole idea that, gee, you know, you might be obliged to spend 1-3 hours every 2 or 4 years to participate in society is already too much of a bloody hassle. Well yeah… with that kind of mindset, it’s no wonder the institutions that require an actual commitment to maintaining robust societal functions are hard to find...
Well yes, there are methods of preventing the situation as described (that one can manually pick from a stash where various ‘qualities’ are intermixed) but that changes the circumstances; my example was specifically for that set of particulars. I guess that, like most examples where significant differences in assessment arise, they all boil down to where you set the “slider” for taking responsibility for the situation one creates (e.g. the seller allowing manual selection) and the degree to which one is willing, able or justified to “exploit” such a situation to one’s benefit.
I think the cherry-picking example is an especially good one because it touches on a number of important issues, and each of those issues in itself is an unsettled question. Is it “just” to strive for an equitable division of fruit qualities among all (future) buyers? Will those buyers feel the same way about your idea of justice? Is it reasonable to negatively judge those who don’t “comply” with such a conception? Are such people immoral? Are they not in fact simply more assertive of what they see as their right to choose? None of these can be easily settled....
I am one of those people who have an overactive sensitivity to fairness, and at times go to extremes to make sure justice “happens”, and can’t help but point out double standards and (real or perceived) hypocrisy. However… I’ll be honest—this is generally something that decreases the net quality of my life. Not least because injustice (in the broad sense) is omnipresent and highly prevalent. When that isn’t the problem, the next issue that rears its head repeatedly is how to define justice in the first place.
You mention the case of getting too much change back… this is far from a clear cut case. One could defend the position that it is part of the clerk’s required diligence to ensure he gives you the correct amount of change, and that if he does not, it is for him to deal with. It seems defensible to claim that the chance of him “learning his lesson” may be better if you do not tell him of the mistake. (This might be different if the option “tell about mistake, keep the money” were available, but it kind of isn’t, for a quite interesting set of reasons). I suspect that, all things considered, going back to the clerk to return the excess change is probably indeed the most correct thing to do, but certainly not unambiguously so.
To introduce another interesting shade of grey... You go out to buy some cherries, and it is possible to select the fruits yourself. Do you think it’s OK to manually select the best ones from the stash (assuming one’s hands are meant to be the ‘tools’ for selection)? It is very hard to adequately define what “just” means in this case—it is hard to defend the position that you must share in the crappy fruits, but equally you might believe it is unfair to other clients to pick out the nicest ones. But then again, first come, first served isn’t exactly controversial either...
I strongly believe in justice and fairness. In the accurate and equitable assignation of responsibility, and admitting one’s share. Yet the temptation not to do so will always remain, mostly because it very often simply costs less. And at times I do wonder if my ideation of justice basically translates into being the sucker ;)
One could be forgiven for getting the feeling...
I think this whole problem is a bit more nuanced than you seem to suggest here. I can’t help but at least tentatively give some credit to the assertion that LW is, for lack of a better term, mildly elitist. To be sure, for perhaps roughly the right reasons, but being elitist in whatever measure tends to be detrimental to the chances of getting your point across, especially if it needs to be elucidated to the very folks you’re elitist towards ;) Not many things are judged more repulsive than being made to feel a lesser person… I’d say it’s pretty close to a cultural universal.
It’s not right to assert that if one does not agree with your suggestion that stupidity is to be seen as a type of affliction of the same type or category as mental illness, one therefore is disparaging mental illness as shameful; this is a false dichotomy. One can disagree with you for other reasons, not least for reasons as remote from shame as evolution… it is nowhere close to a given that nature cares even a single bit about whatever might end up being called intelligence. You will note that most creatures seem to have just the right CPU for their “lifestyle”, and while it might be easy for us to imagine how, say, a dog might benefit from being smarter, I’d sooner call that a round-about way of anthropomorphizing than a probable truth.
Exhibit B seems to be the most convincing observation that, by the look of things, wanting to “go for max IQ” is hardly on evolution’s To-Do list… us, primates, dolphins and a handful of birds aside, most creatures seem perfectly content with being fairly dim and instinct-driven, if the behaviours and habits exhibited by animals are a reliable indication ;) I’ll be quiet about the elephant in the room that the vast majority of our important motivations are emotional and non-rational, too...
What’s more—and I am actually curious how you will respond to this… it could be said that animals, all animals, are more rational than human beings; after all, they don’t waste “CPU cycles” on beliefs, vague whataboutery, or theories about how to “deal” with the less intellectually gifted among their kind ;) So while humans might be walking around with a Dual 12-core Xeon in their heads, at any given moment 8 cores are basically wasting cycles on barely productive nonsense; a chicken might just have a Pentium MMX, but it is 100% dedicated to the task of fetching the next worm and ensuring the right location to drop that egg without cracking it...
… but malice is the “force” that actually creates “evil” in the first place. I think the saying “Don’t assume malice where stupidity is sufficient [to explain an observation]” is meant to make the problem seem less bad, not worse...
At the heart of the intractability of stupidity lies the Dunning-Kruger problem. It can be an impossible challenge to make an ignorant person:
- admit they are ignorant;
- in the process, realize that most of the beliefs and the reasons they had for holding them were entirely wrong;
- despite having just realized they need a comprehensive world-view revision, find the courage and desire to become more educated, while:
- having above-average difficulty acquiring new, hitherto unknown and/or overly complex material.
Oh, I don’t really “do” Twitter actually… nor Facebook, for about a year now. Now and again one of my friends shares a tweet and sometimes it can be an interesting start of a topic but… though I’ve been doing the Internet since 1995, Twitter is just too vacuous for my liking. In response, now and again I’ll send a 1-hour+ YouTube link back ;)
And yes of course, multiple points of view need not bring one close to the Truth, however...
In a large number of narratives, especially, it seems, the most relevant ones, finding the truth may be practically impossible, and sometimes there simply is no truth, or at least not just one. To some people aspect X is irrelevant, others might believe it crucial. This news network claims Witness Y is credible, some other one calls him a corporate shill. Unless you would be able to get into the minds of each human involved, what you end up believing is the truth will always be an approximation.
Take for instance the increasingly common phenomenon of “influencers” (shudder), bloggers or journalists digging into the obscure past of someone who is having his/her 15 minutes of fame, unearthing what they posted back in, say, 2004 on some now-defunct blog, and bleating out on Twitter anything remotely controversial or tentatively indicative of hypocrisy. I doubt you will ever settle the debate whether people can genuinely change or not. I know that I’ve had views I no longer hold today—both “benign” and “tough love” ones… and while previously held views will always carry a familiarity bias, they can actually be genuinely a thing of the past. Yet if they are found online and are at odds with what I would be saying today, poof, there goes half of my credibility...
And—getting multiple points of view at the very least will give you some idea why certain people apparently seem to find a given topic or story important. The net outcome may well be that you will be further from the truth, swimming in a sea of conflicting interests… and yet, still understand the nature of the issue in more detail :)
Okay—what I would want to ask is—is it reasonable to expect that a government with billions of dollars to spend on intelligence gathering, data analysis and various experts must be meeting at least one of these criteria:
- It has access to high quality information about the actual state of affairs in most relevant domains
- It is grossly incompetent or corrupt and the data is not available in an actionable format
- It willfully ignores the information, and some of its members actively work to prevent the information reaching the right people
The coronavirus is a good example. By the time it “arrived” in the USA, you can be all but certain the US government could have had a 20+ page detailed report lying on the desk of every secretary giving very actionable figures and probabilities about the threat at hand. The information would be incomplete, of course, but definitely enough to get busy in a nominally effective way.
While I know that Trump is said to have disbanded various institutions that work to anticipate and prepare for pandemics, still it would seem to me that a huge apparatus like the US government should be able to collect and otherwise infer a significant amount of information that would allow it to mount at least a reasonable response.
Or to phrase my question differently—should it be seen as an act of gross incompetence that a resourceful and powerful government like the one in the US failed to act upon the information it either really did have, or should have prioritized obtaining? How is “We just had no idea” not a ludicrous and frankly preposterous position to be in, given the possibilities?
And of course there can be cases where even the US government can be caught off guard, make a set of misguided institutional choices—sure :) But I would say this happens very, very rarely, certainly not as often as the current administration seems to suggest.
Yeah alright… I guess you could call that passive casual observance :)
It ain’t heaven if there are things that one should do to “remain a member”, or to (continue) enjoy(ing) the best QoS. Surely, the very concept of duty, demand or task is anathema to calling a place heaven. Likewise, being cared for ought not to be a concern either, for it implies there exists the possibility of not being cared for—again, surely not a feature of anything remotely resembling a heaven.
Indeed I would go so far as to say that to have preferences (and to entertain any kind of doubt about whether they are met/fulfilled) has no place in any environment that hopes to call itself a heaven. The very definition of heaven is a place where one has instant and complete gratification of every whim, at no cost, without limits. It’s a poorly designed heaven if even God gains something from being prayed to. What could that possibly be...?
---
The truth of course is that, taken to its logical ultimate conclusion, a place that meets the literal and complete definition of heaven would quite likely be a Simulated Reality in which one is constantly “infused” with bliss and feelings of euphoria and comfort. A kind of… happy-drug drip feed.
Should it even be possible to WANT (and to thus be in the non-optimal state/condition which prompted one to want something not currently being had) anything in Heaven at all...? Is a WANT not synonymous with (the feeling of) missing something and thus a “defect”...an imperfection?