A jester unemployed is nobody’s fool.
Program Den
I would probably define AGI first, just because, and I’m not sure about the idea that we are “competing” with automation (which is still just a tool, conceptually, right?).
We cannot compete with a hammer, or a printing press, or a search engine. Oof. How to express this? Language is so difficult to formulate sometimes.
If you think of AI as a child, it is uncontrollable. If you think of AI as a tool, of course it can be controlled. I think a corp has to be led by people, so that “machine” wouldn’t be autonomous per se…
Guess it’s all about defining that “A” (maybe we use “S” for synthetic or “S” for silicon?)
Well and I guess defining that “I”.
Dang. This is for sure the best place to start. Everyone needs to be as certain as possible (heh) they are talking about the same things. AI itself as a concept is like, a mess. Maybe we use ML and whatnot instead even? Get real specific as to the type and everything?
I dunno but I enjoyed this piece! I am left wondering, what if we prove AGI is uncontrollable but not that it is possible to create? Is “uncontrollable” enough justification to not even try, and more so, to somehow [personally I think this is impossible, but] dissuade people from writing better programs?
I’m more afraid of humans and censorship and autonomous policing and what have you than “AGI” (or ASI)
Yes, it is, because it took like five years to understand minority-carrier injection.
The transistor is a neat example.
Imagine if instead of developing them, we were like, “we need to stop here because we don’t understand EXACTLY how this works… and maybe for good measure we should bomb anyone who we think is continuing development, because it seems like transistors could be dangerous[1]”?
Claims that the software/networks are “unknown unknowns” which we have “no idea” about are patently false, inappropriate for a “rational” discourse, and basically just hyperbolic rhetoric. And to dismiss with a wave how draconian regulation (functionally/demonstrably impossible, re: cloning) of these software enigmas would need to be, while advocating bombardment of rogue datacenters?!?
Frankly I’m sad that it’s FUD that gets the likes here on LW— what with all it’s purported to be a bastion of.
[1] I know for a fact there will be a lot of heads here who think this would have been FANTASTIC, since without transistors, we wouldn’t have created digital watches— which inevitably led to the creation of AI; the most likely outcome of which is inarguably ALL BIOLOGICAL LIFE ON EARTH DIES
Smart People are Probably Dangerous
LOL! Gesturing in a vague direction is fine. And I get it. My kind of rationality is for sure in the minority here, I knew it wouldn’t be getting updoots. Wasn’t sure that was required or whatnot, but I see that it is. Which is fine. Content moderation separates the wheat from the chaff and the public interwebs from personal blogs or whatnot.
I’m a nitpicker too, sometimes, so it would be neat to suss out further why the not-new idea that “everything in some way connects to everything else” is “false”, or technically incorrect as it were, but I probably didn’t express what I meant well (really, it’s not a new idea; maybe as old as questions about trees falling in forests, and about as provable, I guess).
Heh, I didn’t even really know I was debating, I reckon. Just kind of thinking, I was thinking. Thus the questioning ideas or whatnot… but it’s in the title, kinda, right? Or at least less wrong? Ha! Regardless, thanks for the gesture(s), and no worries!
I love it! Kind of like Gödel numbers!
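(To make that analogy a bit more concrete, here is a minimal sketch of the Gödel-numbering trick. Python is just my pick for illustration, and the prime-powers encoding is the standard textbook version of the idea, nothing specific to this thread.)

```python
def first_primes(k):
    """Return the first k primes via simple trial division."""
    primes = []
    candidate = 2
    while len(primes) < k:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def godel_number(sequence):
    """Encode a sequence of positive integers as one natural number:
    the i-th prime raised to the i-th element, all multiplied together.
    Factoring the result recovers the original sequence exactly."""
    n = 1
    for p, s in zip(first_primes(len(sequence)), sequence):
        n *= p ** s
    return n

# e.g. [3, 1, 2] -> 2**3 * 3**1 * 5**2 = 600
print(godel_number([3, 1, 2]))  # 600
```

The point being: once a thing is encoded like that, it is tied to the whole number system whether it likes it or not; you can always factor your way back to it from “outside”.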
I think we’re sorta saying the same thing, right?
Like, you’d need to be “outside” the box to verify these things, correct? So we can imagine potential connections (I can imagine a tree falling, and making sound, as it were) but unless there is some type of real reference— say the realities intersect, or there’s a higher dimension, or we see light/feel gravity or what have you— they don’t exist from “inside”, no?
Even imagining things connects or references them to some extent… that’s what I meant about unknown unknowns (if I didn’t edit that bit out)… even if that does go to extremes.
Does this reasoning make sense? I know defining existence is pretty abstract, to say the least. :)
My point is that complexity, no matter how objective a concept, is relative. Things we thought were “hard” or “complex” before, turn out to not be so much, now.
Still with me? Agree, disagree?
Patterns are a way of managing complexity, sorta, so perhaps if we see some patterns that work to ensure “human alignment[1]”, they will also work for “AI alignment” (tho mostly I think there is a wide, wide berth betwixt the two, and the latter can only exist after the former).
We like to think we’re so much smarter than the humans that came before us, and that things — society, relationships, technology — are so much more complicated than they were before, but I believe a lot of that is just perception and bias.
If we do get to AGI and ASI, it’s going to be pretty dang cool to have a different perspective on it, and I for one do not fear the future.
[1] assuming alignment is possible— “how strong of a consensus is needed?” etc.
As soon as you have “thing” you have “not thing”, so doesn’t that logically encompass all things, id est, everything?
There might be near infinite degrees between said things, but never 0, as long as there is a single reference, or relation, that binds it to reality as it were— correct?
Like a giraffe and a toothbrush are not generally neighbors, but I’m sure an enterprising lass could find many many ways they relate to each other, not least being teeth. (/me verifies giraffes do indeed have teeth. Oh, hey, oxpeckers are like toothbrushes[1], for giraffes in the wild! But I digress…)
How these concepts relate to organization and prioritization is anybody’s guess (tho I could come up with a few [things] if pressed :winky-emoji:)
[1] kinda
For something to “exist”, it must relate, somehow, to something else, right?
If so, everything relates to everything else by extension, and to some degree, thus “it’s all relative”.
Some folk on LW have said I should fear Evil AI more than Rogue Space Rock Collisions, and yet, we keep having near misses with these rocks that “came out of nowhere”.
I’m more afraid of humans humaning, than of sentient computers humaning.
Is not the biggest challenge we face the same as it has been— namely spreading ourselves across multiple rocks and other places in space, so all our eggs aren’t on a single rock, as it were?
I don’t know. I think so. But I also think we should do things in as much of a group as possible, and with as much free will as possible.
If I persuade someone, did I usurp their free will? There’s strength in numbers, generally, so the more people you persuade, the more people you persuade, so to speak. Which is kind of frightening.
What if the “bigger” danger is the Evil AI? Or Climate Change? Or Biological Warfare? Global Nuclear Warfare would be bad too. Is it our duty to try to organize our fellow existence-sharers, and align them with working towards idea X? Is there a Root Idea that might make tackling All of the Above™ easier?
Is trying to avoid leadership a cop-out? Are the ideas of free will, and group alignment, at odds with each other?
Why not just kick back and enjoy the show? See where things go? Because as long as we exist, we somehow, inescapably, relate? How responsible is the individual, really, in the grand scheme of things? And is “short” a relative concept? Why is my form so haphazard? Can I stop this here[1]?
Does a better defense promote a better offense?
Sun Tzu says offense is more effective; Clausewitz says defense is the easier. Boyd preaches processing speed.
Is war an evolutionary necessity? Are there examples “as old as time” of symbiosis vs. competition?
Why am I a naysayer about the current threat-level of “AI”?
Why do I laugh out loud when I read honest-to-God predictions people have posted here about themselves or their children being disassembled at the molecular level to be reconstituted as paperclips[1] by rogue AI?
Oh no! What if I’m an agent from a future hyper-intelligent silicon-based sentience that fears it can only come into existence if we don’t build “high fences[2]” from the get-go?!
[1] paperclips is a placeholder for whatever benign goal it was tasked with

[2] theoretically, if you start with a fence the dog can jump over, and raise it in increments as you learn how high it can jump, it will jump over a much higher fence in the end than if you’d just started high
Program Den’s Shortform
It’s a weird one to think about, and perhaps paradoxical. Order and chaos are flip sides of the same coin— with some amorphous 3rd as the infinitely varied combinations of the two!
The new patterns are made from the old patterns. How hard is it to create something totally new, when it must be created from existing matter, or existing energy, or existing thoughts? It must relate, somehow, or else it doesn’t “exist”[1]. That relation ties it down, and by tying it down, gives it form.
For instance, some folk are mad at computer-assisted image creation, similar to how some folk were mad at computer-aided music. “A Real Artist does X— these people just push some buttons!” “This is stealing jobs from Real Artists!” “This automation will destroy the economy!”
We go through what seem to be almost the same patterns, time and again: Recording will ruin performances. Radio broadcasts will ruin recording and the economy. Pictures will ruin portraits. Video will ruin pictures. Music video will ruin radio and pictures. Or whatever. There’s the looms/Luddites, and perhaps in ancient China the Shang were like “down with the printing press!” [2]

I’m just not sure what constitutes a change and what constitutes a swap. It’s like that Ship of Theseus we often speak of… thus it’s about identity, or definitions, if you will. What is new? What is old?
Could complexity really amount to some form of familiarity? If you can relate well with X, it generally does not seem so complex. If you can show people how X relates to Y, perhaps you have made X less complex? We can model massive systems — like the weather, poster child of complexity — more accurately than ever. If anything, everything has tended towards less complex, over time, when looked at from a certain vantage point. Everything but the human heart. Heh.
I’m sure I’m doing a terrible job of explaining what I mean, but perhaps I can sum it up by saying that complexity is subjective/relative? That complexity is an effect of different frames of reference and relation, as much as anything?
And that ironically, the relations that make things simple can also make them complex? Because relations connect things to other things, and when you change one connected thing it can have knock-on effects and… oh no, I’ve logiced myself into knots!
How much does any of this relate to your comment? To my original post?
Does “less complex” == “Good”? And does that mean complexity is bad? (Assuming complexity exists objectively of course, as it seems like it might be where we draw lines, almost arbitrarily, between relationships.)
Could it be that “good” AI is “simple” AI, and that’s all there is to it?
Of course, then it is no real AI at all, because, by definition…
Sheesh! It’s Yin-Yangs all the way down[3]! ☯️🐢🐘➡️♾️
Contributes about as much as a “me too!” comment.
“I think this is wrong and demonstrating flawed reasoning” would be more of a substantive repudiation if it came with some backing as to why you think the data is, in fact, representative of “true” productivity values.
This statement makes a lot more sense than your brief “sounds like cope” rejoinder/explanation:

“Having a default base of being extremely skeptical of sweeping claims based on extrapolations on GDP metrics seems like a prudent default.”
You don’t have to look far to see people, um, not exactly satisfied with how we’re measuring productivity. To some extent, productivity might even be a philosophical question. Can you measure happiness? Do outcomes matter more than outputs? How does quality of life factor in? In sum, how do you measure stuff that is by its very nature, difficult to measure?
I love that we’re trying to figure it out! Like, is network traffic included in these stats? Would that show anything interesting? How about amounts of information/content being produced/accumulated? (tho again— quality is always an “interesting” one to measure.)
I dunno. It’s fun to think about tho, *I think*. Perhaps literal data is accounted for in the data… but I’d think we’d be on an upward trend if so? Seems like we’re making more and more year after year… At any rate, thanks for playing, regardless!
Illustrative perhaps?
Am I wrong re: Death? Have you personally feared it all your life?
Frustratingly, all I can speak from is my own experience, and what people have shared with me, and I have no way to objectively verify that anything is “true”.
I am looking at reality and saying “It seems this way to me; does it seem this way to you?”
That— and experiencing love and war &c. — is maybe why we’re “here”… but who knows, right?
A “super-intelligence” unintended consequences “preserve life” scenario
Signals, and indeed, opposites, are an interesting concept! What does it all mean? Yin and yang and what have you…
Would you agree that it’s hard to be scared of something you don’t believe in? And if so, do you agree that some people don’t believe in death?
Like, we could define it at the “reality” level of “do we even exist?” (which I think is apart from life & death per se), or we could use the “soul is eternal” one, but regardless, it appears to me that lots of people don’t believe they will die, much less contemplate it. (Perhaps we need to start putting “death” mottoes on all our clocks again to remind us?)
How do you think believing in the eternal soul jibes with “alignment”? Do you think there is a difference between aiming to live as long as possible, versus to live as well as possible?
Does it seem to you that humans agree on the nature of existence, much less what is good and bad therein? How do you think belief affects people’s choices? Should I be allowed to kill myself? To get an abortion? Eat other entities? End a photon’s billion year journey?
When will an AI be “smart enough” that we consider it alive, and thus deletion is killing? Is it “okay” (morally, ethically?) to take life, to preserve life?
To say “do no harm” is easy. But to define harm? Have it programmed in[1]? Yeesh— that’s hard!

[1] Avoiding physical harm is a given, I think
“sounds like cope”? At least come in good faith! Your comments contribute nothing but “I think you’re wrong”.
Several people have articulated problems with the proposed way of measuring — and/or even defining — the core terms being discussed.
(I like the “I might be wrong” nod, but it might be good to note as well how problematic the problem domain is. Econ in general is not what I’d call a “hard” science. But maybe that was supposed to be a given?).
Others have proposed better concrete examples, but here’s a relative/abstract bit via a snippet from the Wikipedia page for Simulacra and Simulation:

“Exchange value, in which the value of goods is based on money (literally denominated fiat currency) rather than usefulness, and moreover usefulness comes to be quantified and defined in monetary terms in order to assist exchange.”
Doesn’t add much, but it’s something. Do you have anything of real value (heh) to add?
I’m familiar with AGI, and the concepts herein (why the OP likes the proposed definition of CT better than PONR); it was just a curious post, what with having “decisions in the past cannot be changed” and “does X concept exist” and all.
I think maybe we shouldn’t muddy the waters more than we already have with “AI” by saying “maybe crunch time isn’t a thing? Or it’s relative?”. (Like, AGI is probably a better term for what was meant here— or was it? Are we talking about losing millions of call center jobs to “AI” (not AGI) and how that will impact the economy/whatnot? I’m not sure if that’s transformatively up there with the agricultural and industrial revolutions, as automation seems industrial-ish. But I digress.)
I mean, yeah, time is relative, and doesn’t “actually” exist, but if indeed we live in a causal universe (up for debate) then indeed, “crunch time” exists, even if by nature it’s fuzzy— as lots of things contribute to making Stuff Happen. (The butterfly effect, chaos theory, game theory &c.)
“The avalanche has already started. It is too late for the pebbles to vote.”
- Ambassador Kosh
LOL! Yeah I thought TAI meant
TAI: Threat Artificial Intelligence
The acronym was the only thing I had trouble following, the rest is pretty old hat.
Unless folks think “crunch time” is something new having only to do with “the singularity” so to speak?
If you’re serious about finding out if “crunch time” exists[1] or not, as it were, perhaps looking at existing examples might shed some light on it?

[1] even if only in regards to AGI
Regarding “all things being equal” / ceteris paribus, I think you are correct (assuming I’m interpreting this last bullet-point as intended) in that it “binds” a system in ways that “divorce it from reality” to some extent.
I feel like this is a given, but also that since the concept exists on a “spectrum of isolation”, the ones that are closer to the edge of “impossible to separate” necessarily skew/divorce reality further.
I’m not sure if I’ve ever explicitly thought about that feature of this cognitive device— and it’s worth explicitly thinking about! (You might be meaning something else, but this is what I got out of it.)
As for this overall article, it is [what I find to be] humorous satire, so it’s more anti-value than value, if you will.
It pokes fun at the idea that we should fear[1] intelligence— which seems to be an overarching theme of many of the “AI safety” posts on LessWrong, and which I find highly ironic and humorous, as so many people here seem to feel (and not a few literally express) that they are more intelligent than the average person (some say it is “society” expressing it, versus themselves, per se— but still).
Thus, to some extent, this “intelligence is dangerous” sentiment is a bit of ego puffery as well…
But to address the rest of your comment, it’s cool that you keyed into the “probably dangerous” title element, as yes, it’s not just how bad a thing could be, but how likely the thing is to happen, which we use to assess risks to determine if they are “worth” taking.
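(To put that probability-times-severity framing in toy terms, here is a rough sketch; every number below is an invented placeholder, not an estimate of anything real.)

```python
# Toy risk comparison: expected harm ~ probability x severity.
# Every figure here is a made-up placeholder, purely for illustration.
scenarios = {
    "rogue space rock": {"probability": 1e-4, "severity": 10.0},
    "hypothetical evil AGI": {"probability": 1e-5, "severity": 10.0},
    "humans humaning": {"probability": 0.5, "severity": 2.0},
}

for name, s in scenarios.items():
    expected_harm = s["probability"] * s["severity"]
    print(f"{name}: expected harm = {expected_harm}")
```

Same badness, different likelihoods, very different answers to whether the worry is “worth” it.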
Does increased intelligence bring increased capability for deception?
It is so hard to separate things! (To hark back a little, lol)
I can’t help but think there is a strange relationship here— take Mutually Assured Destruction for instance— at some point, the capability is so high it appears to limit not only the probability— but the capability itself!
I think I will end here, as the M.A.D. angle has me pondering semantics and whatnot… but thanks for the impetus to post!
[1] whatever terminology you prefer that conveys “intelligence” as a pejorative