Certainly. But evolved intelligences do not optimize for intelligence. They optimize for perpetuation of the genome. Constructed intelligence allows for systems that are optimized for intelligence. This is what I was getting at when I mentioned that evolution does not optimize for what we optimize for; there is no evolutionary equivalent of the atom bomb or the Saturn V rocket.
So mentioning “ways that can go wrong” and reinforcing that point with evolutionary precedent seems to be rather missing the point. It’s an apples-to-oranges comparison.
After all: even if there are diminishing returns on the energy invested to achieve a more intelligent design, once that new design is achieved it can be replicated essentially indefinitely.
Taking energy to get there isn’t what is relevant in that context. The relevant issue is that being intelligent takes up a lot of resources. This is an important distinction. And the fact that evolution doesn’t optimize for intelligence but for other goals isn’t really relevant, given that an AGI presumably won’t optimize itself for intelligence (a paperclip maximizer, for example, will make itself just as intelligent as it estimates is optimal for making paperclips everywhere). The point is that based on the data from one very common optimization process, it seems that intelligence is so resource intensive generally that being highly intelligent is simply very rarely worth it. (This evidence is obviously weak. The substrate matters, as do other issues. But the basic point is sound.)
Note incidentally that most of the comment was not about evolved intelligences. This is not an argument occurring in isolation. See especially the other two remarks made.
> Note incidentally that most of the comment was not about evolved intelligences. This is not an argument occurring in isolation. See especially the other two remarks made.
Quite correct, but you’re still making the fundamental error of extrapolating from evolution to non-evolved intelligence without first correcting for the “aims”/“goals” of evolution as compared to those of designed intelligences and their designers.
> given that an AGI presumably won’t optimize itself for intelligence
By the very fact that it is designed for the purpose of being intelligent, any AGI conceivably constructed by humans would be optimized for intelligence; this seems a rather routinely heritable phenomenon. While a paperclip optimizer itself might not seek to optimize itself for intelligence, if we postulate that it is in the business of making a ‘smarter’ paperclip optimizer, it will optimize for intelligence. Of course, we cannot know the agency of any given point in the sequence, that is, whether it will “make the choice” to recurse upwards.
That being said, there’s a real non sequitur here in your dialogue, as far as I can see. “The relevant issue is that being intelligent takes up a lot of resources.” Compared to what, exactly? Roughly 2/3 of our caloric intake goes to our brain. Our brain is not well-optimized for intelligence. As for “[...] given that an AGI presumably won’t optimize itself for intelligence”: whatever designed that AGI would have.
> The point is that based on the data from one very common optimization process, it seems that intelligence is so resource intensive generally that being highly intelligent is simply very rarely worth it. (This evidence is obviously weak. The substrate matters, as do other issues. But the basic point is sound.)
I strongly disagree. The basic point is deeply flawed. I’ve already tried to say this repeatedly: evolution does not optimize for intelligence. Pointing at evolution’s history with intelligence and saying, “aha! Optimization finds intelligence expensive!” is missing the point altogether: evolution should find intelligence expensive. Intelligence doesn’t match what evolution “does”. Evolution ‘seeks’ stable local minima to perpetuate replication of the genome. That is all it does. Intelligence isn’t integral to that process; humans didn’t need to be any more intelligent than we are in order to reach our local minimum of perpetuation, so we didn’t evolve any more intelligence.
To attempt to extrapolate from that to what intelligence-seeking designers would achieve is missing the point on a very deep level: to extrapolate correctly from the ‘lessons’ evolution would ‘teach us’, one would have to postulate a severe selection pressure favoring intelligence.
I don’t see how you’re doing that.
> > Note incidentally that most of the comment was not about evolved intelligences. This is not an argument occurring in isolation. See especially the other two remarks made.
>
> Quite correct, but you’re still making the fundamental error of extrapolating from evolution to non-evolved intelligence without first correcting for the “aims”/“goals” of evolution as compared to those of designed intelligences and their designers.
I don’t understand your remark. No one is going to make an AGI whose goal is to become as intelligent as possible. Evolution is thus in this context one type of optimizer. Whatever one is optimizing for, becoming as intelligent as possible won’t generally be the optimal thing to do, even if becoming more intelligent does help it achieve its goals.
> No one is going to make an AGI whose goal is to become as intelligent as possible.
I would.
> Evolution is thus in this context one type of optimizer.
To which intelligence is extraneous.
Consider then an optimizer which focuses on agency rather than perpetuation. To agency, intelligence is instrumental. By dint of being artificial and designed to be intelligent, whatever goalset is integrated into it would have to value that intelligence.
It is not possible to intentionally design a trait into a system without that trait being valuable to the system.
Intelligence is definitionally instrumental to an artificial general intelligence. Given sufficient time, any AGI capable of constructing a superior AGI will do so.
> > No one is going to make an AGI whose goal is to become as intelligent as possible.
>
> I would.
Are you trying to make sure a bad Singularity happens?
> > Evolution is thus in this context one type of optimizer.
>
> To which intelligence is extraneous.
No, intelligence is one tool among many which the blind idiot god uses. It is a particularly useful tool for species that can have widely varied environments. Unfortunately, it is a resource intensive tool. That’s why Azathoth doesn’t use it except in a few very bright species.
> Consider then an optimizer which focuses on agency rather than perpetuation. To agency, intelligence is instrumental. By dint of being artificial and designed to be intelligent, whatever goalset is integrated into it would have to value that intelligence.
You seem to be confusing two different notions of intelligence. One is the either/or “is it intelligent” and the other is how intelligent it is.
> Are you trying to make sure a bad Singularity happens?
If Logos is seeking it, then I assume it is not something that he considers bad. Presumably because he thinks intelligence is just that cool. Pursuing the goal necessarily results in human extinction and tiling the universe with computronium, and I call that Bad, but he should still answer “No”. (This applies even if I go all cognitive-realism on him and say he is objectively wrong and that he is, in fact, trying to make sure a Bad Singularity happens.)
> Pursuing the goal necessarily results in human extinction and tiling the universe with computronium, and I call that Bad, but he should still answer “No”.
As I’ve asked elsewhere, with resoundingly negative results:
Why is it automatically “bad” to create an AGI that causes human extinction? If I value ever-increasingly-capable sentience… why must I be anthropocentric about it? If I were to view recursively improving AGI that is sentient as a ‘child’ or ‘inheritor’ of “the human will”, then why should it be so awful if humanity were to be rendered obsolete or even extinct by it?
I do not, furthermore, define “humanity” in so strict terms as to require that it be flesh-and-blood to be “human”. If our FOOMing AGI were a “human” one—I personally would find it in the range of acceptable outcomes if it converted the available carbon and silicon of the earth into computronium.
Sure, it would suck for me—but those of us currently alive already die, and over a long enough timeline the survival rate for even the clinically immortal drops to zero.
I ask this question because I feel that it is relevant. Why is “inheritor” non-Friendly AGI “bad”?
> (This applies even if I go all cognitive-realism on him and say he is objectively wrong and that he is, in fact, trying to make sure a Bad Singularity happens.)
Caveat: It is possible for me to discuss my own motivations using someone else’s valuative framework. So context matters. The mere fact that I would say “No” does not mean that I could never say “Yes—as you see it.”
Honest question: If our flesh were dissolved overnight and we instead were instantiated inside a simulated environment—without our permission—would you consider this a Friendly outcome?
> No, intelligence is one tool among many which the blind idiot god uses. It is a particularly useful tool for species that can have widely varied environments. Unfortunately, it is a resource intensive tool.
Given the routes to general intelligence available to “the blind idiot god” due to the characteristics it does optimize for. We have a language breakdown here.
The reason I said intelligence is ‘extraneous’ to evolution was because evolution only ‘seeks out’ local minima for perpetuation of the genome. What specific configuration a given local minimum happens to be is extraneous to the algorithm. Intelligence is in the available solution space but it is extraneous to the process. Which is why generalists often lose out to specialists in limited environments. (Read: the pygmy people who “went back to the trees”.)
Intelligence is not a goal to evolution; it is extraneous to its criteria. Intelligence is not the metric by which evolution judges fitness. Successful perpetuation of the genome is. Nothing else.
> You seem to be confusing two different notions of intelligence. One is the either/or “is it intelligent” and the other is how intelligent it is.
Not at all. Not even remotely. I’m stating that any agent—a construct (biological or synthetic) that can actively select amongst variable results; a thing that makes choices—inherently values intelligence; the capacity to ‘make good choices’. A more-intelligent agent is a ‘superior’ agent, instrumentally speaking.
Any time there is an intelligent designed agent, the presence of said intelligence is a hard indicator of the agent valuing intelligence. Designed intelligences are “designed to be intelligent”. (This is tautological.) This means that whoever designed that intelligence spent effort and time on making it intelligent. That, in turn, means that its designer valued that intelligence. Whatever goalset the designer imparted to the designed intelligence is, thus, a goalset that requires intelligence to be effected.
Which in turn means that intelligence is definitionally instrumentally useful to a designed intelligence.
> It is not possible to intentionally design a trait into a system without that trait being valuable to the system.
I’m not sure what you mean here.
What gives you trouble with it? Try rephrasing it and I’ll ‘correct’ said rephrasing towards my intended meaning as best I can, perhaps? I want to be understood. :)
> Why is “inheritor” non-Friendly AGI “bad”?

It isn’t automatically bad. I just don’t want it. This is why I said your answer is legitimately “No”.
Fair enough.
> Honest question: If our flesh were dissolved overnight and we instead were instantiated inside a simulated environment—without our permission—would you consider this a Friendly outcome?
Potentially, depending on the simulated environment.
Assume Earth-like or video-game-like (in the latter case including ‘respawns’).
Video game upgrades! Sounds good.
I believe you mean, “I’m here to kick ass and chew bubblegum… and I’m all outta gum!”