No one is going to make an AGI whose goal is to become as intelligent as possible.
I would.
Evolution is thus in this context one type of optimizer.
To which intelligence is extraneous.
Consider then an optimizer which focuses on agency rather than perpetuation. To agency, intelligence is instrumental. By dint of being artificial and designed to be intelligent, whatever goalset is integrated into it would have to value that intelligence.
It is not possible to intentionally design a trait into a system without that trait being valuable to the system.
Intelligence is definitionally instrumental to an artificial general intelligence. Given sufficient time, any AGI capable of constructing a superior AGI will do so.
No one is going to make an AGI whose goal is to become as intelligent as possible.
I would.
Are you trying to make sure a bad Singularity happens?
Evolution is thus in this context one type of optimizer.
To which intelligence is extraneous.
No, intelligence is one tool among many which the blind idiot god uses. It is a particularly useful tool for species that can have widely varied environments. Unfortunately, it is a resource-intensive tool. That’s why Azathoth doesn’t use it except in a few very bright species.
Consider then an optimizer which focuses on agency rather than perpetuation. To agency, intelligence is instrumental. By dint of being artificial and designed to be intelligent, whatever goalset is integrated into it would have to value that intelligence.
You seem to be confusing two different notions of intelligence. One is the either/or “is it intelligent” and the other is how intelligent it is.
It is not possible to intentionally design a trait into a system without that trait being valuable to the system.
Are you trying to make sure a bad Singularity happens?
If Logos is seeking it then I assume it is not something that he considers bad. Presumably because he thinks intelligence is just that cool. Pursuing the goal necessarily results in human extinction and tiling the universe with computronium, and I call that Bad, but he should still answer “No”. (This applies even if I go all cognitive-realism on him and say he is objectively wrong and that he is, in fact, trying to make sure a Bad Singularity happens.)
Pursuing the goal necessarily results in human extinction and tiling the universe with computronium, and I call that Bad, but he should still answer “No”.
As I’ve asked elsewhere to resoundingly negative results:
Why is it automatically “bad” to create an AGI that causes human extinction? If I value ever-increasingly-capable sentience… why must I be anthropocentric about it? If I were to view a recursively improving, sentient AGI as a ‘child’ or ‘inheritor’ of “the human will”, then why should it be so awful if humanity were to be rendered obsolete or even extinct by it?
I do not, furthermore, define “humanity” in so strict terms as to require that it be flesh-and-blood to be “human”. If our FOOMing AGI were a “human” one—I personally would find it in the range of acceptable outcomes if it converted the available carbon and silicon of the earth into computronium.
Sure, it would suck for me—but those of us currently alive already die, and over a long enough timeline the survival rate for even the clinically immortal drops to zero.
I ask this question because I feel that it is relevant. Why is “inheritor” non-Friendly AGI “bad”?
(This applies even if I go all cognitive-realism on him and say he is objectively wrong and that he is, in fact, trying to make sure a Bad Singularity happens.)
Caveat: It is possible for me to discuss my own motivations using someone else’s valuative framework. So context matters. The mere fact that I would say “No” does not mean that I could never say “Yes—as you see it.”
Honest question: If our flesh were dissolved overnight and we instead were instantiated inside a simulated environment—without our permission—would you consider this a Friendly outcome?
No, intelligence is one tool among many which the blind idiot god uses. It is a particularly useful tool for species that can have widely varied environments. Unfortunately, it is a resource-intensive tool.
Only given the routes to general intelligence available to “the blind idiot god”, due to the characteristics it does optimize for. We have a language breakdown here.
The reason I said intelligence is ‘extraneous’ to evolution is that evolution only ‘seeks out’ local optima for perpetuation of the genome. What specific configuration a given local optimum happens to be is extraneous to the algorithm. Intelligence is in the available solution space, but it is extraneous to the process. Which is why generalists often lose out to specialists in limited environments. (Read: the pygmy people who “went back to the trees”.)
Intelligence is not a goal to evolution; it is extraneous to its criteria. Intelligence is not the metric by which evolution judges fitness. Successful perpetuation of the genome is. Nothing else.
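To make that single-criterion-optimizer framing concrete, here is a minimal toy sketch, entirely my own illustration with invented trait names and made-up environment weights: a blind hill-climber whose only score is replication success. Intelligence is just one dimension of the search space, it carries a metabolic cost, and it gets selected only when the environment happens to reward it enough to pay for itself.

```python
import random

# Toy model only: invented trait names and made-up weights.
TRAITS = ["intelligence", "fecundity", "camouflage"]

def replication_score(genome, env):
    """The optimizer's sole criterion: how well this genome perpetuates itself."""
    benefit = sum(env[t] * genome[t] for t in TRAITS)
    cost = 0.5 * genome["intelligence"]  # big brains are resource-intensive
    return benefit - cost

def hill_climb(env, steps=20_000):
    """Blind local search: keep a mutation iff it does not lower the score."""
    genome = {t: random.random() for t in TRAITS}
    for _ in range(steps):
        mutant = {t: min(1.0, max(0.0, v + random.gauss(0, 0.05)))
                  for t, v in genome.items()}
        if replication_score(mutant, env) >= replication_score(genome, env):
            genome = mutant  # selection sees only the score, never the trait names
    return {t: round(v, 2) for t, v in genome.items()}

# A narrow, stable niche underpays for intelligence; a highly varied one pays for it.
print(hill_climb({"intelligence": 0.3, "fecundity": 1.0, "camouflage": 0.8}))
print(hill_climb({"intelligence": 1.5, "fecundity": 0.4, "camouflage": 0.2}))
```

Swap the environment weights and the same algorithm lands on an ‘unintelligent’ optimum, which is the sense in which intelligence is extraneous to the criterion without being outside the solution space.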
You seem to be confusing two different notions of intelligence. One is the either/or “is it intelligent” and the other is how intelligent it is.
Not at all. Not even remotely. I’m stating that any agent—a construct (biological or synthetic) that can actively select amongst variable results; a thing that makes choices—inherently values intelligence; the capacity to ‘make good choices’. A more-intelligent agent is a ‘superior’ agent, instrumentally speaking.
Any time there is an intelligent, designed agent, the presence of said intelligence is a hard indicator of the agent valuing intelligence. Designed intelligences are “designed to be intelligent”. (This is tautological.) This means that whoever designed that intelligence spent effort and time on making it intelligent. That, in turn, means that its designer valued that intelligence. Thus, whatever goalset the designer imparted into the designed intelligence is a goalset that requires intelligence to be effected.
Which in turn means that intelligence is definitionally instrumentally useful to a designed intelligence.
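The claim just above, that a more-intelligent agent is instrumentally ‘superior’ whatever its goals, can be sketched in a few lines. This is my own toy illustration, not anything from the thread, with ‘intelligence’ stood in for by nothing more than how many candidate actions the agent can evaluate before choosing; the point is only that, for any goal in the goal slot, more evaluative capacity gives weakly better expected outcomes.

```python
import random

def act(utility, capacity, n_options=1000):
    """Choose the best of however many candidate actions the agent can evaluate."""
    options = [random.random() for _ in range(n_options)]
    considered = random.sample(options, k=min(capacity, n_options))
    return max(considered, key=utility)

def expected_utility(utility, capacity, trials=2000):
    return sum(utility(act(utility, capacity)) for _ in range(trials)) / trials

# Two unrelated goals; in both, expected achievement rises with evaluative capacity.
goals = {"maximize x": lambda x: x,
         "land near 0.5": lambda x: -abs(x - 0.5)}
for name, goal in goals.items():
    print(name, [round(expected_utility(goal, c), 3) for c in (1, 10, 100)])
```

Whatever is plugged into the goal slot, the extra capacity never hurts, which is the instrumental sense in which a designed intelligence ‘values’ the intelligence its designer built in.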
It is not possible to intentionally design a trait into a system without that trait being valuable to the system.
I’m not sure what you mean here.
What gives you trouble with it? Try rephrasing it and I’ll ‘correct’ said rephrasing towards my intended meaning as best I can, perhaps? I want to be understood. :)
I ask this question because I feel that it is relevant. Why is “inheritor” non-Friendly AGI “bad”?
It isn’t automatically bad. I just don’t want it. This is why I said your answer is legitimately “No”.
Fair enough.
Honest question: If our flesh were dissolved overnight and we instead were instantiated inside a simulated environment—without our permission—would you consider this a Friendly outcome?
Potentially, depending on the simulated environment.
Assume Earth-like or video-game-like (in the latter case including ‘respawns’).
Video game upgrades! Sounds good.
I believe you mean, “I’m here to kick ass and chew bubblegum… and I’m all outta gum!”