A Truth-Guardian is someone who ‘guards’ an Idea by zapping (in its myriad forms) rather than through rational argument.
Are you willing to tell me that you’ve never met a Singularitarian who has attacked an opponent’s authority (zap), or denigrated another’s work (zap), or sought to work on their Idea’s strong points to the neglect of its weak points (subtle zap), or acted in an elitist manner in order to confer perceived authority on themself (smug zap), or presented new data in such a way as to strengthen their previous predictions (super Bottom Line zap!)? Have you, Eliezer, never ever guarded your view of the future rather than argued dispassionately, even against a plainly wrong argument?
If you say no, I wholly withdraw my (well-meant) comment. Caledonian can be my second example. :p
The moment anyone makes a biased argument because of their attachment to an Idea, they become a Guardian. Singularitarians are people; they take criticism and defend their beliefs with the requisite passion. Apologies if it seemed as though I was singling anybody out for specific criticism of bias; that was not my intention. For the record, I’m a firm believer. :)
How does this idea relate to “Well tended gardens die by pacifism?”
Is that an actual question, or an oblique way of suggesting that the thesis of “Well tended gardens die by pacifism” is promoting a form of Truth-Guardianism, and therefore contradicts the thesis of “Guardians of the Truth”, and therefore perhaps both theses are flawed?
If it’s the latter: yes, yes, very clever.
Assuming charitably that it’s the former, my two cents about how they relate:
WTGDBP predicts that where local community norms N1 differ from global norms N2, there’s a tendency for N2 to displace N1 whenever the local community interacts with the larger world, and suggests that if I consider N1 superior to N2, I have a moral responsibility to counteract this tendency, which sometimes requires violating N2.
GOTT suggests that norms of punishing attempts to challenge or question certain ideas, no matter how novel, well-formed, or carefully reasoned those challenges or questions are, harm the communities that embrace them, even though such norms keep those communities well protected from outside norms.
Combining the two suggests that when I choose to defend my local community norms against corruption by outside norms, I also have a moral responsibility to be right about the superiority of my community’s norms.
There is some difference between group ideas and group norms, although sometimes these two overlap. There is also a difference between challenging group ideas, and breaking group norms.
An example of a group idea: “It is reasonable to give a million dollars to an organization that will freeze your head when you die, because someone might scan your brain and make a machine simulation of you, and it will really be you.”
An example of a group norm: “We should refrain from political examples, personal attacks, irrational arguments, etc.”
An example of challenging a group idea: “I think the machine simulation is not really you. Even if it is ‘alive’, it is a new life form; and your old self is dead.”
An example of breaking group norms: “This is so stupid!!! I guess you have also voted for [political party]!”
Sometimes these two things can be confused. For example, it can be a group norm to never challenge group ideas (or to limit challenges to them to ways that have no chance of succeeding). This should not happen. On the other hand, it is also very common to blatantly break group norms and then complain about the group’s intolerance of challenges to its ideas; this is a typical pattern for many internet trolls, and the community should be able to recognize it.
An example: “Cryonics does not work, f*** you!” “Downvoted for swearing.” “You just downvote me because I disagree with you, f*** you!”
Also, sometimes a group’s norms are as problematic as its ideas; e.g., the KKK or the Nazis.
But usually the norms are not too bad; it’s just the ideas that are ridiculous (moderate religion in a nutshell). So it definitely makes sense to make the distinction for practical purposes.
Yes, agreed with all of this. Though as you suggest, the two can overlap. “Give a million dollars to an organization that will freeze your head when you die” can become a group norm, and “refrain from political examples, personal attacks, irrational arguments, etc.” can be a group idea. And as you say, it is common for one to be confused with the other, sometimes deliberately for rhetorical effect.
or an oblique way of suggesting that the thesis of “Well tended gardens die by pacifism” is promoting a form of Truth-Guardianism, and therefore contradicts the thesis of “Guardians of the Truth”, and therefore perhaps both theses are flawed?
Yes.
Apparent contradictions are often interesting areas of inquiry. Since I had to join my girlfriend for Phở, I only had time to post the one sentence.
It doesn’t mean that both theses are flawed. They could be opposing forces. The apparent contradiction might indicate a point where optimization is challenging. This could explain why groups seem doomed to fall to one pathology or another. There’s probably positive feedback in either direction, making groups dynamically unstable along this “axis”, whatever it might be. Maybe this is explained by our group-cohesion mechanisms having been designed to help us survive when the next tribe over decides to attack, which would also explain why things too easily devolve into might-makes-right?
Combining the two suggests that when I choose to defend my local community norms against corruption by outside norms, I also have a moral responsibility to be right about the superiority of my community’s norms.
The last phrase makes me cautious. I think one has a moral responsibility to respect the truth by seeking the truth. If we look at ideologies, how well do they deal with the notion of superiority? How many past notions of superiority seem barbaric? Is there a way of transcending or sidestepping this notion of superiority altogether?
Agreed that apparent contradictions are often interesting areas of inquiry.
The only ways I know of to sidestep having to decide which norms best align with my values are to adopt values such that either no community’s norms are superior to any others’, or such that whatever norms happen to emerge victorious from the interaction of social groups are superior to all the norms they displace. Neither of those tempts me at all, though I know people who endorse both.
If I reject both of those options, I’m left with the possibility that two communities C1 and C2 might exist such that C1’s norms are superior to C2’s, but the interaction of C1 and C2 results in C1’s norms being displaced by C2’s.
I don’t see a fourth option. Do you?
For example… you say I have a moral responsibility to seek truth, which suggests that if I’m in a community whose values oppose truthseeking in certain areas, I have a moral responsibility to violate my community’s norms. No?
This has interesting parallels to the Friendly AI problem. For example, one could posit that material wealth might somehow be a suitable arbiter, but I can imagine plenty of situations where C2 displaces C1 (corporate lobbying?), followed by global ecological catastrophes. Here, dollars take the place of smiley faces strewn across the solar system. Maybe the problem of a sustainably benevolent truth-seeking group is somehow the same problem as FAI on some level?
Yes! The problem of Friendly Corporate Behavior is an urgent and unsolved one. (Indeed, corporations have many of the attributes of artificial intelligences, though of course not all.)
The sustainably benevolent moral group is not Friendly AI; it is Friendly NI (natural intelligence). The two problems are probably closely related, but I can see a few important differences: NIs had to evolve, so they’re going to start out optimized for reproduction. AIs are designed, so they’re optimized for whatever you optimize them for.
My prediction: The ones optimized for reproduction are the ones that will be around in the long term.
Not necessarily, because there’s no law saying that AIs have to die. This changes the evolutionary calculus significantly; you don’t need to reproduce if you can just keep existing and expand your power over the cosmos.
But you’re right, insofar as AIs that rapidly self-destruct and never reproduce are not going to stick around long. (I think this is actually a tautology, but it’s a tautology with the character of a mathematical theorem—definitely true, but not obvious or trivial.)
It’s also worth considering that there are different constraints between NIs and AIs though. NIs have to change gradually, piece by piece, gene by gene. AIs can be radically overhauled in a single generation. This gives them access to places on the fitness landscape that we could never reach—even places that are in fact evolutionarily stable once you get there.
Not necessarily, because there’s no law saying that AIs have to die. This changes the evolutionary calculus significantly; you don’t need to reproduce if you can just keep existing and expand your power over the cosmos.
As wedrifid pointed out, that depends on what one can do about the lightspeed limit. And thermodynamics. I don’t think not dying of old age changes evolution that much. Humans are prone to geriatric diseases because evolution can’t do much for us past the reproductive years. Beings without a lifespan won’t face that.
I highly doubt that no AI will ever destroy another, though.
It’s also worth considering that there are different constraints between NIs and AIs though. NIs have to change gradually, piece by piece, gene by gene. AIs can be radically overhauled in a single generation. This gives them access to places on the fitness landscape that we could never reach—even places that are in fact evolutionarily stable once you get there.
That just means that they’ll evolve without the constraints of genetics, much as designs and memes do.
I think it’s a mistake to treat superhuman AI as magic. In some contexts it will seem like magic, but not in all. Human habitations viewed from 10,000 meters look like growths of lichen. In some contexts, some dogs are “smarter” than some people. Human intelligence gives us a tremendous advantage over all other life on Earth, but it is not magic. Superhuman intelligence is not magic. It’s just intelligence.
you don’t need to reproduce if you can just keep existing and expand your power over the cosmos.
Apart from the practical lightspeed limitations. You do need to reproduce or in some other way split yourself into space-separated parts if you wish to expand your power over a sufficient distance.
One of our mind children might read this someday and think, “Distance? What a quaint idea!”
Solving Friendliness involves capturing desirable ethical guidelines in a robust and sustainable way, so I’d expect the relationship between Friendliness and sustainably benevolent truth-seeking to depend a lot on the relationship between ethics and truth-seeking. I’d agree that they are thematically related, but very much non-identical.