Re. relationships: The only people I’ve heard use “polyamorous” are referring to committed, marriage-like relationships involving more than two adults. There ought to be a category for those of us who don’t want exclusivity with any number.
I’ve left most of the probability questions blank, because I don’t think it is meaningfully possible to assign numbers to events I have little or no quantitative information about. For instance, I’ll try P(Aliens) when we’ve looked at several thousand planets closely enough to be reasonably sure of answers about them.
In addition, I don’t think some of the questions can have meaningful answers. For example, the “Many Worlds” interpretation of quantum mechanics, if true, would have no testable (falsifiable) effect on the observable universe, and therefore I consider the question to be objectively meaningless. The same goes for P(Simulation), and probably P(God).
P(religion) also suffers from vagueness: what conditions would satisfy it? Not only are some religions vaguely defined, but there are many belief systems that are arguably religions or not religions. Buddhism? Communism? Atheism?
The singularity is vague, too. (And as I usually hear it described, I would see it as a catastrophe if it happened. The SF story “With Folded Hands” explains why.)
Extra credit items:
Great Stagnation—I believe that the rich world’s economy IS in a great stagnation that has lasted for most of a century, but NOT for the reasons Cowen and Thiel suggest. The stagnation is because of “progressive” politics, especially both the welfare state and overregulation/nanny-statism, which destroy most people’s opportunities to innovate and profit by it. This is not a trivial matter, but a problem quite comparable to those listed in the “catastrophe” section, and one which may very well prevent a solution to a real catastrophe if we become headed for one. (Both parties’ constant practice of campaigning-by-inventing-a-new-phony-emergency-every-month makes the problem worse, too: most rational people now dismiss any cry of alarm as the boy who cried wolf. Certainly the environmental movement, including its best-known “scientists”, has discredited itself this way.) This is why the struggle for liberty is so critical.
Re. relationships: The only people I’ve heard use “polyamorous” are referring to committed, marriage-like relationships involving more than two adults. There ought to be a category for those of us who don’t want exclusivity with any number.
Huh. This is what I’ve usually heard referred to as “polyfidelity”. The poly social circles that I’m familiar with encompass also (among others) people who have both “marriage-like” and “dating-like” relationships, people who have multiple dating-like relationships and no marriage-like ones, and people who have more complicated arrangements.
P(religion) also suffers from vagueness: what conditions would satisfy it? Not only are some religions vaguely defined, but there are many belief systems that are arguably religions or not religions. Buddhism? Communism? Atheism?
The question is “What is the probability that any of humankind’s revealed religions is more or less correct?”
“Revealed religion”, to my interpretation, means “a religion whose teachings are presented as revelation from divine or supernatural entities”. (See Wikipedia, where “revealed religion” links to the article on religious revelation.)
This would not include Communism or atheism. Buddhism (as usual) is complicated, since there are sects of Buddhism that make what sure sound to me like claims of revelation, while others sound more evidence-based. For that matter, it might not include Scientology, which presents itself as scientific discovery by human genius, rather than divine revelation, at least at the lower levels.
I’ve left most of the probability questions blank, because I don’t think it is meaningfully possible to assign numbers to events I have little or no quantitative information about. For instance, I’ll try P(Aliens) when we’ve looked at several thousand planets closely enough to be reasonably sure of answers about them.
I left them blank myself because I haven’t developed the skill to do it, but the obvious other interpretation … are you saying it’s in-principle impossible to operate rationally under uncertainty?
In addition, I don’t think some of the questions can have meaningful answers. For example, the “Many Worlds” interpretation of quantum mechanics, if true, would have no testable (falsifiable) effect on the observable universe, and therefore I consider the question to be objectively meaningless. The same goes for P(Simulation), and probably P(God).
Do you usually consider statements you don’t anticipate being able to verify meaningless?
The obvious next question would be to ask if you’re OK with your family being tortured under the various circumstances this would suggest you would be.
The singularity is vague, too. (And as I usually hear it described, I would see it as a catastrophe if it happened. The SF story “With Folded Hands” explains why.)
I believe I’ve read that story. Asimov-style robots prevent humans from interacting with the environment because they might be harmed and that would violate the First Law, right?
Could you go into more detail regarding how, as you “usually hear it described”, it would be a “catastrophe if it happened”? I can imagine a few possibilities but I’d like to be clearer on the thoughts behind this before commenting.
The stagnation is because of “progressive” politics, especially both the welfare state and overregulation/nanny-statism, which destroy most people’s opportunities to innovate and profit by it.
Hmm. On the one hand, political stupidity does seem like a very serious problem that needs fixing and imposes massive opportunity costs on humanity. On the other hand, this sounds like a tribal battle-cry rather than a rational, non-mindkilled discussion.
Certainly the environmental movement, including its best-known “scientists”, has discredited itself this way.
I don’t know, I find most people don’t identify such a pattern and thus avoid a BWCW effect; while most people above a certain standard of rationality are able to take advantage of evidence, public-spirited debunkers and patterns to screen out most of the noise. Your mileage may vary, of course; I tend not to pay much attention to environmental issues except when they impinge on something I’m already interested in, so perhaps this is harder at a higher volume of traffic.
I’ve left most of the probability questions blank, because I don’t think it is meaningfully possible to assign numbers to events I have little or no quantitative information about. For instance, I’ll try P(Aliens) when we’ve looked at several thousand planets closely enough to be reasonably sure of answers about them.
I left them blank myself because I haven’t developed the skill to do it, but the obvious other interpretation … are you saying it’s in-principle impossible to operate rationally under uncertainty?
No, I just don’t think I can assign probability numbers to a guess. If forced to make a real-life decision based on such a question then I’ll guess.
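For concreteness, one textbook way to put a number on such a guess is Laplace’s rule of succession: assume a uniform prior over the unknown rate and update on each observation. A minimal sketch in Python, treating surveyed planets as independent trials (the function name and the survey counts are purely illustrative, not real data):

```python
def rule_of_succession(successes: int, trials: int) -> float:
    """Posterior mean of an unknown rate under a uniform Beta(1, 1) prior."""
    return (successes + 1) / (trials + 2)

# With no planets surveyed, the estimate is just the prior:
print(rule_of_succession(0, 0))     # 0.5
# After hypothetically surveying 3000 planets and finding no life,
# the estimate becomes small but stays nonzero:
print(rule_of_succession(0, 3000))  # ~0.00033
```

This is the sense in which one can “assign probability numbers to a guess”: the number is just bookkeeping for a prior plus whatever evidence exists, however little.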
In addition, I don’t think some of the questions can have meaningful answers. For example, the “Many Worlds” interpretation of quantum mechanics, if true, would have no testable (falsifiable) effect on the observable universe, and therefore I consider the question to be objectively meaningless. The same goes for P(Simulation), and probably P(God).
Do you usually consider statements you don’t anticipate being able to verify meaningless?
No, and I discussed that in another reply.
The obvious next question would be to ask if you’re OK with your family being tortured uner the various circumstances this would suggest you would be.
I’ve lost the context to understand this question.
The singularity is vague, too. (And as I usually hear it described, I would see it as a catastrophe if it happened. The SF story “With Folded Hands” explains why.)
I believe I’ve read that story. Asimov-style robots prevent humans from interacting with the environment because they might be harmed and that would violate the First Law, right?
Yes. Eventually most human activity is banned. Any research or exploration that might make it possible for a human to get out from under the bots’ rule is especially banned.
Could you go into more detail regarding how, as you “usually hear it described”, it would be a “catastrophe if it happened”? I can imagine a few possibilities but I’d like to be clearer on the thoughts behind this before commenting.
The usual version of this I hear is from people who’ve read Minsky and/or Moravec, and feel we should treat any entity that can pass some reasonable Turing test as legally and morally human. I disagree because I believe a self-aware entity can be simulated—maybe not perfectly, but to an arbitrarily high difficulty of disproving it—by a program that is not self-aware. And if such a standard were enacted, interest groups would use it to manufacture a large supply of these fakes and have them vote and/or fight for their side of political questions.
The stagnation is because of “progressive” politics, especially both the welfare state and overregulation/nanny-statism, which destroy most people’s opportunities to innovate and profit by it.
Hmm. On the one hand, political stupidity does seem like a very serious problem that needs fixing and imposes massive opportunity costs on humanity. On the other hand, this sounds like a tribal battle-cry rather than a rational, non-mindkilled discussion.
It is. At some point I have trouble justifying the one without invoking the other. Some things are just so obvious to me, and so senselessly not-believed by many, that I see no peaceful way out other than dismissing those people. How do you argue with someone who isn’t open to reason? You need the sales skill of a demagogue, which I haven’t got.
Certainly the environmental movement, including its best-known “scientists”, has discredited itself this way.
I don’t know, I find most people don’t identify such a pattern and thus avoid a BWCW effect;
What’s that?
while most people above a certain standard of rationality are able to take advantage of evidence, public-spirited debunkers and patterns to screen out most of the noise. Your mileage may vary, of course; I tend not to pay much attention to environmental issues except when they impinge on something I’m already interested in, so perhaps this is harder at a higher volume of traffic.
One of the ways in which the demagogues have taken control of politics is to multiply political entities and the various debates, hearings, and elections they hold until no non-demagogue can hope to influence more than a vanishingly small fraction of them. This is another very common, nasty tactic that ought to have a name, although “Think globally, act locally” seems to be the slogan driving it.
The obvious next question would be to ask if you’re OK with your family being tortured under the various circumstances this would suggest you would be.
I’ve lost the context to understand this question.
How would you react to the idea of people being tortured over the cosmological horizon, outside your past or future light-cone? Or transferred to another, undetectable universe and tortured?
I mean, it’s unverifiable, but strikes me as important and not at all meaningless. (But apparently I had misinterpreted you in any case.)
The usual version of this I hear is from people who’ve read Minsky and/or Moravec, and feel we should treat any entity that can pass some reasonable Turing test as legally and morally human. I disagree because I believe a self-aware entity can be simulated—maybe not perfectly, but to an arbitrarily high difficulty of disproving it—by a program that is not self-aware. And if such a standard were enacted, interest groups would use it to manufacture a large supply of these fakes and have them vote and/or fight for their side of political questions.
Oh. That’s an important distinction, yeah, but standard Singularity arguments suggest that by the time that would come up humans would no longer be making that decision anyway.
Um, if something is smart enough to solve every problem a human can, how relevant is the distinction? I mean, sure, it might (say) be lying about its preferences, but … surely it’ll have exactly the same impact on society, regardless?
On the other hand, this sounds like a tribal battle-cry rather than a rational, non-mindkilled discussion.
It is. At some point I have trouble justifying the one without invoking the other. Some things are just so obvious to me, and so senselessly not-believed by many, that I see no peaceful way out other than dismissing those people. How do you argue with someone who isn’t open to reason?
ahem … I’m … actually from the other tribe. Pretty heavily in favor of a Nanny Welfare State, although I’m not sure I’d go quite so far as to say it’s “obvious” and anyone who disagrees must be “senseless … not open to reason”.
Care to trade chains of logic? A welfare state, in particular, seems kind of really important from here.
I think the trouble with these sorts of battle-cries is that they lead to, well, assuming the other side must be evil strawmen. It’s a problem. (That’s why political discussion is unofficially banned here, unless you make an effort to be super neutral and rational about it.)
What’s that?
Ahh … “Boy Who Cried Wolf”. Sorry, that was way too opaque, I could barely parse it myself. Not sure why I thought that was a good idea to abbreviate.
The obvious next question would be to ask if you’re OK with your family being tortured under the various circumstances this would suggest you would be.
I’ve lost the context to understand this question.
How would you react to the idea of people being tortured over the cosmological horizon, outside your past or future light-cone? Or transferred to another, undetectable universe and tortured?
I mean, it’s unverifiable, but strikes me as important and not at all meaningless. (But apparently I had misinterpreted you in any case.)
I don’t like the idea of it happening. But if it does, I can certainly disclaim responsibility since it is by definition impossible that I can affect that situation if it exists.
The usual version of this I hear is from people who’ve read Minsky and/or Moravec, and feel we should treat any entity that can pass some reasonable Turing test as legally and morally human. I disagree because I believe a self-aware entity can be simulated—maybe not perfectly, but to an arbitrarily high difficulty of disproving it—by a program that is not self-aware. And if such a standard were enacted, interest groups would use it to manufacture a large supply of these fakes and have them vote and/or fight for their side of political questions.
Oh. That’s an important distinction, yeah, but standard Singularity arguments suggest that by the time that would come up humans would no longer be making that decision anyway.
Um, if something is smart enough to solve every problem a human can, how relevant is the distinction? I mean, sure, it might (say) be lying about its preferences, but … surely it’ll have exactly the same impact on society, regardless?
That appears to me to be an insoluble problem. Once intelligence (not a particular person but the quality itself) can be impersonated in quantity, how can any person or group know they are behaving fairly? They can’t. This is another reason I’d prefer that the capability continue not to exist.
On the other hand, this sounds like a tribal battle-cry rather than a rational, non-mindkilled discussion.
It is. At some point I have trouble justifying the one without invoking the other. Some things are just so obvious to me, and so senselessly not-believed by many, that I see no peaceful way out other than dismissing those people. How do you argue with someone who isn’t open to reason?
ahem … I’m … actually from the other tribe. Pretty heavily in favor of a Nanny Welfare State, although I’m not sure I’d go quite so far as to say it’s “obvious” and anyone who disagrees must be “senseless … not open to reason”.
Care to trade chains of logic? A welfare state, in particular, seems kind of really important from here.
I could argue about the likely consequences, but the logic chain behind my arguments is quite short and begins with postulates about individual rights that you probably don’t accept.
When it comes down to it, ethics are entirely a matter of taste (though I would assert that they’re a unique exception to the old saw “there’s no accounting for taste” because a person’s code of ethics determines whether he’s trustworthy and in what ways).
I think the trouble with these sorts of battle-cries is that they lead to, well, assuming the other side must be evil strawmen. It’s a problem. (That’s why political discussion is unofficially banned here, unless you make an effort to be super neutral and rational about it.)
One can’t really have a moral code (or, I believe, self-awareness!) without using it to judge everyone and everything one sees or thinks of. This more or less demands one take the position that those who disagree are at least misguided, if not evil.
For the same reason, I never expect judges, journalists, or historians to be “unbiased” because I don’t believe true “unbiasedness” is possible even in principle.
I don’t like the idea of it happening. But if it does, I can certainly disclaim responsibility since it is by definition impossible that I can affect that situation if it exists.
Actually, with our expanding universe you can get starships far enough away that the light from them will never reach you.
But I see we agree on this.
That appears to me to be an insoluble problem. Once intelligence (not a particular person but the quality itself) can be impersonated in quantity, how can any person or group know they are behaving fairly? They can’t. This is another reason I’d prefer that the capability continue not to exist.
But is it possible to impersonate intelligence? Isn’t anything that can “fake” problem-solving, goal-seeking behaviour sufficiently well intelligent (that is, sapient, but potentially not sentient, which could be a problem)?
I could argue about the likely consequences, but the logic chain behind my arguments is quite short and begins with postulates about individual rights that you probably don’t accept.
When it comes down to it, ethics are entirely a matter of taste (though I would assert that they’re a unique exception to the old saw “there’s no accounting for taste” because a person’s code of ethics determines whether he’s trustworthy and in what ways).
I strongly disagree with this claim, actually. You can definitely persuade people out of their current ethical model. Not truly terminal goals, perhaps, but you can easily obfuscate even those.
What makes you think that “individual rights” are a thing you should care about? If you had to persuade a (human, reasonably rational) judge that they’re the correct moral theory, what evidence would you point to? You might change my mind.
One can’t really have a moral code (or, I believe, self-awareness!) without using it to judge everyone and everything one sees or thinks of. This more or less demands one take the position that those who disagree are at least misguided, if not evil.
Oh, everyone is misguided. (Hence the name of the site.) But they generally aren’t actual evil strawmen.
a self-aware entity can be simulated—maybe not perfectly, but to an arbitrarily high difficulty of disproving it—by a program that is not self-aware. And if such a standard were enacted, interest groups would use it to manufacture a large supply of these fakes and have them vote and/or fight for their side of political questions.
So, there’s two pieces there, and I’m not sure how those pieces interact on your view.
Like, if we had a highly reliable test for true self-awareness, but it turned out that interest groups could manufacture large numbers of genuinely self-aware systems that would reliably vote and/or fight for their side of political questions, would that be better? Why?
Conversely, if we can’t reliably test for true self-awareness, but we don’t have a reliable way to manufacture apparently-self-aware systems that vote or fight a particular way, would that be better? Why?
I would consider the genuinely self-aware systems to be real people. I suppose it’s a matter of ethics (and therefore taste) whether or not that’s important to you.
I don’t understand how that answers my question, or whether it was intended to.
I mean, OK, let’s say the genuinely self-aware systems are real people. Then we can rephrase my question as:
Like, if we had a highly reliable test for real personhood, but it turned out that interest groups could manufacture large numbers of real people that would reliably vote and/or fight for their side of political questions, would that be better? Why?
Conversely, if we can’t reliably test for real personhood, but we don’t have a reliable way to manufacture apparently real people that vote or fight a particular way, would that be better? Why?
But I still don’t know your answer.
I also disagree that matters of ethics are therefore matters of taste.
We have votes because we want to maximize utility for the voters. Allowing easily manufactured people to vote creates incentives to manufacture people.
So the answer to this depends on your belief about utilitarianism. If you aggregate utility in such a way that adding more people increases utility in an unbounded way, then you should do whatever you can to encourage the creation of more people regardless of whether their votes cause harm to existing people, so it is good to create incentives for their creation and you should let them vote. (You also get the Repugnant Conclusion.) If you aggregate utility in some way that produces diminishing returns and avoids the Repugnant Conclusion, then it is possible that at some point creating more new people is a net negative. If so, you’d be better off precommitting to not let them vote because not letting them vote prevents them from being created, increasing utility.
Note: Most people, insofar as they can be described as utilitarian at all, will fall into the second category (with the precommitment being enforced by their inherent inability to care much for people who they cannot see as individuals).
This also works when you substitute “allowing unlimited immigration” for “creating unlimited amounts of people”. Your choice of how to aggregate utility also affects whether it is good to trade off utility among already existing people just like it affects whether it is good to create new people.
Yes, agreed with all this. And yes, like most people, I don’t have a coherent understanding of how to aggregate intersubjective utility but I certainly don’t aggregate it in ways that cause me to embrace the Repugnant Conclusion. (By contrast, on consideration I do seem to embrace Utility Monsters, distasteful as the prospect feels on its face.)
Your choice of how to aggregate utility also affects whether it is good to trade off utility among already existing people just like it affects whether it is good to create new people.
Well, not “just like.” That is, I might have a mechanism for aggregating utility that treats N existing people in other countries differently from N people who don’t exist, and makes different tradeoffs for the two cases. But, yes, those are both examples of tradeoffs which a utility-aggregating mechanism affects.
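To make the contrast between the two aggregation rules discussed above concrete, here is a toy numerical sketch (purely illustrative; the square-root rule is one arbitrary choice of diminishing-returns aggregation, not something anyone in this thread proposes):

```python
import math

def total_utility(world):
    """Unbounded sum: any barely-positive extra life raises the total,
    which is what drives the Repugnant Conclusion."""
    return sum(world)

def diminishing_utility(world):
    """A concave aggregation: scale the sum by 1/sqrt(population), so
    adding a person with utility below about half the mean is net negative."""
    return sum(world) / math.sqrt(len(world))

thriving = [100.0] * 10   # small population, high welfare
crowded = [0.1] * 20_000  # huge population, lives barely worth living

print(total_utility(thriving), total_utility(crowded))
# -> 1000.0 and about 2000: the sum prefers the "crowded" world.
print(diminishing_utility(thriving), diminishing_utility(crowded))
# -> about 316 and about 14: the concave rule prefers the "thriving" world.
```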
Great Stagnation—I believe that the rich world’s economy IS in a great stagnation that has lasted for most of a century, but NOT for the reasons Cowen and Thiel suggest. The stagnation is because of “progressive” politics, especially both the welfare state and overregulation/nanny-statism, which destroy most people’s opportunities to innovate and profit by it.
I get the impression that this is actually a core part of Thiel’s argument. Consider this, for example.
Did that.
Re. relationships: The only people I’ve heard use “polyamorous” are referring to committed, marriage-like relationships involving more than two adults. There ought to be a category for those of us who don’t want exclusivity with any number.
My circle uses polyamorous to include wholly non-exclusive relationships; to indicate exclusivity we’d say “polyfidelity”.
I don’t know, I find most people don’t identify such a pattern and thus avoid a BWCW effect; while most people above a certain standard of rationality are able to take advantage of evidence, public-spirited debunkers and patterns to screen out most of the noise. Your mileage may vary, of course; I tend not to pay much attention to environmental issues except when they impinge on something I’m already interested in, so perhaps this is harder at a higher volume of traffic.
Upvoted entirely for this phrase.
Both parties’ constant practice of campaigning-by-inventing-a-new-phony-emergency-every-month makes the problem worse, too: most rational people now dismiss any cry of alarm as the boy who cried wolf.
This phenomenon is very real and should have a catchy phrase to describe it.
In my workplace we call it “crisis management,” fully aware of the ambiguity of that phrase.
The observation that a scared populace is easy to control is very, very old.
Similar to the “shock doctrine”, but that is an explicitly leftist idea, so it probably doesn’t work to name the generalized phenomenon.
The singularity is vague, too. (And as I usually hear it described, I would see it as a catastrophe if it happened. The SF story “With Folded Hands” explains why.)
Briefly, how do you usually see the singularity described?