I would consider the genuinely self-aware systems to be real people. I suppose it’s a matter of ethics (and therefore taste) whether or not that’s important to you.
jdgalt
The obvious next question would be to ask if you’re OK with your family being tortured under the various circumstances this would suggest you would be.
I’ve lost the context to understand this question.
How would you react to the idea of people being tortured over the cosmological horizon, outside your past or future light-cone? Or transferred to another, undetectable universe and tortured?
I mean, it’s unverifiable, but strikes me as important and not at all meaningless. (But apparently I had misinterpreted you in any case.)
I don’t like the idea of it happening. But if it does, I can certainly disclaim responsibility since it is by definition impossible that I can affect that situation if it exists.
The usual version of this I hear is from people who’ve read Minsky and/or Moravec, and feel we should treat any entity that can pass some reasonable Turing test as legally and morally human. I disagree because I believe a self-aware entity can be simulated—maybe not perfectly, but to an arbitrarily high difficulty of disproving it—by a program that is not self-aware. And if such a standard were enacted, interest groups would use it to manufacture a large supply of these fakes and have them vote and/or fight for their side of political questions.
Oh. That’s an important distinction, yeah, but standard Singularity arguments suggest that by the time that would come up humans would no longer be making that decision anyway.
Um, if something is smart enough to solve every problem a human can, how relevant is the distinction? I mean, sure, it might (say) be lying about its preferences, but … surely it’ll have exactly the same impact on society, regardless?
That appears to me to be an insoluble problem. Once intelligence (not a particular person but the quality itself) can be impersonated in quantity, how can any person or group know he/they are behaving fairly? They can’t. This is another reason I’d prefer that the capability continue not to exist.
On the other hand, this sounds like a tribal battle-cry rather than a rational, non-mindkilled discussion.
It is. At some point I have trouble justifying the one without invoking the other. Some things are just so obvious to me, and so senselessly not-believed by many, that I see no peaceful way out other than dismissing those people. How do you argue with someone who isn’t open to reason?
ahem … I’m … actually from the other tribe. Pretty heavily in favor of a Nanny Welfare State, though I’m not sure I’d go quite so far as to say it’s “obvious” and anyone who disagrees must be “senseless … not open to reason”.
Care to trade chains of logic? A welfare state, in particular, seems kind of really important from here.
I could argue about the likely consequences, but the logic chain behind my arguments is quite short and begins with postulates about individual rights that you probably don’t accept.
When it comes down to it, ethics are entirely a matter of taste (though I would assert that they’re a unique exception to the old saw “there’s no accounting for taste” because a person’s code of ethics determines whether he’s trustworthy and in what ways).
I think the trouble with these sort of battle-cries is that they lead to, well, assuming the other side must be evil strawmen. It’s a problem. (That’s why political discussion is unofficially banned here, unless you make an effort to be super neutral and rational about it.)
One can’t really have a moral code (or, I believe, self-awareness!) without using it to judge everyone and everything one sees or thinks of. This more or less demands one take the position that those who disagree are at least misguided, if not evil.
For the same reason, I never expect judges, journalists, or historians to be “unbiased” because I don’t believe true “unbiasedness” is possible even in principle.
I’ve left most of the probability questions blank, because I don’t think it is meaningfully possible to assign numbers to events I have little or no quantitative information about. For instance, I’ll try P(Aliens) when we’ve looked at several thousand planets closely enough to be reasonably sure of answers about them.
I left them blank myself because I haven’t developed the skill to do it, but the obvious other interpretation … are you saying it’s in-principle impossible to operate rationally under uncertainty?
No, I just don’t think I can assign probability numbers to a guess. If forced to make a real-life decision based on such a question then I’ll guess.
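For what it’s worth, even very sparse evidence can be turned into a number rather than a blank. A minimal sketch using Laplace’s rule of succession (the planet count here is made up purely for illustration, not a real survey figure):

```python
# Laplace's rule of succession: after observing s successes in n trials,
# estimate the probability of success on the next trial as (s + 1) / (n + 2).
# This gives a well-defined (if crude) number even when s = 0.
def rule_of_succession(successes: int, trials: int) -> float:
    return (successes + 1) / (trials + 2)

# Hypothetical: 0 planets with detected life out of 3000 surveyed closely.
p_life = rule_of_succession(0, 3000)
print(f"P(life on next planet) ~ {p_life:.6f}")  # small but nonzero

# With no observations at all, the rule just returns the uniform prior 1/2.
print(rule_of_succession(0, 0))
```

This doesn’t settle whether such numbers are *meaningful*, of course; it only shows that refusing to assign one isn’t forced on you by lack of data.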
In addition, I don’t think some of the questions can have meaningful answers. For example, the “Many Worlds” interpretation of quantum mechanics, if true, would have no testable (falsifiable) effect on the observable universe, and therefore I consider the question to be objectively meaningless. The same goes for P(Simulation), and probably P(God).
Do you usually consider statements you don’t anticipate being able to verify meaningless?
No, and I discussed that in another reply.
The singularity is vague, too. (And as I usually hear it described, I would see it as a catastrophe if it happened. The SF story “With Folded Hands” explains why.)
I believe I’ve read that story. Asimov-style robots prevent humans from interacting with the environment because they might be harmed and that would violate the First Law, right?
Yes. Eventually most human activity is banned. Any research or exploration that might make it possible for a human to get out from under the bots’ rule is especially banned.
Could you go into more detail regarding how as you “usually hear it described” it would be a “catastrophe if it happened”? I can imagine a few possibilities but I’d like to be clearer on the thoughts behind this before commenting.
The usual version of this I hear is from people who’ve read Minsky and/or Moravec, and feel we should treat any entity that can pass some reasonable Turing test as legally and morally human. I disagree because I believe a self-aware entity can be simulated—maybe not perfectly, but to an arbitrarily high difficulty of disproving it—by a program that is not self-aware. And if such a standard were enacted, interest groups would use it to manufacture a large supply of these fakes and have them vote and/or fight for their side of political questions.
The stagnation is because of “progressive” politics, especially both the welfare state and overregulation/nanny-statism, which destroy most people’s opportunities to innovate and profit by it.
Hmm. On the one hand, political stupidity does seem like a very serious problem that needs fixing and imposes massive opportunity costs on humanity. On the other hand, this sounds like a tribal battle-cry rather than a rational, non-mindkilled discussion.
It is. At some point I have trouble justifying the one without invoking the other. Some things are just so obvious to me, and so senselessly not-believed by many, that I see no peaceful way out other than dismissing those people. How do you argue with someone who isn’t open to reason? You need the sales skill of a demagogue, which I haven’t got.
Certainly the environmental movement, including its best-known “scientists”, has discredited itself this way.
I don’t know, I find most people don’t identify such a pattern and thus avoid a BWCW effect;
What’s that?
while most people above a certain standard of rationality are able to take advantage of evidence, public-spirited debunkers, and patterns to screen out most of the noise. Your mileage may vary, of course; I tend not to pay much attention to environmental issues except when they impinge on something I’m already interested in, so perhaps this is harder at a higher volume of traffic.
One of the ways in which the demagogues have taken control of politics is to multiply political entities and the various debates, hearings, and elections they hold until no non-demagogue can hope to influence more than a vanishingly small fraction of them. This is another very common, nasty tactic that ought to have a name, although “Think globally, act locally” seems to be the slogan driving it.
I don’t buy it. We have many existing laws and spending programs that make us worse off than not having them (or, equivalently, leaving it up to the market rather than the taxpayers to provide them). The free market is known to work well enough, and broadly enough, that demanding “What would you replace it with?” when someone proposes ending one of those laws or programs is uncalled-for. (If anyone really does doubt that the market will do better, the thing to do is to try it and see, not to demand proof that can’t exist because the change in question hasn’t been tried recently.) After a few repetitions, I simply lump the asker in with the kind of troll whose reply to every comment is “Cite?” and add him to my spam filter.
I think I see what you are trying to say, but I don’t think the Boltzmann Cake Theory is comparable to Many Worlds.
In the Boltzmann Cake case, it may be impossible to physically test the theory (though I don’t conclusively assume so—there could well be some very subtle effect on the Sun’s output that would facilitate such a test), but the question of fact it raises is still of objective fact.
But the truth or falsity of the Many Worlds Theory can only exist in a reference frame which spans the entire conceptual space in which the many worlds would have to coexist. And I don’t believe such a frame can exist. The very fabric of logic itself requires a space-time in which to exist; without one (or extending beyond one) its very postulates become open to doubt.
I’m not sure what I could post here that would back that up: it requires some economics knowledge. I can refer you to good economics blogs such as Marginal Revolution and Cafe Hayek, or to Mises’ Human Action.
It was MR that sent me here to LW in the first place.
I interpreted the two as completely disjoint. In other words, anti-agathics would be drugs or treatments that prevent or repair the symptoms of aging. Some of the same tech (cell repair nanites) could potentially do both jobs, but if you have to be frozen to use the tech then I wouldn’t call it anti-agathics. I guess I’m basing this usage on Blish’s “They Shall Have Stars”, which predicted it in the fifties.
If that’s true I wish I’d known it before choosing keys.
Largest is ambiguous. It could mean longest, or largest volume (with or without counting the volume enclosed, if we’re talking about the skull), or even heaviest.
Somehow this made me think of Larry Niven’s “Rotating Cylinders and the Possibility of Global Causality Violation”.
I wondered about that too, but for me “wiping out civilization” includes the possibility that some disaster leaves half of humanity alive, but smashes all our tech, knocking us back to the stone age. Intelligence forbid!
I see liberal vs. libertarian as a two dimensional thing as depicted here.
I wouldn’t mind the survey being twice as long if that allowed it to handle these can’t-answer situations, though I would expect it could stay the same length and just have a button or two to the right of each entry blank.
I’ve learned to use the mouse, and not the keyboard, when answering this sort of thing. YMMV.
I did the survey.
I felt that I had to leave blank some of the questions that ask for a probability number, because no answer that complies with the instructions would be right. For instance, I consider the “Many Worlds” hypothesis to be effectively meaningless, since while it does describe a set of plausible alleged facts, there is, as far as I know, no possible experiment that could falsify it. (“Supernatural” is also effectively meaningless, but for a different reason: vagueness. “Magic”, to me, describes only situations where Clarke’s Third Law applies. And so forth.)
I would like to participate in a deeper discussion of the idea of the Singularity, but don’t know if that’s welcome on LW. I want to attack the idea on several levels: (1) the definition of it, which may be too vague to be falsifiable; (2) the definition of intelligence—I don’t think we’re talking about a mere chess-playing computer, but it’s not clear to me whether Minsky’s criteria are sufficient; (3) if those first two points are somehow nailed down, then I’m not at all sure that a machine intelligence is desirable, and certainly I’d hesitate to connect one to hardware with enough abilities that the revolution in “I, Robot” becomes possible; and (4) if such a change does happen, I would prefer, and I think most people would insist, that it happen relatively slowly to give everyone then alive time to cope with the change, thus making it not really a singularity in the mathematical sense.
(I do like the transhumanist notion that humans should feel free to modify our own hardware individually, but I don’t see that as necessarily connected with a Singularity, and I don’t use the jargon of transhumanism for the same reason I avoid the jargon of anarchism when talking politics—it scares people needlessly.)
I left both MIRI questions blank because I don’t know who or what MIRI is.
Re. The Great Stagnation: This theory asserts that we are in an economic stall, if you will, because of a lack of innovation, and is set against the assertion of a “Great Divergence” in which rising income inequality and globalization are to blame for the stall. I didn’t answer because I consider both views to be baloney—we are in an economic stall because of unnecessary and crony-driven overregulation, much of it done in the name of the misguided green and “social justice” movements.
I didn’t do the finger length questions; not sure what “the bottom crease” is, or maybe I don’t have them. (Do you mean the crease at the base of the fingers, or one farther down on the hand?)
Re. feminism, I answered based on what I believe the current use of the term is, which is not at all like the definition on Wikipedia. Wikipedia calls it more or less pro-equality and I support that, but the current usage is more like “social justice” and that whole concept is complete hooey.
I wish there were an LW-related forum/location where politics are allowed (but easy for those not so inclined to ignore). I would use it, not so much for election-type stuff but for tossing out beliefs/theories on controversies (including some things relevant to a lot of the community, such as the Singularity) and seeing what bounces back.
I wouldn’t want to do it if I thought it would generate ill will, but there are certainly lots of folk here whose perspectives would be useful, and who, even if they disagree, would not immediately reach for the slogans of demonization that I hear so much in the outside world.
Apologies if even this post turns out to be so contentious that I shouldn’t have said it here.
The big problem with habitually “telling” is that you just about need to already be in an intimate relationship with the person you Tell before you do it more than once or twice. Otherwise you will be dismissed as either a bore or a wimp.
I do think that it is unfair, and a common failure mode, to use the guess culture and then get angry if the other person doesn’t read you correctly.
I think it is unfair to get angry at another person (or equivalently, to label him/her “rude”) for asking or saying anything when he/she doesn’t have good reason to know that the speech is unwelcome.
However, I don’t like the notion of these protocols as “cultures” because I don’t think anybody follows, or should follow, any one of them consistently all or nearly all the time.
Instead, I believe the reality is, and should be, that the meaning of a statement which can be parsed as a request depends on how reasonable it would be if the asker (1) expects compliance (perhaps to the point of getting upset if it doesn’t happen), (2) intends it merely as a request (“ask culture”), and/or (3) would only dare ask if he is fairly sure the hearer will not take offense. Obviously, as a request goes up the spectrum from something trivial (“Excuse me” as I push through a crowd to get out of a bus) to something the hearer is likely to find quite burdensome, both speakers and hearers tend to move up from interpretation (1) to (2) to (3). Familiarity with the other person also modifies this calculation, but that change can go in either direction depending on what you know about that person and about how he views you.
But where I part ways from the article writer is where he talks about “ask culture” as being superior to “guess culture”. About the only place I see anything resembling “guess culture” is where a request (or a statement being parsed as a request, maybe erroneously) is about a subject the hearer has issues about{1}, for instance, when trying to get laid. And as I see it, the mere fact that a typical woman hearing such a request interprets it as a demand (and/or “an example of the guess culture”) does not mean that the asker should be blamed for anything of the kind.
{1} I have phrased this to step on as few toes as possible, and thus am avoiding conclusions about what such “issues” may imply about anyone’s rationality. And for the same reason I should probably stop here.
I find this only a partly useful concept, since it is sometimes used to “discredit” arguments I consider quite valid, such as your last two examples. At most, if called on to defend either of those examples I would have to say more about why our usual condemnation of racism should apply to the entire category, and of why taking others’ property without their consent should be condemned even when done by a group that some people consider ought to be allowed special privileges.