I think that violates the spirit of the thought experiment. The point of the dust speck is that it is a fleeting, momentary discomfort with no consequences beyond itself. So if you multiply the choice by a billion, I would say that the billion dust specks should be distributed in a way that they don’t pile up and “completely shred one person”—e.g., each person gets one dust speck per week. This doesn’t help solve the dilemma, at least for me.
The “clearly” is not at all clear to me; could you explain?
Another dilemma where the same dichotomy applies is torture vs. dust specks. One might reason that torturing one person for 50 years is better than torturing 100 people infinitesimally less painfully for 50 years minus one second, and that this is better than torturing 10,000 people very slightly less painfully for 50 years minus two seconds… and at the end of this process accept the unintuitive conclusion that torturing someone for 50 years is better than having a huge number of people suffer a tiny pain for a second (differential thinking). Or one might refuse to accept the conclusion and decide that one of these apparently unproblematic differential comparisons is in fact wrong (integral thinking).
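To make the aggregation behind “differential thinking” concrete, here is a minimal Python sketch. All the numbers (a factor of 100 more people per step, a 1% pain reduction, ten steps) are made-up assumptions purely for illustration; the real chain would have to run much longer before the pain shrinks to dust-speck level.

```python
# Minimal sketch of the "differential" chain under additive aggregation.
# The step sizes (100x people, 1% less pain, one second shorter) are
# assumptions for illustration only.

def total_disutility(num_people, pain_per_second, seconds):
    """Aggregate disutility under a simple additive (linear) model."""
    return num_people * pain_per_second * seconds

fifty_years = 50 * 365 * 24 * 3600
people, pain, duration = 1, 1.0, fifty_years

for step in range(10):
    total = total_disutility(people, pain, duration)
    print(f"step {step}: {people:>20} people, pain {pain:.3f} -> total {total:.3e}")
    people *= 100       # 100 times more people...
    pain *= 0.99        # ...each suffering very slightly less...
    duration -= 1       # ...for one second less.

# Under additive aggregation each step's total is larger than the previous
# one, so by transitivity the final, very mild, very widespread harm comes
# out "worse" than the original torture -- the unintuitive conclusion.
# "Integral thinking" instead rejects one of the pairwise comparisons.
```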
The exposure of the general public to the concept of AI risk probably increased exponentially a few days ago, when Stephen Colbert mentioned Musk’s warnings and satirized them. (Unrelatedly but also of potential interest to some LWers, Terry Tao was the guest of the evening).
You could have a question about the scientific consensus on whether abortion can cause breast cancer (to catch biased pro-lifers). For bias on the other side, perhaps there is some human characteristic the fetus develops earlier than the average uninformed pro-choicer would guess? There seems to be no consensus on fetus pain, but maybe some uncontroversial-yet-surprising fact about nervous system development? I couldn’t find anything too surprising on a quick Wiki read, but maybe there is something.
Took the survey. As usual, immense props to Yvain for the dedication and work he puts into this.
If Alice was born in January and Bob was born in December, she will be 11 months older than him when they start going to school (and their classmates will be on average 5.5 months younger than her and 5.5 months older than him), which I hear can make a difference.
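As a quick check of those numbers, here is a back-of-the-envelope sketch (it assumes, just for illustration, uniformly distributed birth months and a January-to-December cutoff):

```python
# Back-of-the-envelope check, assuming birth months are uniformly distributed
# and the class cutoff runs from January to December (assumptions for
# illustration).
months = list(range(1, 13))          # 1 = January, ..., 12 = December
alice, bob = 1, 12                   # Alice born in January, Bob in December

print(f"Alice is {bob - alice} months older than Bob")          # 11 months

avg_classmate = sum(months) / len(months)                       # 6.5 (mid-June)
print(f"Classmates are on average {avg_classmate - alice} months younger than Alice")
print(f"Classmates are on average {bob - avg_classmate} months older than Bob")
```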
I think this way of sorting classes by calendar year of birth might also be shifted by six months between hemispheres (or perhaps vary with country in more capricious ways). IIRC, in Argentina my classes had people born from one July to the following June, not from one January to the following December.
Is the “Birth Month” bonus question just to sort people arbitrarily into groups to do statistics, or to find correlations between birth month and other traits? If the latter, since the causal mechanism is almost certainly seasonal weather, the question should ask directly for seasonal weather at birth to avoid Southern Hemisphere confounders.
The question about “Country” should clarify whether you are asking about nationality or residence.
Philosopher Richard Chappell gives a positive review of Superintelligence.
An interesting point made by Brandon in the comments (the following quote combines two different comments):
I think there’s a pretty straightforward argument for taking this kind of discussion seriously, on general grounds independent of one’s particular assessment of the possibility of AI itself. The issues discussed by Bostrom tend to be limit-case versions of issues that arise in forming institutions, especially ones that serve a wide range of purposes. Most of the things Bostrom discusses, on both the risk and the prevention side, have lower-level, less efficient analogues in institution-building.
A lot of the problems—perverse instantiation and principal agent problems, for instance—are standard issues in law and constitutional theory, and a lot of constitutional theory is concerned with addressing them. In checks and balances, for instance, we are ‘stunting’ and ‘tripwiring’ different institutions to make them work less efficiently in matters where we foresee serious risks. Enumeration of powers is an attempt to control a government by direct specification, and political theories going back to Plato that insist on the importance of education are using domesticity and indirect normativity. (Plato’s actually very interesting in this respect, because the whole point of Plato’s Republic is that the constitution of the city is deliberately set up to mirror the constitution of a human person, so in a sense Plato’s republic functions like a weird artificial intelligence.)
The major differences arise, I think, from two sources: (1) With almost all institutions, we are dealing with less-than-existential risks. If government fails, that’s bad, but it’s short of wiping out all of humanity. (2) The artificial character of an AI introduces some quirks—e.g., there are fewer complications in setting out to hardwire AIs with various things than trying to do it with human beings and institutions. But both of these mean that a lot of Bostrom’s work on this point can be seen as looking at the kind of problems and strategies involved in institutions, in a sort of pure case where usual limits don’t apply.
I had never thought of it from this point of view. Might it benefit AI theorists to learn political science?
0) CEV doesn’t exist even for a single individual, because human preferences are too unstable and contingent on random factors for the extrapolation process to give a definite answer.
The American Conservative is definitely socially conservative and, if not exactly fiscally liberal, at least much more sympathetic to economic redistribution than mainstream conservatism. But it is more composed of opinion pieces than of news reports, so I don’t know if it works for what you want.
As others suggested, Vox could be a good choice for a left-leaning news source. It has decent summaries of “everything you need to know about X” (where X = many current news stories).
But “Would you pay a penny to avoid scenario X?” in no way means “Would you sacrifice a utilon to avoid scenario X?” (the latter is meaningless, since utilons are abstractions subject to arbitrary rescaling). The meaningful rephrasing of the penny question in terms of utilons is “Ceteris paribus, would you get more utilons if X happens, or if you lose a penny and X doesn’t happen?” (which is just a roundabout way of asking which you prefer). And this is unobjectionable as a way of testing whether you really have a preference and getting a vague handle on how strong it is.
I would prefer if people avoided the word “utilon” altogether (and also “utility” outside of formal decision theory contexts) because there is an inevitable tendency to reify these terms and start using them in meaningless ways. But again, nothing special about money here.
Right; assuming (falsely of course) that humans have coherent preferences satisfying the VNM axioms, what can be measured in utilons is not an “amount of dollars” in the abstract, but an “amount of dollars obtained in such-and-such way in such-and-such situation”. But I wouldn’t call this “not being meaningfully comparable”. And there is nothing special about dollars here; any other object, event or experience is subject to the same.
Utilons do not exist. They are abstractions defined out of idealized, coherent preferences. To the extent that they are meaningful, though, their whole point is that anything one might have a preference over can be quantified in utilons—including dollars.
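A small sketch of the rescaling point above (the outcomes and numbers are made up purely for illustration): a utility function in the VNM sense is only defined up to a positive affine transformation, so “X costs one utilon” has no fixed meaning, while the preference comparison between “X happens” and “lose a penny and X doesn’t happen” survives any such rescaling.

```python
# Illustration (with made-up numbers): utilities are only defined up to a
# positive affine transformation u' = a*u + b (a > 0), so "X costs one
# utilon" is meaningless, but "is losing-a-penny-and-no-X preferred to
# X-happening?" gives the same answer under every admissible rescaling.

utility = {
    "X happens": -3.0,                           # assumed, illustrative
    "lose a penny, X doesn't happen": -0.1,      # assumed, illustrative
}

def prefers_penny_loss(u, a=1.0, b=0.0):
    """Compare the two outcomes under the rescaled utility a*u + b."""
    rescaled = {outcome: a * v + b for outcome, v in u.items()}
    return rescaled["lose a penny, X doesn't happen"] > rescaled["X happens"]

# The ordering is invariant under any positive affine rescaling:
for a, b in [(1.0, 0.0), (100.0, 7.0), (0.01, -5.0)]:
    print(a, b, prefers_penny_loss(utility, a, b))   # always True

# ...whereas the numeric utility assigned to "lose a penny" changes with
# (a, b), which is why "would you sacrifice a utilon?" has no fixed meaning.
```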
If the rotating pie, when not rotating, had the same radius as the other one, then when it rotates it has a slightly larger radius (and circumference) because of centrifugal forces. This effect completely dominates over any relativistic one.
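A rough order-of-magnitude sketch of why the mechanical effect wins (the material properties and dimensions are assumptions; a steel disc stands in for the pie): the centrifugal strain goes like ρω²r²/E, while the relativistic correction at the rim goes like v²/2c² = ω²r²/2c², so their ratio is about 2ρc²/E regardless of how fast the pie spins.

```python
# Rough order-of-magnitude comparison (material properties and dimensions are
# assumptions; a steel disc stands in for the "pie").
rho = 8000.0       # density of steel, kg/m^3 (assumed)
E = 2.0e11         # Young's modulus of steel, Pa (assumed)
c = 3.0e8          # speed of light, m/s
r = 0.15           # pie radius, m (assumed)
omega = 100.0      # angular velocity, rad/s (assumed, ~16 revolutions/s)

v = omega * r

# Fractional radius increase from centrifugal loading, to order of magnitude:
centrifugal_strain = rho * omega**2 * r**2 / E

# Fractional relativistic correction at the rim, of order v^2 / (2 c^2):
relativistic_effect = v**2 / (2 * c**2)

print(f"centrifugal strain  ~ {centrifugal_strain:.2e}")
print(f"relativistic effect ~ {relativistic_effect:.2e}")
print(f"ratio               ~ {centrifugal_strain / relativistic_effect:.2e}")
# The ratio is roughly 2*rho*c^2/E ~ 10^9, independent of the spin rate, so
# the mechanical stretching swamps the relativistic effect.
```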
Why is it inconsistent?
I am really torn between wanting to downvote this as having no place on LW and going against the politics-talk taboo, and wanting to upvote it for being a clear, fair, and to-the-point summary of ideological differences I find fascinating.
You are the fourth or fifth person who has reached the same suspicion, as far as I know, independently. Which of course is moderate additional Bayesian evidence for its truth (at the very least, it means you are seeing an objective pattern even if it turns out to be coincidental, instead of being paranoid or deluded).