Is the story currently complete?
The issue with the first justification is that no one has actually claimed that the existence of such a rule is obvious or self-evident. Publicly holding a non-obvious belief does not obligate the holder to publicly justify that belief to the satisfaction of the author.
However, Yudkowsky also called the rule “straightforward” and said that
violating it this hugely and explicitly is sufficiently bad news that people should’ve been wary about this post and hesitated to upvote it for that reason alone
That is, he expected the majority of EA Forum members (at least) to also consider it a “basic rule”.
That right there shows autogynephilia isn’t a universal explanation.
Do any prominent pro-AGP people claim it is? Even when I see them described by their opponents, the claim is that there are two clusters of trans women and AGP people are one of them, so aroace trans women could belong to the other cluster without contradicting that theory.
There are similar claims in Russia as well, for what it’s worth.
and author intentionally cropped
The author is visible in the next screenshot, unless you meant something else (also, even if he wasn’t, the name is part of the URL).
If I were going to play chess against Magnus Carlsen I’d definitely study his games with a computer, and if that computer found a stunning refutation to an opening he liked I’d definitely play it.
Conditional on him continuing to play the opening, I would expect him to have a refutation to that refutation, but no reason to use the counter-refutation in public games against the computer. On the other hand, he may not want to burn it on you either.
is obviously different than what you said, though
To me it doesn’t seem to be? “condoned by social consensus” == “isn’t broadly condemned by their community” in the original comment. And
because the “social consensus” is something designed by people, in many cases with the explicit goal of including circles wider than “them and their friends”
doesn’t seem to work unless you believe that a majority of people both actively design the “social consensus” and have this goal; a majority of the people who design the consensus having this as a goal is not sufficient.
It’s explicitly the second:
But if they can do that with an AGI capable of ending the acute risk period, then they’ve probably solved most of the alignment problem. Meaning that it should be easy to drive the probability of disaster dramatically lower.
You might have confused “singularity” and “a singleton” (that is, a single AI (or someone using AI) getting control of the world)?
Cairo is a problem too, then (it was founded after Arthur lived).
It’s also interesting that apparently field experts only did about as well as the traditional students:
Differences between Fleet and ITTC participants were generally smaller and neither consistently positive nor negative.
Does experience not help at all?
I don’t believe the original novels imply that humanity nearly went extinct and then banded together; that was only in “the junk Herbert’s son wrote”. Or that Strong AI was developed only a short time before the Jihad started.
Neither of these is true in the Dune Encyclopedia version, which Frank Herbert at least didn’t strongly disapprove of.
There is still some Goodhart’s-Law-ing there, to quote https://dune.wikia.com/wiki/Butlerian_Jihad/DE:
After Jehanne’s death, she became a martyr, but her generals continued with exponentially more zeal. Jehanne knew her weaknesses and fears, but her followers did not. The politics of Urania were favored. Around that time, the goals of the Jihad were the destruction of machine technology operating at the expense of human values; but by this point they had been replaced by indiscriminate slaughter.
Whereas I can look at a regular triangle and see its ∆-ness from outside the simulation, I cannot do the same (let’s suppose) for keys of the right shape to open lock L.
Why suppose this and not the opposite? If you understand L well enough to immediately see whether a key opens it, does this make L-openingness intrinsic, so that intrinsicness/extrinsicness is relative to the observer?
And on the other hand, someone else needs to simulate a ruler to check for ∆-ness, so it is an extrinsic property to him.
Namely, goodness of a state of affairs is something that I can assess myself from outside a simulation of that state.
I certainly would consider this much more difficult than merely checking whether a key opens a lock. I could, after spending enough time, come to understand the lock well enough for this, but could I do the same for a complete state of affairs, e.g. on Earth?
I’ve taken the survey.
Most leftists … believe we can all agree on what crops to grow (what social values to have [2])
Whose slogan is “family values”, again?
and pull out and burn the weeds of nostalgia, counter-revolution, and the bourgeoisie
Or the weeds of revolution, hippies, and trade unions...
Conservatives view their own society the way environmentalists view the environment: as a complex organism best not lightly tampered with. They’re skeptical of the ability of new policies to do what they’re supposed to do, especially a whole bunch of new policies all enacted at once.
A bunch of new policies like the War on Drugs, for example?
I’ve taken the survey.
Second AI: If I just destroy all humans, I can be very confident any answers I receive will be from AIs!
The amount of line emission from a galaxy is thus a rough proxy for the rate of star formation – the greater the rate of star formation, the larger the number of large stars exciting interstellar gas into emission nebulae… Indeed, their preferred model to which they fit the trend converges towards a finite quantity of stars formed as you integrate total star formation into the future to infinity, with the total number of stars that will ever be born only being 5% larger than the number of stars that have been born at this time.
Is this a good proxy for total star formation, or only large star formation? Is it plausible that while no/few large stars are forming, many dwarfs are?
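For a rough sense of scale (my own back-of-the-envelope sketch, not anything from the article: I assume a Salpeter-like IMF over 0.1–100 solar masses and take “large” to mean above ~8 solar masses, the stars whose ionizing radiation powers the emission lines), only a small fraction of newly formed stellar mass ends up in those large stars even under normal conditions:

```python
# Sketch: under an assumed Salpeter IMF (dN/dm ~ m**-2.35, over 0.1-100 Msun;
# both the slope and the mass cuts are my assumptions), what fraction of
# newly formed stellar MASS is in "large" stars (> 8 Msun)?

def mass_integral(m1: float, m2: float, alpha: float = 2.35) -> float:
    """Integral of m * m**-alpha from m1 to m2 (unnormalized IMF, closed form)."""
    p = 2.0 - alpha  # exponent after weighting by mass and integrating
    return (m2**p - m1**p) / p

total = mass_integral(0.1, 100.0)    # all newly formed stellar mass
massive = mass_integral(8.0, 100.0)  # mass in stars above 8 Msun
print(f"mass fraction in stars above 8 Msun: {massive / total:.0%}")  # ~14%
```

So the proxy is essentially blind to the low-mass end; whether the mass function actually shifts toward dwarfs late in the universe’s history is a separate empirical question.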
But my point is that at some point, a “static analysis” becomes functionally equivalent to running it. If I do a “static analysis” to find out what the state of the Turing machine will be at each step, I will get exactly the same result (a sequence of states) that I would have gotten if I had run it for “real”, and I will have to engage in computation that is, in some sense, equivalent to the computation that the program asks for.
The crucial words here are “at some point”. And Benja’s original comment (as I understand it) says precisely that Omega doesn’t need to get to that point in order to find out, with high confidence, what Eliezer’s reaction to counterfactual mugging would be.
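To make the distinction concrete, here is a toy sketch (entirely my own example, nothing from the thread): a “static analysis” can sometimes pin down a program’s result far more cheaply than stepping through the computation, which is the sense in which a predictor needn’t run a full step-by-step simulation:

```python
# Toy illustration: predicting a program's output without simulating every step.

def run_program(n: int) -> int:
    """The 'real' computation: n iterations of a loop."""
    total = 0
    for i in range(n):
        total += i
    return total

def static_analysis(n: int) -> int:
    """A cheap analysis: recognizes that the loop computes a triangular number
    and returns the closed form without executing the n iterations."""
    return n * (n - 1) // 2

n = 10**6
assert static_analysis(n) == run_program(n)  # same answer, far less work
```

The analogy is loose, of course: predicting a human decision with high confidence would be a probabilistic version of this, not an exact closed form.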
In fact it seems that the linked argument relies on a version of the orthogonality thesis instead of being refuted by it:
Nothing about the argument contradicts “the true meaning of life”—which seems in that argument to be effectively defined as “whatever the AI ends up with as a goal if it starts out without a goal”—being e.g. paperclips.