No, I don’t believe he did, but I’ll save the critique of that paper for my upcoming “why MWI is flawed” post.
titotal
Motivation gaps: Why so much EA criticism is hostile and lazy
I’m not talking about the implications of the hypothesis, I’m pointing out the hypothesis itself is incomplete. To simplify, if you observe an electron which has a 25% chance of spin up and 75% chance of spin down, naive MWI predicts that one version of you sees spin up and one version of you sees spin down. It does not explain where the 25% or 75% numbers come from. Until we have a solution to that problem (and people are trying), you don’t have a full theory that gives predictions, so how can you estimate its Kolmogorov complexity?
I am a physicist who works in a quantum related field, if that helps you take my objections seriously.
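The gap can be illustrated numerically (a toy sketch of my own, with amplitudes chosen to match the 25%/75% example): the Born rule *postulates* probability = |amplitude|², while naive branch counting gives 50/50, and only the former matches experiment.

```python
import math
import random
from collections import Counter

# Hypothetical amplitudes chosen to match the 25% / 75% example above.
amp_up = math.sqrt(0.25)
amp_down = math.sqrt(0.75)

# Born rule (a postulate, not something derived from naive MWI): P = |amplitude|^2
born = {"up": amp_up ** 2, "down": amp_down ** 2}

# Naive branch counting: one successor sees each outcome, so 50/50.
naive = {"up": 0.5, "down": 0.5}

# Simulated repeated measurements agree with the Born weights, not with
# branch counting -- and branching alone doesn't tell you why.
random.seed(0)
counts = Counter(random.choices(["up", "down"],
                                weights=[born["up"], born["down"]],
                                k=100_000))
print(born, naive, counts)
```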
It’s the simplest explanation (in terms of Kolmogorov complexity).
Do you have proof of this? I see this stated a lot, but I don’t see how you could know this when certain aspects of MWI theory (like how you actually get the Born probabilities) are unresolved.
The basic premise of this post is wrong, based on the strawman that an empiricist/scientist would only look at a single piece of information. You have the empiricist and scientist just looking at the return on investment of Bankman’s scheme, and extrapolating blindly from there.
But an actual empiricist looks at all the empirical evidence. They can look at the average rate of return of a typical investment, noting that this one is unusually high. They can learn how the economy works and figure out whether there is any plausible mechanism for returns like these. They can look up economic history, and note that Ponzi schemes are a thing that exists and happens reasonably often. From all the empirical evidence, the conclusion “this is a Ponzi scheme” is not particularly hard to arrive at.
Your “scientist” and “empiricist” characters are neither scientists nor empiricists: they are blathering morons.
As for AI risk, you’ve successfully knocked down the very basic argument that AI must be safe because it hasn’t destroyed us yet. But that is not the core of any skeptic’s argument that I know.
Instead, an actual empiricist skeptic might look at the actual empirical evidence involved. They might say hey, a lot of very smart AI developers have predicted imminent AGI before and been badly wrong, so couldn’t this be that again? A lot of smart people have also predicted the doom of society, and they’ve also been wrong, so couldn’t this be that again? Is there a reasonable near-term physical pathway by which an AI could actually carry out the destruction of humanity? Is there any evidence of active hostile rebellion of AI? And then they would balance that against the empirical evidence you have provided to come to a conclusion on which side is stronger.
Which, really, is also what a good epistemologist would do? This distinction does not make sense to me, it seems like all you’ve done is (perhaps unwittingly) smeared and strawmanned scientists.
The Leeroy Jenkins principle: How faulty AI could guarantee “warning shots”
I think some of the quotes you put forward are defensible, even though I disagree with their conclusions.
Like, Stuart Russell was writing an opinion piece in a newspaper for the general public. Saying AGI is “sort of like” meeting an alien species seems like a reasonable way to communicate his views, while making it clear that the analogy should not be treated as 1 to 1.
Similarly, with Rob Wiblin, he’s using the analogy to get across one specific point: that future AI may be very different from current AI. He also disclaims with the phrase “a little bit like” so people don’t take it too seriously. I don’t think people would come away from reading this thinking that AI is directly analogous to an octopus.
Now, compare these with Yudkowsky’s terrible analogy. He states outright: “The AI is an unseen actress who, for now, is playing this character.” No disclaimers, no specifying which part of the analogy is important. It directly leads people into a false impression of how current-day AI works, based on an incredibly weak comparison.
Why Yudkowsky is wrong about “covalently bonded equivalents of biology”
Right, and when you do wake up, before the machine is opened and the planet you are on is revealed, you would expect to see yourself in planet A 50% of the time in scenario 1, and 33% of the time in scenario 2?
What’s confusing me is with scenario 2: say you are actually on planet A, but you don’t know it yet. Before the split, it’s the same as scenario 1, so you should expect to be 50% on planet A. But after the split, which occurs to a different copy ages away, you should expect to be 33% on planet A. When does the probability change? Or am I confusing something here?
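For concreteness, here is the copy-counting arithmetic behind those numbers (a sketch assuming you weight each copy equally, which is itself a contested anthropic assumption; the `credence` helper is my own hypothetical construction):

```python
from fractions import Fraction

def credence(copies_per_planet, planet):
    # Weight every copy equally (a contested anthropic assumption).
    total = sum(copies_per_planet.values())
    return Fraction(copies_per_planet[planet], total)

# Scenario 1: one copy on planet A, one on planet B.
s1 = credence({"A": 1, "B": 1}, "A")
# Scenario 2: the planet-B copy is later split in two, giving A:1, B:2.
s2 = credence({"A": 1, "B": 2}, "A")
print(s1, s2)  # 1/2 and 1/3
```

The puzzle in the comment above is exactly that the split happens far away from the copy on A, yet this tally changes from 1/2 to 1/3.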
[Question] Thoughts on teletransportation with copies?
While Wikipedia can definitely be improved, I think it’s still pretty damn good.
I really cannot think of a better website on the internet, in terms of informativeness and accuracy. I suppose something like Khan academy or so on might be better for special topics, but they don’t have the breadth that Wikipedia does. Even google search appears to be getting worse and worse these days.
Okay, I’m gonna take my skeptical shot at the argument, I hope you don’t mind!
an AI that is *better than people at achieving arbitrary goals in the real world* would be a very scary thing, because whatever the AI tried to do would then actually happen
It’s not true that whatever the AI tried to do would happen. What if an AI wanted to travel faster than the speed of light, or prove that 2+2=5, or destroy the sun within 1 second of being turned on?
You can’t just say “arbitrary goals”, you have to actually explain what goals there are that would be realistically achievable by a realistic AI that could actually be built in the near future. If those abilities fall short of “destroy all of humanity”, then there is no x-risk.
As stories of magically granted wishes and sci-fi dystopias point out, it’s really hard to specify a goal that can’t backfire
This is fictional evidence. Genies don’t exist, and if they did, it probably wouldn’t be that hard to add enough caveats to your wish to prevent global genocide. A counterexample might be the use of laws: sure, there are loopholes, but not big enough that the law would let you off on a broad daylight killing spree.
Current AI systems certainly fall far short of being able to achieve arbitrary goals in the real world better than people, but there’s nothing in physics or mathematics that says such an AI is *impossible*
Well, there are laws of physics and mathematics that put limits on available computational power, which in turn put a limit on what an AI can actually achieve. For example, a perfect Bayesian reasoner is forbidden by the laws of mathematics.
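To make the computational point concrete (a back-of-the-envelope sketch of my own, not part of the original argument): even restricting to deterministic hypotheses over n-bit inputs, the hypothesis space a perfect Bayesian reasoner would have to update over grows doubly exponentially, and the fully general version (Solomonoff induction) is uncomputable outright.

```python
# Number of boolean functions on n-bit inputs: 2^(2^n). Exact Bayesian
# updating assigns and renormalizes a probability for every one of them,
# which is hopeless long before n gets interesting.
sizes = {n: 2 ** (2 ** n) for n in range(1, 6)}
for n, size in sizes.items():
    print(n, size)  # 4, 16, 256, 65536, 4294967296
```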
If Ilya was willing to cooperate, the board could fire Altman, with the Thanksgiving break available to aid the transition, and hope for the best.
Alternatively, the board could choose once again not to fire Altman, watch as Altman finished taking control of OpenAI and turned it into a personal empire, and hope this turns out well for the world.
Could they not have also gone with option 3: fill the vacant board seats with sympathetic new members, thus thwarting Altman’s power play internally?
Alternative framing: The board went after Altman with no public evidence of any wrongdoing. This appears to have backfired. If they had proof of significant malfeasance, and presented it to their employees, the story may have gone a lot differently.
Applying this to the AGI analogy would be a statement that you can’t shut down an AGI without proof that it is faulty or malevolent in some way. I don’t fully agree, though: I think if a similar AGI design had previously committed a mass murder, people would be more willing to hit the off switch early.
Civilization involves both nice and mean actions. It involves people being both nice and mean to each other.
From this perspective, if you care about Civilization, optimizing solely for niceness is as meaningless and ineffective as optimizing for meanness.
Who said anything about optimizing solely for niceness? Everyone has many different values that sometimes conflict with each other, that doesn’t mean that “niceness” shouldn’t be one of them. I value “not killing people”, but I don’t optimize solely for that: I would still kill Mega-Hitler if I had the chance.
Would you rather live in a society that valued “niceness, community and civilization”, or one that valued “meanness, community and civilization”? I don’t think it’s a tough choice.
I think that being mean is sometimes necessary in order to preserve other, more important values, but that doesn’t mean that you shouldn’t be nice, all else being equal.
Partially, but it is still true that Eliezer was critical of NNs at the time; see the comment on the post:
I’m no fan of neurons; this may be clearer from other posts.
“position” is nearly right. The more correct answer would be “position of one photon”.
If you had two electrons, say, you would have to consider their joint configuration. For example, one possible wavefunction would look like the following, where the blobs represent high amplitude areas:
This is still only one-dimensional: the two electrons are at different points along a line. I’ve entangled them, so if electron 1 is at position P, electron 2 can’t be there too.
Now, try and point me to where electron 1 is on the graph above.
You see, I’m not graphing electrons here, and neither were you. I’m graphing the wavefunction. This is where your phrasing seems a little weird: you say the electron is the collection of amplitudes you circled, but those amplitudes are attached to configurations saying “the electron is at position x1” or “the electron is at position x2”. It seems circular to me. Why not describe that lump as “a collection of worlds where the electron is in a similar place”?
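Here is the same setup as a numerical toy (my own illustrative construction, not taken from the thread): the amplitudes live on pairs (x1, x2), i.e. on configurations, and all you can extract about “electron 1” alone is a marginal distribution.

```python
import numpy as np

# psi[i, j] = amplitude for the configuration
# "electron 1 at x[i] AND electron 2 at x[j]".
x = np.linspace(-5, 5, 200)
x1, x2 = np.meshgrid(x, x, indexing="ij")

def blob(a, b):
    return np.exp(-((x1 - a) ** 2 + (x2 - b) ** 2))

psi = blob(-2.0, 2.0) - blob(2.0, -2.0)   # two entangled high-amplitude blobs
psi /= np.sqrt(np.sum(np.abs(psi) ** 2))  # normalize on the grid

# Swapping the electrons flips the sign: they are indistinguishable fermions.
assert np.allclose(psi.T, -psi)

# "Where is electron 1?" has no single answer -- only a marginal
# distribution, obtained by summing |psi|^2 over electron 2.
marginal_1 = np.sum(np.abs(psi) ** 2, axis=1)  # two peaks, near x = -2 and x = +2
print(marginal_1.shape)
```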
If you have N electrons in 3D space, the wavefunction is not a vector in 3D space (god I wish, it would make my job a lot easier). It’s a function over a 3N-dimensional configuration space (plus time), something like:

Ψ(r1, r2, …, rN, t)

where r1, r2, etc. point to the locations of electrons 1, 2, 3, etc. Each possible configuration of electron 1 here, electron 2 there, and so on has an amplitude attached, with configurations that are more often encountered empirically having higher amplitudes.
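As a rough sense of scale (illustrative numbers only): storing one complex amplitude per grid configuration blows up immediately, even on a coarse grid.

```python
g = 100  # grid points per axis (a deliberately coarse choice)
for N in (1, 2, 3):
    points = g ** (3 * N)          # configurations of N electrons in 3D
    gigabytes = points * 16 / 1e9  # complex128 = 16 bytes per amplitude
    print(N, points, gigabytes)
```

One electron needs ~0.016 GB; two electrons already need ~16,000 GB; three are far beyond any computer.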
Nice graph!
But as a test, may I ask what you think the x-axis of the graph you drew is? Ie: what are the amplitudes attached to?
I’m not claiming the conceptual boundaries I’ve drawn or terminology I’ve used in the diagram above are standard or objective or the most natural or anything like that. But I still think introducing probabilities and using terminology like “if you now put a detector in path A , it will find a photon with probability 0.5” is blurring these concepts together somewhat, in part by placing too much emphasis on the Born probabilities as fundamental / central.
I think you’ve already agreed (or at least not objected to) saying that the detector “found the photon” is fine within the context of world A. I assume you don’t object to me saying that I will find the detector flashing with probability 0.5. And I assume you don’t think me and the detector should be treated differently. So I don’t think there’s any actual objection left here, you just seem vaguely annoyed that I mentioned the empirical fact that amplitudes can be linked to probabilities of outcomes. I’m not gonna apologise for that.
Okay, let me break it down in terms of actual states, and this time, let’s add in the actual detection mechanism, say an electron in a potential well. Say the detector starts in the ground state, E=0, and the absorption of a photon will bump it up to the next-highest state, E=1. We will place this detector in path A, but no detector in path B.
At time t = 0, our toy wavefunction is:
1/sqrt2 |photon in path A, detector E=0> + 1/sqrt2 |photon in path B, detector E=0>
If the photon in A collides with the detector at time t = 1, then at time t = 2, our evolved wavefunction is:
1/sqrt2 |no free photon, detector E=1> + 1/sqrt2 |photon in path B, detector E=0>
Within the context of world A, a photon was found by the detector. This is a completely normal way to think and talk about this.
I think it’s straight up wrong to say “the photon is in the detector and in path B”. Nature doesn’t label photons, and it doesn’t distinguish between them. And what is actually in world A is an electron in a higher energy state: it would be weird to say it “contains” a photon inside of it.
Quantum mechanics does not keep track of individual objects, it keeps track of configurations of possible worlds, and assigns amplitudes to each possible way of arranging everything.
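The toy states above can be written out as explicit vectors (my own sketch; the three-element basis ordering is an arbitrary choice): the detector interaction is just a unitary that swaps "photon in A, detector E=0" with "no free photon, detector E=1", leaving the path-B component untouched.

```python
import numpy as np

# Basis ordering (arbitrary choice for this sketch):
#   0: |photon in path A, detector E=0>
#   1: |photon in path B, detector E=0>
#   2: |no free photon,   detector E=1>
psi_t0 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)

# Absorption in path A: swap basis states 0 and 2 (a unitary operation),
# and do nothing to the path-B component.
U = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0]])

psi_t2 = U @ psi_t0
branch_weights = np.abs(psi_t2) ** 2
print(psi_t2)          # amplitude has moved from basis state 0 to basis state 2
print(branch_weights)  # weight 0.5 on each surviving branch
```

Note that the bookkeeping tracks amplitudes over whole configurations; nowhere is there a labelled photon that is "in" the detector.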
From here onwards? Most of those tweets that ChatGPT generated are not noticeably different from the background noise of political Twitter (which is what it was trained on anyway). Also, Twitter is not published media, so I’m not sure where this statement comes from.
You should be willing to absorb information from published media with healthy skepticism, based on the source and an awareness of potential bias. This was true before ChatGPT, and will still be true in the future.