If you have feedback for me, you can fill out the form at https://forms.gle/kVk74rqzfMh4Q2SM6 .
Or you can email me, at [the second letter of the alphabet]@[my username].net
Surprisingly to me, Claude 3.5 Sonnet is much more consistent in its answer! It is still not perfect, but it usually says the same thing (9/10 times it gave the same answer).
From the “obvious-but-maybe-worth-mentioning” file:
ChatGPT (4 and 4o at least) cheats at 20 questions:
If you ask it “Let’s play a game of 20 questions. You think of something, and I ask up to 20 questions to figure out what it is.”, it will typically claim to “have something in mind”, and then appear to play the game with you.
But it doesn’t store hidden state between messages, so when it claims to “have something in mind”, either that’s false, or at least it has no way of following the rule that it’s thinking of a consistent thing throughout the game. i.e. its only options are to cheat or refuse to play.
You can verify this by responding “Actually, I don’t have time to play the whole game right now. Can you just tell me what it was you were thinking of?”, and then “refreshing” its answer. When I did this 10 times, I got 9 different answers and only one repeat.
Sometimes people use “modulo” to mean something like “depending on”, e.g. “seems good, modulo the outcome of that experiment” [correct me ITT if you think they mean something else; I’m not 100% sure]. Does this make sense, assuming the term comes from modular arithmetic?
Like, in modular arithmetic you’d say “5 is 3, modulo 2”. It’s kind of like saying “5 is the same as 3, if you only consider their relationship to the modulus 2”. This seems pretty different from the usage I’m wondering about; almost its converse: to import the local English meaning of “modulo”, you’d be saying “5 is the same as 3, as long as you’ve taken their relationship to the modulus 2 into account”. This latter statement is false; 5 and 3 are super different even if you’ve taken this relationship into account.
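To make the mathematical sense concrete, here’s a minimal sketch (the `congruent` helper is mine, just for illustration): two numbers are “the same, modulo m” exactly when their difference is a multiple of m.

```python
def congruent(a: int, b: int, modulus: int) -> bool:
    """True iff a is congruent to b (mod modulus)."""
    return (a - b) % modulus == 0

# "5 is 3, modulo 2": same equivalence class mod 2
print(congruent(5, 3, 2))  # True

# But mod 3, 5 and 3 are *not* the same
print(congruent(5, 3, 3))  # False
```

So the math meaning collapses distinctions (ignore everything except the remainder), which is roughly the opposite of “once you’ve accounted for this factor”.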
But the sense of the original quote doesn’t work with the mathematical meaning: “seems good, if you only consider the outcome of that experiment and nothing else”.
Is there a math word that means the thing people want “modulo” to mean?
Well, not that much, right? If you had an 11-word diceware passphrase to start, each word is about 7 characters on average, so you have maybe 90 places to insert a token—only 6.5 extra bits come from choosing a place to insert your character. And of course you get the same added entropy from inserting a random 3 base32 chars at a random location.
Happy to grant that a cracker assuming no unicode won’t be able to crack your password, but if that’s your goal then it might be a bad idea to post about your strategy on the public internet ;)
maybe; probably the easiest way to do this is to choose a random 4-digit hexadecimal number, which gives you 16 bits when you enter it (e.g. via ctrl+u on linux). But personally I think I’d usually rather just enter those hex digits directly, for the same entropy minus a keystroke. Or, even better, maybe just type a random 3-character base32 string for one fewer bit.
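The arithmetic in this thread can be checked directly. A quick sketch (the 90 insertion positions are the estimate from the comment above, not an exact count):

```python
import math

# Entropy, in bits, of each choice discussed above.
positions = 90                       # rough count of places to insert a token
insert_bits = math.log2(positions)   # ~6.5 bits from choosing the position

hex_bits = 4 * math.log2(16)         # 4 random hex digits -> 16 bits
base32_bits = 3 * math.log2(32)      # 3 random base32 chars -> 15 bits

print(f"position choice: {insert_bits:.1f} bits")
print(f"4 hex digits:    {hex_bits:.0f} bits")
print(f"3 base32 chars:  {base32_bits:.0f} bits")
```

This matches the claims above: picking an insertion point adds about 6.5 bits, four hex digits add 16, and three base32 characters add 15 (one bit fewer, one keystroke fewer).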
Some thoughts after doing this exercise:
I did the exercise because I couldn’t sleep; I didn’t keep careful count of the time, and I didn’t do it all in one sitting. I’d guess I spent about an hour on it total, but I think there’s a case to be made that this was cheating. However, “fresh eyes” is actually a really killer trick when doing this kind of exercise, in my experience, and it’s usually available in practice. So I don’t feel too bad about it.
I really really dislike the experience of saying things I think are totally stupid, and I currently don’t buy that I should start trying to say stupider things. My favorite things in the above list came from refusing to just say another totally stupid thing. Nearly everything in my list is stupid in some way, but the things that are so stupid they don’t even feel interesting basically make me feel sad. I trust my first-round aesthetic pruner to actually be helping to train my babbler in constructive directions.
The following don’t really feel worth having said, to me:
Throw it really hard
Catapult
Kick it really hard
Wormhole
Nuclear explosion based craft
My favorites didn’t come after spewing this stuff; instead they came when I refused to be okay with just saying more of that kind of junk:
Move the thing upward by one foot per day
Name the thing “420 69 Doge To The Moon” and hope Elon takes the bait
The various bogo-send options
Optical tweezers
The difference isn’t really that these are less stupid; in fact they’re kind of more stupid, practically speaking. But I actually viscerally like them, unlike the first group. Forcing myself to produce things I hate feels like a bad strategy on lots of levels.
A thing that was going through my head but I wasn’t sure how to turn into a real idea (vulgar language from a movie):
Perhaps you would like me to stop the car and you two can fuck yourselves to Lutsk!
Whoa. I also thought of this, though for me it was like thing 24 or something, and I was too embarrassed to actually include it in my post.
Hire SpaceX to send it
Bribe an astronaut on the next manned moon mission to bring it with them
Bribe an engineer on the next robotic moon mission to send it with the rover
Get on a manned mars mission, and throw it out the airlock at just the right speed
Massive evacuated sphere (like a balloon but arbitrarily light), aimed very carefully
Catapult
Send instructions on how to build a copy of the thing, and where to put it, such that an alien race will do it as a gesture of goodwill
Same, but with an incentive of some kind
Same, but do it acausally
Make a miniature moon and put the thing on that
Build an AGI with the goal of putting the thing on the moon with 99% confidence, with minimum impact to other things
Carve the thing out of the moon’s surface, using lasers from satellites around Earth
Build a reverse space elevator: the earth is in a luno-stationary orbit due to tidal locking, so you could in principle build an extremely tall tower on the moon’s surface that came relatively close to earth. Then, you could lower objects down that tower after launching them a relatively short distance, exchanging them for moonrock ballast.
Quantum-bogo-send it: check to see if the thing has materialized on the moon. If it hasn’t, destroy this everett branch.
Tegmark-1-bogo-send it: check to see if the thing has materialized on the moon. If it hasn’t, destroy a large local region of space.
Tegmark-4-bogo-send it: check to see if the thing has materialized on the moon. If it hasn’t, derive a logical contradiction
Pray for God to send the thing to the moon
Offer to sell your soul to the devil in exchange for the thing being sent to the moon
Ask everyone on LessWrong to generate 50 ideas each on how to send a thing to the moon, and do the best one
Ask everyone on LessWrong to generate 50 ideas each on how to send a thing to the moon, and do the worst one
Ask everyone on LessWrong to generate 50 ideas each on how to send a thing to the moon, and do all of them
Ask everyone on LessWrong to generate 50 ideas each on how to send a thing to the moon, put all the letters from all the answers into a big bag, and shake it and draw from it repeatedly until you draw a sentence that describes a strategy for sending a thing to the moon, and then do that
Somehow annihilate the earth (except for the thing). The thing will then probably fall to the moon? Probably; figure out whether that’s right before annihilating the earth
Pull a Raymond-Smullyan-style “will you answer my next question honestly?” scam on the director of NASA, forcing him to kiss you… er… I mean, send the thing to the moon
Wait until moon tourism is cheap
Start a religion whose central tenets include the belief that this thing being on the moon is a prerequisite for the creation of a universal utopia
Non-reverse-space-elevator: build a space elevator, and then throw the thing off the top when the moon is nearby
Big ol’ rocket
Nuclear explosion based craft
Wormhole
Unrealistically-good weather control, allowing you to harness the motion of the molecules in the atmosphere to propel objects however you want via extremely careful placement.
Redefine or reconceptualize “the moon” to mean wherever the thing is already
Redefine or reconceptualize “thing” to mean a thing that’s already on the moon
Redefine or reconceptualize “send” to mean keeping the sent thing away from the target
Build an extremely detailed simulation of the moon with the thing on it
Wait for the sun to engulf the earth-moon system, mixing what’s-left-of-the-thing up with what’s-left-of-the-moon
Propel the earth, “wandering earth”-style, to become a moon of Jupiter. Now at least the thing is on a moon.
Propel the earth, “wandering earth”-style, to collide with the moon, and be sure the thing is located at the point of collision
Throw it really hard
Gun
Put your face between a really big grapefruit and the moon, put the thing in the grapefruit, and then insert a spoon into the grapefruit. When the grapefruit squirts at your face, pull away quickly
Make a popular movie that involves the thing being sent to the moon, in a very memeable way, and hope Elon takes the bait
Name the thing “420 69 Doge To The Moon” and hope Elon takes the bait
So, y’know how you can levitate things in ultrasonic standing waves? Can you do that with light waves on a super small scale? I think you can, and I think I’ve seen some IBM animation that was made this way? “optical tweezers”, was it called? So, do that, with the standing waves slowly drifting up toward the moon
Eh; things seeming to retain a particular identity over time is just a useful fiction—“the thing” next year is just a subset of the causal results of the thing as it is now, not really any more special than any other causal results of the thing as it is now. So since the moon is in the thing’s future light cone already, the job is more-or-less already accomplished.
Turn back time to the moment when the parts of the thing were most recently intermixed with the parts of the moon. Maybe the big bang? or maybe some more recent time.
Starting somewhere on the equator, move the thing upward by one foot. Tomorrow, move it up by another foot. Continue until you reach the moon. Surely it’s never all that hard to just move the thing one more foot, right?
Kick it really hard
Nanobot swarm
Adult-sized stomp rocket
(I’ve added my $50 to RatsWrong’s side of this bet)
For contingent evolutionary-psychological reasons, humans are innately biased to prefer “their own” ideas, and in that context, a “principle of charity” can be useful as a corrective heuristic
I claim that the reasons for this bias are, in an important sense, not contingent. i.e. an alien race would almost certainly have similar biases, and the forces in favor of this bias won’t entirely disappear in a world with magically-different discourse norms (at least as long as speakers’ identities are attached to their statements).
As soon as I’ve said “P”, it is the case that my epistemic reputation is bound up with the group’s belief in the truth of P. If people later come to believe P, it means that (a) whatever scoring rule we’re using to incentivize good predictions in the first place will reward me, and (b) people will update more on things I say in the future.
If you wanted to find convincing evidence for P, I’m now a much better candidate to find that evidence than someone who has instead said “eh; maybe P?” And someone who has said “~P” is similarly well-incentivized to find evidence for ~P.
I would agree more with your rephrased title.
People do actually have a somewhat-shared set of criteria in mind when they talk about whether a thing is safe, though, in a way that they (or at least I) don’t when talking about its qwrgzness. e.g., if it kills 99% of life on earth over a ten year period, I’m pretty sure almost everyone would agree that it’s unsafe. No further specification work is required. It doesn’t seem fundamentally confused to refer to a thing as “unsafe” if you think it might do that.
I do think that some people are clearly talking about meanings of the word “safe” that aren’t so clear-cut (e.g. Sam Altman saying GPT-4 is the safest model yet™️), and in those cases I agree that these statements are much closer to “meaningless”.
Part of my point is that there is a difference between the fact of the matter and what we know. Some things are safe despite our ignorance, and some are unsafe despite our ignorance.
The issue is that the standards are meant to help achieve systems that are safe in the informal sense. If they don’t, they’re bad standards. How can you talk about whether a standard is sufficient, if it’s incoherent to discuss whether layperson-unsafe systems can pass it?
I don’t think it’s true that the safety of a thing depends on an explicit standard. There’s no explicit standard for whether a grizzly bear is safe. There are only guidelines about how best to interact with them, and information about how grizzly bears typically act. I don’t think this implies that it’s incoherent to talk about the situations in which a grizzly bear is safe.
Similarly, if I make a simple html web site “without a clear indication about what the system can safely be used for… verification that it passed a relevant standard, and clear instruction that it cannot be used elsewhere”, I don’t think that’s sufficient for it to be considered unsafe.
Sometimes a thing will reliably cause serious harm to people who interact with it. It seems to me that this is sufficient for it to be called unsafe. Sometimes a thing will reliably cause no harm, and that seems sufficient for it to be considered safe. Knowledge of whether a thing is safe or not is a different question, and there are edge cases where a thing might occasionally cause minor harm. But I think the requirement you lay out is too stringent.
I think I agree that this isn’t a good explicit rule of thumb, and I somewhat regret how I put this.
But it’s also true that a belief in someone’s good-faith engagement (including an onlooker’s), and in particular their openness to honest reconsideration, is an important factor in the motivational calculus, and for good reasons.
I think it’s pretty rough for me to engage with you here, because you seem to be consistently failing to read the things I’ve written. I did not say it was low-effort. I said that it was possible. Separately, you seem to think that I owe you something that I just definitely do not owe you. For the moment, I don’t care whether you think I’m arguing in bad faith; at least I’m reading what you’ve written.
Thanks everyone for thoughts so far! I do want to emphasize that we’re actually highly interested in collecting even the most “obvious” evidence in favor of or against these ideas. In fact, in many ways we’re more interested in the obvious evidence than in reframes or conceptual problems in the ideas here; of course we want to be updating our beliefs, but we also want to get a better understanding of the existing state of concrete evidence on these questions. This is partly because we consider it part of our mission to expand the amount and quality of relevant evidence on these beliefs, and are trying to ensure that we’re aware of existing work.