A Series of Increasingly Perverse and Destructive Games
Related to: Higher Than the Most High
The linked post describes a game in which (I fudge a little) Omega comes to you and two other people and asks each of you to tell him an integer. The person who names the largest integer is allowed to leave. The other two are killed.
This got me thinking about variations on the same concept, and here’s what I’ve come up with, taking that game to be GAME0. The results are sort of a fun time-waster, and bring up some interesting issues. For your enjoyment...
THE GAMES:
GAME1: Omega takes you and two strangers (all competent programmers), kidnaps you, and sedates you. The three of you awake in separate rooms, each with instructions printed on the wall explaining the game and a computer with an operating system and a programming-language compiler, but no internet. Food, water, and toiletries are provided, but no external communication. The participants are allowed to write programs on the computer in a language that supports arbitrarily large numerical values. The programs are taken by Omega and run on a hypercomputer in finite time (this hypercomputer can resolve the halting problem and infinite loops, but programs that do not eventually halt return no output). The person who wrote the program with the largest output is allowed to leave. The others are instantly and painlessly killed. In the event of a tie, everyone dies. If your program returns no output, that is taken to be zero.
GAME2: Identical to GAME1, except that each program you write has to take two inputs, which will be the text of the other players’ programs (assume they’re all written in the same language). The reward for outputting the largest number applies normally.
GAME3: Identical to GAME2, except that while you are sedated, Omega painlessly and imperceptibly uploads you. Additionally, the instructions on the wall now specify that your program must take four inputs—blackbox functions which represent the uploaded minds of all three players, plus a simulation of the room you’re in, indistinguishable from the real thing. We’ll assume that players can’t modify or interpret the contents of their opponents’ brains. The room function takes a string argument (which controls the text printed on the wall) and outputs whatever number the program written by the simulated person returns.
In each of these games, which program should you write if you wish to survive?
SOME DISCUSSION OF STRATEGY:
GAME1: Clearly, the trivial strategy (implement the Ackermann function or some similarly fast-growing function and generate some large integer) gives no better than random results, because it’s the bare-minimum strategy anyone will employ, and without knowledge of your opponents, your ranking in the results comes down entirely to chance / how long you’re willing to sit there typing nines for your Ackermann argument.
A few alternatives for your consideration:
1: If you are aware of an existence hypothesis (say, a number with some property which is not conclusively known to exist and could be any integer), write a program that brute-force tests all integers until it arrives at one which matches the requirements, and use this as the argument for your rapidly-growing function. While it may never return any output, if it does, the output will be an integer, and the expected value goes towards infinity. (A sketch of this follows after the list.)
2: Write a program that generates all programs shorter than length n, and finds the one with the largest output. Then make a separate stab at your own non-meta winning strategy. Take the length of the program you produce, tetrate it for safety, and use that as your length n. Return the return value of the winning program. (Also sketched below.)
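To make strategy 1 concrete, here is a minimal Python sketch. The choice of existence hypothesis is mine for illustration: whether an odd perfect number exists is a genuinely open question, so the search below may never finish, and under the game’s rules a non-halting program simply counts as outputting zero. If it does finish, the number it finds is known to be enormous, and it then gets fed to the Ackermann function.

def is_odd_perfect(n):
    # True if n is odd and equal to the sum of its proper divisors.
    return n % 2 == 1 and n == sum(d for d in range(1, n) if n % d == 0)

def ackermann(m, n):
    # The standard two-argument Ackermann function.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

def strategy_one():
    k = 1
    while not is_odd_perfect(k):   # may loop forever; Omega treats no output as zero
        k += 1
    return ackermann(k, k)

print(strategy_one())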
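Strategy 2 leans on Omega’s hypercomputer, and (as commenters point out below) can’t actually be carried out without it. In this sketch, all_programs_shorter_than(), halts(), and run_to_completion() are hypothetical helpers standing in for the program enumerator, the halting oracle, and actually executing a program text; only tetrate() is real code.

def tetrate(base, height):
    # base^^height: a power tower of `height` copies of base.
    result = 1
    for _ in range(height):
        result = base ** result
    return result

def strategy_two(my_best_nonmeta_program_text):
    # "Take the length of the program you produce, tetrate it for safety."
    length = len(my_best_nonmeta_program_text)
    n = tetrate(length, length)
    best = 0
    for p in all_programs_shorter_than(n):        # hypothetical enumerator
        if halts(p):                              # hypothetical halting oracle
            best = max(best, run_to_completion(p))  # hypothetical executor
    return best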
On the whole, though, this game is simply not all that interesting in a broader sense.
GAME2: This game has its own amusing quirks (primarily that it could probably actually be played in real life on a non-hypercomputer); however, most of its salient features are also present in GAME3, so I’m going to defer discussion to that. I’ll only say that the obvious strategy (sum the outputs of the other two players’ programs and return that) leads to an infinite recursive trawl and never halts if everyone takes it. This holds true for any simple strategy of adding or multiplying some constant with the outputs of your opponents’ programs.
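A toy illustration of that failure mode, assuming a hypothetical run() helper that executes another player’s submitted program with the inputs it would receive in the game:

def my_program(opponent_a_source, opponent_b_source):
    # The "obvious" GAME2 strategy: return the sum of the opponents' outputs.
    # If all three players submit this, every call spawns two more calls and
    # the mutual recursion never bottoms out, so nobody's program halts.
    return run(opponent_a_source) + run(opponent_b_source)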
GAME3: This game is by far the most interesting. For starters, this game permits acausal negotiation between players (by parties simulating and conversing with one another). Furthermore, anthropic reasoning plays a huge role, since the player is never sure if they’re in the real world, one of their own simulations, or one of the simulations of the other players.
Players can negotiate, barter, or threaten one another; they can attempt to send signals to their simulated selves (to indicate that they are in their own simulation and not somebody else’s). They can make their choices based on coin flips, to render themselves difficult to simulate. They can attempt to brute-force the signals their simulated opponents are expecting. They can simulate copies of their opponents who think they’re playing any previous version of the game, and are unaware they’ve been uploaded. They can simulate copies of their opponents, observe their meta-strategies, and plan around them. They can totally ignore the inputs from the other players and play just the level one game. It gets very exciting very quickly. I’d like to see what strategy you folks would employ.
And, as a final bonus, I present GAME4: In game 4, there is no Omega, and no hypercomputer. You simply take a friend, chloroform them, and put them in a concrete room with the instructions for GAME3 on the wall, and a linux computer not plugged into anything. You leave them there for a few months working on their program, and watch what happens to their psychology. You win when they shrink down into a dead-eyed, terminally-paranoid and entirely insane shell of their former selves. This is the easiest game.
Happy playing!
I think I can win Game 1 against almost anyone—in other words, I think I have a larger computable number than any sort of computable number I’ve seen anyone describe in these sorts of contests, where the top entries typically use the fast-growing hierarchy for large recursive ordinals, in contests where Busy Beaver and beyond aren’t allowed.
Game 2 is interesting. My first thought was that running the other person’s program and adding 1 to the result guarantees that they die—either their program doesn’t halt, or your program is larger. So my first thought was that it just reduced to 3 players who can choose whether to kill each other or not, at least 2 of whom have to die, with no solution except from TDT-type correlations. But suppose I output a large number without looking at my opponents’ code, and my opponents both try to run the other two programs and add the outputs together, plus 1. They both go into an infinite loop and I win. There may be some nontrivial Nash-style equilibrium to be found here.
Okay I have to ask. Care to provide a brief description? You can assume familiarity with all the standard tricks if that helps.
I suspect that TREE(3) is the number he’s referring to, since I’ve seen it mentioned on LW a few times, and it seems to be larger than most other numbers that are considered notable for their largeness.
Of course, it’s easy to come up with a number larger than TREE(3).
I dunno, is that tree sequence related to the busy beaver sequence? And speaking of which, what kind of big number contest bans busy beaver? Srsly : <
As far as I know, the TREE function has no particular relationship to the busy beaver functions. The TREE function is computable, whereas the busy beaver functions are not.
I wonder how TREE(3) compares to Loader’s number. If I understand correctly, if Kruskal’s tree theorem can be proved in the calculus of constructions using a reasonable number of symbols (where 3^^^3 counts as a reasonable number, but TREE(3) does not), then Loader’s number is much larger.
Edit: Wikipedia states that Friedman’s special cases of Kruskal’s tree theorem can “easily” be proved in second-order arithmetic, which can be expressed in the calculus of constructions. I’m pretty sure this means that the TREE function can be written in the calculus of constructions using a reasonable number of symbols, meaning that Loader’s number is much larger than TREE(n) for any reasonable value of n.
A computable one. The OP is not clear about how his hypercomputer is solving the halting problem—does it have a ‘halts(s)’ function for programs that do not themselves use the ‘halts’ function, or what? Solution 2 in the OP is pretty much equivalent to the busy beaver, and it cannot be done without use of a halting oracle, because some of those programs would loop forever, making the final print statement unreachable.
Yup, that makes sense.
How does your proposed solution for Game 1 stack up against the brute-force metastrategy?
Game 2 is a bit tricky. An answer to your described strategy would be to write a large number generator f(1), which produces some R that does not depend on your opponents’ programs, create a virtual machine that runs your opponents’ programs for R steps, and, if they haven’t halted, swaps the final recursive entry on the call stack with some number (say, R, for simplicity), and iterates upwards to produce actual numbers for their function values. Then you just return the max of all three values. This strategy wins against any naive strategy, wins if your opponents are stuck in infinite loops, and, if taken by all players symmetrically, reduces the game to who has a larger R—i.e., the game simplifies to GAME1, and there is still (probably) one survivor.
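A rough sketch of that idea, with two simplifications: step_limited_run(program_text, steps) is a hypothetical interpreter that returns (halted, value) after at most the given number of steps, and instead of the call-stack surgery described above, an opponent that fails to finish within R steps is simply credited with R. big_number_generator() stands in for whatever GAME1-style construction you favour.

def my_program(opponent_a_source, opponent_b_source):
    R = big_number_generator()   # independent of the opponents' programs
    results = [R]
    for source in (opponent_a_source, opponent_b_source):
        halted, value = step_limited_run(source, R)   # at most R simulated steps
        results.append(value if halted else R)        # credit non-finishers with R
    return max(results)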
Well the brute force strategy is going to do a lot better, because it’s pretty easy to come up with a number bigger than the length of the longest program anyone has ever thought to write, and then plugging that into your brute force strategy automatically beats any specific program that anyone has ever thought to write. On the other hand, the meta-strategy isn’t actually computable (you need to be able to decide whether a program produces large outputs, which requires a halting oracle, or at least a way of coming up with large stopping times to test against). So it doesn’t really make sense to compare them.
Game3 has an entirely separate strategy available to it: Don’t worry initially about trying to win… instead code a nice simulator/etc for all the inhabitants of the simulation, one that can grow without bound and allows them to improve (and control the simulation from inside).
You might not “win”, but a version of the three players will go on to found a nice large civilization. :) (Take that, Omega.)
(In the background, have it also running a thread computing increasingly large numbers and some way to randomly decide which of some set of numbers to output, to effectively randomize which one of the three original players wins. Of course, that’s a small matter compared to the simulated world which, by hypothesis, has unbounded computational power available to it.)
Argh, you beat me to it! But frankly, how’s that not obvious? Omega is giving us unbounded computational power, and we wouldn’t use it?
Now there may be a catch. Nothing says the hyper-computer actually computes the programs, even those that do return a value. It could for instance detect the separation between your nice simulated advanced civilization and the background program, and not compute the simulation at all. You could counteract that strategy, but then the Hyper-computer may be smarter than that.
Looking down the thread, I think one or two others may have beaten me to it too. But yes, it seems at least that Omega would be handing the programmers a really nice toy and (conditional on the programmers having the skill to wield it), well..
Yes, there is that catch, hrm… Could put something into the code that makes the inhabitants occasionally work on the problem, thus really deeply intertwining the two things.
This is what’s rather unsatisfactory with the notion of subjective experience as ‘computation’ - optimizations that do not affect the output may be unsafe from the inside perspective. Even if the beings inside the simulator sometimes work on the problem, the hyper-compiler might optimize too much out. Essentially, you end up with ‘zombie’ hypercomputers that don’t have anyone inside, and ‘non-zombie’ hypercomputers inside of which beings really live.
Game1 has been done in real life (without the murder): http://djm.cc/bignum-results.txt
Also, regarding strategy 2: you can’t do that unless you already know the programs will halt. The winner of the actual contest used a similar strategy, using programs in the calculus of constructions so they are guaranteed to halt.
For Game2, if your opponent’s program (say there are only 2 players) says to return your program’s output + 1, then you can’t win. If your program ever halts, they win. If it doesn’t halt, then you both lose.
Whelp, that’s it, then. Ralph Loader has discovered the largest integer.
Wait, I get that we can’t solve the Halting Problem in general. But if we restrict ourselves to programs of less than a given length, are you sure there is no halting algorithm that can analyse them all? There certainly is one, for very small sizes. I don’t expect it would break down for larger sizes, only for arbitrary sizes.
For every n, a program exists that will solve the halting problem for programs up to length n, but the size of this program must grow with n. I don’t really see any practical way for a human to write this program other than generating an extremely large number and then testing all programs up to length n for halting within this bound, in which case you’ve already pretty much solved the original problem. If you use some proof system to try to prove that programs halt, and then take the maximum running time of only those, then you might as well use a formalism like the calculus of constructions.
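To spell out the construction being gestured at: if a sufficiently large step bound B is hard-coded into the program (it must be at least the longest finite running time of any halting program of length at most n, which is exactly why writing it down is as hard as the original problem), then halting for those programs reduces to bounded simulation. step_limited_run() is the same hypothetical step-counting interpreter as in the earlier sketch.

def halts_among_short_programs(program_text, B):
    # Decides halting for programs up to some fixed length n, *given* a
    # hard-coded B that upper-bounds the running time of every halting
    # program of that length (i.e. B >= BB(n), which is the hard part).
    halted, _ = step_limited_run(program_text, B)
    return halted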
Wait, it’s even worse. A human in a room is an algorithm, and as such cannot solve the halting problem. There’s got to be some programs we just can’t know if they will halt or not. Which means there’s got to be an n beyond which some programs of length n or less cannot be analysed by humans. That, or we have some special magic in us.
I think Game1 is much more interesting than you’re making it out to be. Scott Aaronson wrote a lovely essay about a variant of it.
Typing 9s into an Ackermann function performs much worse than iterating the Ackermann function. Now you can define an iterated Ackermann function and start feeding that function into itself… there are lots of fun things to try.
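One way to read that suggestion, reusing the ackermann() definition from the earlier strategy-1 sketch; each layer of iteration jumps well past anything reachable by just making the previous layer’s argument bigger:

def a(n):
    # Diagonalised Ackermann: a(n) = ackermann(n, n).
    return ackermann(n, n)

def iterate(f, k, n):
    # Apply f to n, k times: f(f(...f(n)...)).
    for _ in range(k):
        n = f(n)
    return n

def a_iterated(n):
    # Feed the iterated function back into itself.
    return iterate(a, n, n)

print(iterate(a_iterated, 9, 9))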
Game 1, solution 2 would not work as I understand it. Because some of the programs that you generate will not terminate, and so your program wouldn’t terminate either. Omega can detect that, but your program can’t.
Game 4 is also interesting, precisely because it actually has a non-zero probability of happening. I don’t believe Omega exists, and I doubt the existence of hypercomputers even more. So if I were to find myself in such a game situation, I would think that it is more likely to be scenario 4 than to be 3.
The person doesn’t return a number, the person returns a computer program. Or are you saying it returns the number that will be output by that program? What happens if that program doesn’t terminate? If you say that the room function returns no output (but does terminate), then running such a program requires a halting oracle. And then running a program that calls the room function requires a halting+1 oracle (oracle of halting +1?). Which means that Omega needs a halting+ω oracle!
So I guess I should have specified which model of hypercomputation Omega is using. Omega’s computer can resolve ANY infinite trawl in constant time (assume time travel and an enormous bucket of phlebotinum is involved) - including programs which generate programs. So, the players also have the power to resolve any infinite computation in constant time. Were they feeling charitable, in an average utilitarian sense, they could add a parasitic clause to their program that simply created a few million copies of themselves which would work together to implement FAI, allow the FAI to reverse-engineer humanity by talking to all three of the contestants, then creating arbitrarily large numbers of value-fulfilled people and simulating them forever. But I digress.
In short, take it as a given that anyone, on any level, has a halting oracle for arbitrary programs, subprograms, and metaprograms, and that non-returning programs are treated as producing no output.
I don’t think players actually have hypercomputational abilities otherwise they could do the following:
function self(param n) {
    if (hypercompute(self(n)) != 0) return hypercompute(self(n+1));
    return n;
}
self(1);
If the recursive function self(n) would not halt for some n then the player’s program would halt and return a value < n. But in that case it would halt for self(n) and return a value larger than n. So it must eventually halt and return The Largest Integer.
I assume you just mean that any program the players write will either halt or not and only Omega will know for sure if it doesn’t halt.
So, there are actually compelling reasons that halting oracles can’t actually exist. Quite aside from your solution, it’s straightforward to write programs with undefined behavior. Ex:
function undef():
    if halt(undef): run_forever()
    return 0
For the sake of the gedanken-experiment, can we just assume that Omega has a well-established policy of horribly killing tricky people who try to set up recursive hypercomputational functions whose halting behavior depends on their own halting behavior?
Oh, yes please, that makes it easy.
if (halt(other_player_1)) do_not_halt();
if (halt(other_player_2)) do_not_halt();
return 0;
At least in GAME3 this kills off the clever opponents who try to simulate me.
That would definitely make you one of those tricky people.
Does the problem have a time limit? Does the computer you use to write your program have any storage limits? For a non-tricky answer I think the only solution is to repeatedly apply the busy beaver function BB as many times as physically possible before you fall over from exhaustion. You have to be fairly clever about it; first write a macro expander and then run that for as long as possible recursively expanding bb(X) on X = bb(some-big-number). Submit the result when it’s physically impossible to continue expanding the macro.
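A minimal sketch of that first-paragraph idea. Here bb is just a name in the generated text, to be interpreted by Omega’s hypercomputer as the busy beaver function, and the number of passes is simply whatever you can physically manage before submitting:

def expand(expression, passes):
    # Each pass wraps the current expression in another bb(...), so the result
    # is bb(bb(...bb(seed)...)) nested `passes` deep. Run it for as long as you
    # physically can, then submit the resulting text.
    for _ in range(passes):
        expression = "bb(" + expression + ")"
    return expression

print(expand("bb(9 ** 9 ** 9)", 10 ** 6))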
Actually, this can be improved if the limiting factor is the length of program that you can submit. Assume there is a maximum of N bits that you can submit. If you have hypercomputation locally available, then just find a particular solution to bb(N) and submit it. If you do not have hypercomputation locally available, then you have to try your best at writing a solution to bb(N). The solution in the first paragraph is not bad as a starting point, and in fact I was silly and didn’t realize I was trying to approximate bb(N), and actually spent some time thinking of other strategies that beat it.
The basic idea is to realize that every solution is (probably!) going to be roughly of the form F(G()) = return BB^G()(3), where G() is the largest function you can think of that fits in the remaining bits. If the notation is confusing, I mean bb(bb(bb(…bb(3)…))) as many times as G() returns. Naively I thought that meant I could just treat G() as a sub-problem of the form G(H()) = BB^H()(3) and so on until I ran out of bits. But that can be composed into a meta-function of F(G()) = BB^(BB^H()(3))(3) = BB^^H()(3). And then BB^^^^^...^^^^H()(3) with H() up-arrows as a meta-meta-solution.
Presumably there is a meta-function for that process as well, and so on, meta-function piled upon meta-function. So that means there is probably a procedure to derive new meta-functions, and you should just run that derivation procedure until you find a sufficiently large meta-function and apply it. In this case, there is no sufficiently large meta-function, because the derivation function would run on Omega’s hypercomputer with arbitrary precision. So you’d have to find some way to interrupt the meta-function derivation using however many of the remaining N bits you have left. Oh, but that is a meta-meta-function that should stop deriving meta-functions after the biggest number you can specify, which is the subproblem we were trying to solve in the first place. So the meta-meta-meta-program can derive as many meta-meta-programs as possible with the bits left…
I think (somewhat weakly) that at some point this particular chain of metas will exhaust the number of bits in N. But perhaps there is an Omega-function that can easily handle any number of meta-extensions, and we have to limit the resulting Omega-Omega-function...
In short, I think a limited number of bits in the submission combined with the inability to locally hypercompute yields the most interesting situation by far (build your own bb(N)!). Being able to locally hypercompute and having a limited submission length is trivial. Having an unbounded submission length falls, I think, to the nested bb() approach unless you can relatively quickly write a meta-strategy and start repeatedly expanding that while you work on an omega-strategy, etc. using roughly the same strategy as the limited length submission without hypercomputing approach, until you start to mentally deteriorate significantly. At that point you better submit your best result before you make a non-halting mistake in your senility or from potential illness or injury.
Of course if Omega sticks you in a room with infinite food and toiletries and time and health then why bother submitting a program?
In this case, I have no desire to escape from the room.
That’s fair.
Actually, my secret preferred solution to GAME3 is to immediately give up, write a program that uses all of us working together for arbitrary amounts of time (possibly with periodic archival and resets to avoid senescence and insanity), to create an FAI, then plugging our minds into an infinite looping function in which the FAI makes a universe for us, populates it with agreeable people, and fulfills all of our values forever. Program never halts, return value is taken to be 0, Niger0 is instantly and painlessly killed, and Niger1 (the simulation) eventually gets to go live in paradise for eternity.
This would be a fun puzzle except for that what I’d do would greatly depend on how long we were given to write the program. Optimizing for a ten minute program would be totally different from optimizing for a program that I’ve got a week to complete. And were I given no deadline, my strategy may simply be to opt to work on it so slowly that he could never kill anybody for losing.
I escape by writing a program that simulates 3^^3 copies of myself escaping and living happily ever after (generating myself by running Solomonoff Induction on a large amount of text I type directly into the source code).
I don’t think your alternative strategies for GAME1 would work. They might be worth trying if you had a lot more than three people.
That being said, I think there’s a way you could improve it. Write a program to randomly pick a program. It then attempts to prove that it won’t halt for TREE(3) steps. If it cannot, it runs the program. If it halts in less than TREE(3) steps, it picks a new program. This way, you can get rid of most of the programs that don’t halt, and you won’t get small numbers.
You can make this strategy more sophisticated by taking several programs that don’t seem to halt but don’t halt quickly, and waiting until a certain number halt. If you pick a proportion too low, you won’t get a sufficiently large number. If you pick a proportion too high, you won’t get enough that halt.
Note that the code is being run on a halting-oracle hypercomputer, which simplifies your strategy to strategy number two.
So? I’m not allowed to actually use the oracle. It’s just used to make sure my program halts.
No. Strategy number two has an upper bound for how high it can answer, where mine does not. For example, it may be that you reach a program that does not halt before you reach one that takes TREE(3) steps to halt. In fact, I’m pretty sure you will. Second, strategy two is highly likely to fail due to reaching an obviously unhalting program. My version would not do so.
This was supposed to be an improvement on strategy two.