I haven’t forbidden the use of any library functions except for file I/O. I’m not confident this is optimal, but how is it underspecified?
I don’t know if this is possible in the programming language in question, but you probably don’t want any programs that use some kind of trickery to retroactively change what they chose in previous rounds. Or implement anthropic computing with the Scheme equivalents of fork() and exit(). (Before doing anything, call fork() to make a copy of the universe. If you don’t like the result, call exit() to destroy it.)
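In POSIX terms, the fork()/exit() trick would look something like the following Python sketch (Unix-only; the "choice" being made and the survival condition are invented for illustration — the point is just that rejected outcomes are discarded wholesale):

```python
import os
import sys

def fork_universe():
    """Fork a copy of the 'universe'; the child makes a choice and
    destroys its universe (exits nonzero) if it dislikes the outcome."""
    pid = os.fork()
    if pid == 0:
        # Child: draw a random bit via os.urandom, so forked copies
        # don't all inherit (and replay) the same RNG state.
        choice = os.urandom(1)[0] & 1
        # Keep only the universes where the choice came out as 1.
        sys.exit(0 if choice == 1 else 1)
    # Parent: wait to see whether this universe "survived".
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status) == 0

destroyed = 0
while not fork_universe():
    destroyed += 1
print(f"survived after destroying {destroyed} universe(s)")
```

From the surviving process's point of view, the preferred outcome always happened — which is exactly the retroactive trickery a contest rule would want to exclude.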
Okay, it’s not. But I’m sure there’s a way to circumvent the spirit of your rule while still abiding by the letter. What about network I/O, for instance? As in, download some code from some remote location, and execute that? Or even worse, run your code in the remote location, where you can enjoy superior computing power?
Good point. File IO was too specific.
Yes, but then you’re in a very bad position if the test is run without network access. (I.e., you’re allowed to use the network, but there’s no network adapter.)
Well, I just thought about it for 2 seconds. I tend to be a purist: if it were me, I would start from pure call-by-need λ-calculus, and limit the number of β-reductions instead of the number of seconds. Cooperation and defection would be represented by Church booleans. From there, I could extend the language (explicit bindings, fast arithmetic…) and provide a standard library, including some functions specific to this contest.
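A hedged sketch of the budgeted-β-reduction idea, in Python for illustration (the term encoding, the function names, and the free tags `C`/`D` are all made up; substitution here is capture-naive, which is safe for these closed examples, and proper call-by-need would additionally share redexes — this just does normal-order reduction under a step budget):

```python
def subst(term, name, value):
    """Substitute `value` for free occurrences of `name` (capture-naive)."""
    kind = term[0]
    if kind == 'var':
        return value if term[1] == name else term
    if kind == 'lam':
        if term[1] == name:          # inner binder shadows `name`
            return term
        return ('lam', term[1], subst(term[2], name, value))
    return ('app', subst(term[1], name, value), subst(term[2], name, value))

def reduce_once(term):
    """One normal-order step; returns (new_term, did_a_beta_step)."""
    if term[0] == 'app':
        f, a = term[1], term[2]
        if f[0] == 'lam':            # β-redex at the head
            return subst(f[2], f[1], a), True
        f2, did = reduce_once(f)
        if did:
            return ('app', f2, a), True
        a2, did = reduce_once(a)
        return ('app', f, a2), did
    if term[0] == 'lam':
        body, did = reduce_once(term[2])
        return ('lam', term[1], body), did
    return term, False

def normalize(term, budget):
    """Reduce to normal form, charging one unit per step; abort on overrun."""
    for _ in range(budget):
        term, did = reduce_once(term)
        if not did:
            return term
    raise TimeoutError("β-reduction budget exhausted")

# Church booleans as cooperate/defect verdicts.
TRUE  = ('lam', 't', ('lam', 'f', ('var', 't')))   # λt.λf.t — cooperate
FALSE = ('lam', 't', ('lam', 'f', ('var', 'f')))   # λt.λf.f — defect

def app(f, *args):
    for a in args:
        f = ('app', f, a)
    return f

# The referee applies a strategy's verdict to two tag symbols.
print(normalize(app(TRUE, ('var', 'C'), ('var', 'D')), budget=100))
```

A budget of β-steps makes the resource limit deterministic and machine-independent, which is exactly what a wall-clock limit is not.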
Or, I would start from the smallest possible subset of Scheme that can implement a meta-circular evaluator. It may be easier to examine an S-expression in Scheme than a λ-expression in λ-calculus.
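The "smallest Scheme subset" point can be illustrated with a toy S-expression evaluator (Python for illustration; the environment contents and helper names are invented, and real Scheme is of course much richer). Note how `parse` returns plain nested lists, so an opponent's program can be inspected as data before it is ever run:

```python
def parse(src):
    """Read one S-expression into nested Python lists."""
    tokens = src.replace('(', ' ( ').replace(')', ' ) ').split()
    def read(toks):
        t = toks.pop(0)
        if t == '(':
            lst = []
            while toks[0] != ')':
                lst.append(read(toks))
            toks.pop(0)              # drop the ')'
            return lst
        try:
            return int(t)
        except ValueError:
            return t                 # symbol
    return read(tokens)

def evaluate(x, env):
    if isinstance(x, str):           # symbol lookup
        return env[x]
    if isinstance(x, int):           # self-evaluating literal
        return x
    op, *args = x
    if op == 'quote':
        return args[0]
    if op == 'if':
        return evaluate(args[1] if evaluate(args[0], env) else args[2], env)
    if op == 'lambda':
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    f = evaluate(op, env)            # application
    return f(*(evaluate(a, env) for a in args))

env = {'+': lambda a, b: a + b, '=': lambda a, b: a == b}
src = "((lambda (n) (if (= n 0) (quote cooperate) (quote defect))) 0)"
print(evaluate(parse(src), env))
```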
Or, I would start from λ-calculus with de Bruijn indices, so we don’t have to worry about α-conversion. In this case, I would provide a compiler from regular λ-calculus. (I suspect, however, that this one doesn’t change a thing.)
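The "compiler from regular λ-calculus" is mostly this small conversion (a sketch, assuming the same tuple encoding of terms as above and closed input terms; index 0 refers to the innermost binder). The payoff is visible immediately: α-equivalent terms compile to identical nameless terms, so α-conversion stops being an issue by construction:

```python
def to_de_bruijn(term, binders=()):
    """Convert ('var',x) / ('lam',x,body) / ('app',f,a) to nameless form.
    Assumes the term is closed; a free variable raises ValueError."""
    kind = term[0]
    if kind == 'var':
        return ('idx', binders.index(term[1]))   # 0 = innermost binder
    if kind == 'lam':
        return ('lam', to_de_bruijn(term[2], (term[1],) + binders))
    return ('app', to_de_bruijn(term[1], binders),
                   to_de_bruijn(term[2], binders))

# λt.λf.t and λx.λy.x differ only by bound-variable names…
k1 = ('lam', 't', ('lam', 'f', ('var', 't')))
k2 = ('lam', 'x', ('lam', 'y', ('var', 'x')))
# …and compile to the same de Bruijn term.
print(to_de_bruijn(k1))
print(to_de_bruijn(k1) == to_de_bruijn(k2))
```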
Or, I would start from a Forth dialect (probably with an implicit return stack).
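For scale, a Forth-ish core really is tiny — here is a hypothetical toy evaluator (Python for illustration; word set and function name invented, no return stack or user-defined words):

```python
def forth(src):
    """Evaluate a whitespace-separated program of integers and a few
    primitive words against a data stack; returns the final stack."""
    stack = []
    words = {
        'dup':  lambda s: s.append(s[-1]),
        'drop': lambda s: s.pop(),
        'swap': lambda s: s.__setitem__(slice(-2, None), [s[-1], s[-2]]),
        '+':    lambda s: s.append(s.pop() + s.pop()),
        '*':    lambda s: s.append(s.pop() * s.pop()),
    }
    for tok in src.split():
        if tok in words:
            words[tok](stack)
        else:
            stack.append(int(tok))   # anything else must be a number
    return stack

print(forth('2 3 + dup *'))   # square of 2+3
```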
Or, I would start from Brainfuck (only half joking: the language is really dead simple, and fast interpreters already exist).
But it’s not me, and I don’t have the courage right now. If I ever implement the necessary tools (which I may: I’m studying programming languages in my spare time), then I will submit them here, and possibly run a contest myself.
I’m sorry, how is that relevant to my no-network-adapter comment? (I mean this literally, not rhetorically. I don’t see the connection. Did you mean to reply to a different comment?)
It’s the whole thread. I was not sure where to place my comment. The connection is, the network may not be the only source of “cheating”. My solutions plug them all in one fell swoop.
Oh, OK. In that case, what you are trying to achieve is (theoretically) boxing a (potential) AGI, without a gatekeeper. Which is kind of overkill in this case, and wouldn’t be solved with a choice of language anyway :)
I think it is possible to prove that a given boxing works, if it’s sufficiently simple. Choosing the language isn’t enough, but choosing the interpreter should be.
Take Brainfuck for instance: replace the dot (.), which prints a character, with two other statements: one that prints “yes” and exits, and one that prints “no” and exits. If the interpreter has no bug, a program can only:
Print “yes” and kill itself.
Print “no” and kill itself.
Do nothing until we kill it, or otherwise fail.
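A sketch of such an interpreter (Python for illustration; `y`/`n` stand in for the two hypothetical output statements, `,` and `.` are simply absent, and the step budget plays the role of "until we kill it"). Every run provably ends in one of exactly the three outcomes listed above:

```python
def run(prog, steps=10_000):
    """Brainfuck variant with no '.' or ','. 'y'/'n' emit the verdict and
    halt. Returns 'yes', 'no', or None (budget exhausted / fell off end)."""
    tape, ptr, pc = [0] * 30_000, 0, 0
    jump, stack = {}, []
    for i, c in enumerate(prog):          # precompute bracket matching
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jump[i], jump[j] = j, i
    for _ in range(steps):
        if pc >= len(prog):
            return None                   # fell off the end: no output
        c = prog[pc]
        if c == 'y':
            return 'yes'                  # the only two observable actions
        if c == 'n':
            return 'no'
        if c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '[' and tape[ptr] == 0:
            pc = jump[pc]
        elif c == ']' and tape[ptr] != 0:
            pc = jump[pc]
        pc += 1
    return None                           # out of budget: we kill it

print(run('++[->+<]>y'))   # halts with a verdict
print(run('+[]'))          # infinite loop, killed by the budget
```

Because the interpreter's only output channel is the `y`/`n` verdict, proving the safety claim reduces to auditing these few dozen lines rather than the submitted program.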
Assuming the AI doesn’t control the external world by heating the host Intel processor in a smart way, we should be able to prove that we’re otherwise safe.