Well, I just thought about it for 2 seconds. I tend to be a purist: if it were me, I would start from pure call-by-need λ-calculus, and limit the number of β-reductions instead of the number of seconds. Cooperation and defection would be represented by Church booleans. From there, I could extend the language (explicit bindings, fast arithmetic…) and provide a standard library, including some functions specific to this contest.
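To make the Church-boolean idea concrete, here is a sketch in Python (a hypothetical encoding for illustration only; `TRUE`/`FALSE`/`AND`/`NOT` are the standard textbook definitions, not anything specific to this contest):

```python
# Church booleans: a boolean is a two-argument selector.
# TRUE picks its first argument ("cooperate"), FALSE its second ("defect").
TRUE  = lambda t: lambda f: t
FALSE = lambda t: lambda f: f

# Logic is just application: AND(a)(b) asks a to choose between b and a.
AND = lambda a: lambda b: a(b)(a)
NOT = lambda a: lambda t: lambda f: a(f)(t)

def to_bool(church):
    """Decode a Church boolean into a host boolean."""
    return church(True)(False)

assert to_bool(AND(TRUE)(TRUE))  is True
assert to_bool(AND(TRUE)(FALSE)) is False
assert to_bool(NOT(TRUE))        is False
```

In the pure λ-calculus version, a submitted strategy would simply β-reduce to one of these two terms within the allotted reduction budget.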
Or, I would start from the smallest possible subset of Scheme that can implement a meta-circular evaluator. It may be easier to examine an S-expression in Scheme than a λ-expression in λ-calculus.
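The kind of minimal Scheme subset I have in mind can itself be sketched as an evaluator over S-expressions (here represented as nested Python lists and strings — a toy illustration, not a full meta-circular evaluator):

```python
# A tiny evaluator for a Scheme-like core: variables, literals,
# quote, if, lambda, and application. S-expressions are nested lists.
def evaluate(expr, env):
    if isinstance(expr, str):        # variable reference
        return env[expr]
    if not isinstance(expr, list):   # self-evaluating literal (number)
        return expr
    head = expr[0]
    if head == "quote":              # ["quote", datum] -> datum, unevaluated
        return expr[1]
    if head == "if":                 # ["if", test, then, else]
        return evaluate(expr[2] if evaluate(expr[1], env) else expr[3], env)
    if head == "lambda":             # ["lambda", [params...], body]
        params, body = expr[1], expr[2]
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    fn, *args = [evaluate(e, env) for e in expr]   # application
    return fn(*args)

env = {"+": lambda a, b: a + b}
assert evaluate(["+", 1, 2], env) == 3
assert evaluate([["lambda", ["x"], ["+", "x", 1]], 41], env) == 42
```

The point of the example: a strategy submitted as an S-expression is just data, so an opponent's program can be inspected with ordinary list operations before (or instead of) running it.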
Or, I would start from λ-calculus with de Bruijn indices, so we don’t have to worry about α-conversion. In this case, I would provide a compiler from regular λ-calculus. (I suspect, however, that this one doesn’t change a thing.)
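The compiler from named terms to de Bruijn indices is short enough to sketch (a hypothetical term representation: `("var", name)`, `("lam", name, body)`, `("app", f, x)`):

```python
# Convert a named λ-term to de Bruijn form: each variable becomes the
# number of binders crossed between its occurrence and its binder.
def to_debruijn(term, env=()):
    tag = term[0]
    if tag == "var":
        return ("var", env.index(term[1]))       # 0 = innermost binder
    if tag == "lam":                             # push binder, recurse
        return ("lam", to_debruijn(term[2], (term[1],) + env))
    return ("app", to_debruijn(term[1], env), to_debruijn(term[2], env))

# λx. λy. x  becomes  λ λ 1  (x is one binder away from its use site)
k = ("lam", "x", ("lam", "y", ("var", "x")))
assert to_debruijn(k) == ("lam", ("lam", ("var", 1)))
```

Since two α-equivalent terms compile to the same de Bruijn term, equality of strategies becomes plain structural equality — which is why I suspect it changes nothing semantically, only ergonomically.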
Or, I would start from a Forth dialect (probably with an implicit return stack).
Or, I would start from BrainFuck (only half joking: the language is really dead simple, and fast interpreters already exist).
But it’s not me, and I don’t have the courage right now. If I ever implement the necessary tools (which I may: I’m studying programming languages in my spare time), then I will submit them here, and possibly run a contest myself.
I’m sorry, how is that relevant to my no-network-adapter comment? (I mean this literally, not rhetorically. I don’t see the connection. Did you mean to reply to a different comment?)
It’s the whole thread; I was not sure where to place my comment. The connection is that the network may not be the only source of “cheating”. My solutions plug them all in one fell swoop.
Oh, OK. In that case, what you are trying to achieve is (theoretically) boxing a (potential) AGI, without a gatekeeper. Which is kind of overkill in this case, and wouldn’t be solved with a choice of language anyway :)
I think it is possible to prove that a given boxing works, if it’s sufficiently simple. Choosing the language isn’t enough, but choosing the interpreter should be.
Take Brainfuck for instance: replace the dot (“.”), which prints a character, by two other statements: one that prints “yes” and exits, and one that prints “no” and exits. If the interpreter has no bug, a program can only:

- Print “yes” and kill itself.
- Print “no” and kill itself.
- Do nothing until we kill it, or otherwise fail.
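Such an interpreter fits in a page, which is what makes it plausibly provable. Here is a sketch in Python, assuming hypothetical “y”/“n” statements in place of the dot, and a step budget in place of wall-clock time (the statement names and limits are my invention, not a spec):

```python
# Modified Brainfuck: no '.' or ',' (no I/O channel); 'y' answers "yes"
# and halts, 'n' answers "no" and halts. Execution is fuel-limited.
def run(prog, steps=100_000):
    tape, ptr, pc = [0] * 30_000, 0, 0
    jump, stack = {}, []                 # precompute matching brackets
    for i, c in enumerate(prog):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jump[i], jump[j] = j, i
    while pc < len(prog) and steps > 0:
        c = prog[pc]
        if   c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '[' and tape[ptr] == 0: pc = jump[pc]
        elif c == ']' and tape[ptr] != 0: pc = jump[pc]
        elif c == 'y': return "yes"      # cooperate and halt
        elif c == 'n': return "no"       # defect and halt
        pc += 1
        steps -= 1
    return None                          # out of fuel: count as a failure

assert run("y") == "yes"
assert run("+n") == "no"
assert run("+[]") is None                # infinite loop exhausts its fuel
```

The only observable outputs are the three outcomes listed above — there is no other output statement for a program to reach.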
Assuming the AI doesn’t control the external world by heating the host Intel processor in a smart way, we should be able to prove that we’re otherwise safe.