To elaborate further on 4: your example of the string “1” being a conscious agent because you can “unpack” it into an agent really feels like it shouldn’t count: you’re just throwing away the “1” and replaying a separate recording of something that was conscious. This seems like about as much of a non-sequitur as “I am next to this pen, so this pen is conscious”.
We could, however, make it more interesting by making the computation depend “crucially” on the input. But what counts as crucial?
Suppose I have a program that turns noise into a conscious agent (much like generative models can turn a noise vector into a face, say). If we now seed this with a waterfall, is the waterfall now a part of the computation, enough to be granted some sentience/moral patienthood? I think the usual answer is “all the non-trivial work is being done by the program, not the random seed”, as Scott Aaronson seems to say here. (He also makes the interesting claim of “has to participate fully in the arrow of time to be conscious”, which would disqualify caching and replaying.)
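To make the “seeding” case concrete, here’s a minimal Python sketch (the generator and its outputs are obviously toy stand-ins for a real generative model, not anything from the original discussion): all the structure comes from the program, and literally any bytes, waterfall samples included, are an equally acceptable seed.

```python
import hashlib

def generate_from_seed(seed_bytes: bytes) -> str:
    """Toy stand-in for a generative model: all the structure lives in this
    program; the seed only selects *which* structured output you get."""
    # Hashing means literally any bytes -- waterfall samples, coin flips,
    # the string "1" -- work equally well as a seed.
    digest = hashlib.sha256(seed_bytes).digest()
    eyes = ["o o", "^ ^", "- -"][digest[0] % 3]
    mouth = ["___", "vvv", "~~~"][digest[1] % 3]
    return f"[{eyes}]\n[{mouth}]"

# Swapping the waterfall for any other noise source changes nothing important.
print(generate_from_seed(b"samples from a waterfall"))
print(generate_from_seed(b"1"))
```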
But this can be made a little more confusing, because it’s hard to tell which bit is non-trivial from the outside: suppose I save and encrypt the consciousness-generating program. The result looks like random noise from the outside and will pass all randomness tests. Now another program, holding the stored key, decrypts it and runs it. From the outside, you might disregard the random-seed-looking thingy and instead try to analyze the decryption program, thinking that’s where the magic is.
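For contrast, a sketch of the “decrypting” case, using a toy XOR cipher as a stand-in for real encryption (an assumption of the sketch; the choice of cipher isn’t the point): here the stored blob is the noise-looking thing, yet it carries all the structure, while the decrypt-and-run program is completely generic.

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy XOR "cipher" -- a stand-in for real encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The interesting program, stored as source text (a placeholder, of course).
program_source = "print('I am the computation doing all the non-trivial work')"

key = b"stored-key"
blob = xor_bytes(program_source.encode(), key)  # from the outside: noise-like bytes

def decrypt_and_run(blob: bytes, key: bytes) -> None:
    # Completely generic: all the structure is in the blob, none of it is here.
    exec(xor_bytes(blob, key).decode())

decrypt_and_run(blob, key)
```

The asymmetry with the seeding sketch is the point: there, changing the noise barely matters; here, changing one byte of the blob destroys the computation.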
I’d love to hear about ideas to pin down the difference between Seeding and Decrypting in general, for arbitrary interpretations. It seems within reach, and like a good first step, since the two lie on roughly opposite ends of a spectrum of “cruciality” when the system breaks down into two or more modules.