Proof is a really strong word and (in my opinion) inappropriate in this context. This is about to become an extremely important question and we should be careful to avoid overconfidence. I’ve personally found this comment chain to be an enlightening discussion on the complexity of this issue (but of course this is something that has been discussed endlessly elsewhere).
As a separate issue, let’s say I write down the rule set for an automaton that will slowly grow and eventually emulate every finite sequence of finite grids of black and white pixels. This is not hard to do (see the sketch below). Does it require a substrate to become conscious, or is the rule set itself conscious? What if I actually run it in a corner of the universe, slowly consuming surrounding matter to let its output grow larger and larger?
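To make “this is not hard to do” concrete, here’s a minimal Python sketch of the kind of enumeration I have in mind. Everything in it is illustrative: the encoding of a sequence of grids as a binary string is left abstract, and any injective encoding works.

```python
from itertools import count, product

def all_bitstrings():
    """Yield every finite binary string, shortest first.

    Fix any injective encoding of "a finite sequence of finite
    black-and-white pixel grids" as a binary string. Since this
    generator eventually yields every binary string, it eventually
    yields an encoding of every such sequence of grids.
    """
    for n in count(0):                        # string lengths 0, 1, 2, ...
        for bits in product("01", repeat=n):  # all strings of length n
            yield "".join(bits)

# First few outputs: "", "0", "1", "00", "01", "10", "11", "000", ...
```

Run forever, this enumerates (an encoding of) every possible “movie” of black-and-white grids, which is the point: nothing privileges any single output, yet every configuration eventually appears.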
there are certainly a lot of specific open questions, such as: what precisely about the firing patterns is necessary for the emergence of sentience?
note that in the setting of the second paragraph I wrote, every “firing pattern” will eventually emerge. You may have misunderstood my comment as accepting the basic premise of your post and quibbling about the details, but I am skeptical of even the fundamental idea.
oh I understood you weren’t agreeing. I was just responding that I don’t know what aspects of ‘firing patterns’ specifically cause sentience to emerge, or how it would or wouldn’t apply to your alternate scenarios.
I see. There’s a really nice post here (maybe several) that touches on that idea in a manner similar to the Ship of Theseus, but I can’t find it. The basic idea: if we take for granted that mind uploads are fully conscious, but then start updating the architecture to optimize for various things, is there a point where we are no longer “sentient”?
yeah, it’s all very weird stuff. Also, what is required for continuity? As in, staying the same you, and not just someone who has all your memories and thinks they’re still you.