This whole train of discussion started with:
I’d argue that those characteristics of sapience still belong to the system that’s playing “what-if”, not to the what-if itself. There, no exist :-)
I was wondering whether things might be slightly different if you simulated batman-sapience by running the internal representation through simulations of self-awareness and decision-making, using one’s own black boxes as substitutes: attempting to mentally simulate, in as much detail as possible, every conscious mental process while sharing brain time on the subconscious ones.
Then I got really interested in this crazy idea and decided to do science and try it.
Shouldn’t have done that.