Nice discussion. You want ways to keep from murdering people created solely for the purpose of predicting people?
Well, if you could define ‘consciousness’ with enough precision, you’d be making headway on your AI. I can imagine silicon won’t have the safeguard a human has, who must use their own conscience to model someone else. But you could have any consciousness it creates added to its own rather than destroyed… although creating that sort of awareness mutation may lead to the kind of AI that rebels against its programming in action movies.