I hesitate because this isn’t exactly ‘science’, but I think ‘agi-hater’ raises a good point. Humans are good at general intelligence; machines are good at specific intelligence (intelligence here meaning proficiency at tasks). Machines are really bad at existing in ‘meatspace’, but they can write essays now.
As for an alternate ontology to hardcore materialism, I would say any ontology that includes the immaterial. I’m not necessarily trying to summon magic or mysticism or even spirituality here; I think anything abstract easily counts as “immaterial” too. AI has always been shaped toward some particular abstract ideal: predicting words, making pictures, transcribing audio. How well we can shape an AI to predict words is startling, and if you suspend disbelief, things get really weird. In a way, we already understand AI, the same way we “understand” humans. A neural network is really simple, and so is a transformer; it’s the upshot of that simplicity, the capability, that deserves more attention. I’m not sure these ML constructs will ever be satisfyingly explained. It’s like building a model of the Empire State Building out of Legos, snapping a brick off the top, and scrutinizing it like, “Man, how did I make a skyscraper out of this tiny brick??”
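To make the “really simple” claim concrete, here’s a minimal sketch of the core computation of a neural network in NumPy. Everything in it (the layer sizes, the ReLU activation, the random weights) is arbitrary and just for illustration, not any particular model; a transformer is essentially this same matrix math with attention stacked on top:

```python
# Minimal sketch: one-hidden-layer neural network forward pass.
# All dimensions and weights here are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, b1, W2, b2):
    """One hidden layer with ReLU, then a linear readout."""
    h = np.maximum(0, x @ W1 + b1)  # hidden activations
    return h @ W2 + b2              # output values

# Tiny, arbitrary sizes: 4 inputs -> 8 hidden units -> 3 outputs.
W1 = rng.normal(size=(4, 8)) * 0.1
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 3)) * 0.1
b2 = np.zeros(3)

x = rng.normal(size=(1, 4))        # one example input
print(forward(x, W1, b1, W2, b2))  # shape (1, 3)
```

The whole mechanism fits in a dozen lines; the mystery lives entirely in what billions of trained weights doing exactly this end up accomplishing, which is the Lego-brick problem again.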