A hypothetical GAI remains hypothetical. We've been working on self-managing systems for 25+ years and still have very little idea how to build them. IF we can solve hallucinations (whose output patterns have recently been compared to those of patients at various stages of dementia), and IF research can give us new self-* technologies capable of configuring, healing, protecting, and optimizing both logical (software) and physical infrastructure (because what does a GAI do about a cable cut?), then we might see a rapid devaluation of human labor. Also, everyone is speaking as if GAI is imminent. It's not. LLMs are impressive as a nascent technology, primarily because they interact with us in natural language, which is a necessary but wholly insufficient precondition for strong AI. But we are very far from something that can replace all, or even most, specialties of human labor.