For the first point, there’s also the question of whether ‘slightly superhuman’ intelligences would actually fit any of our intuitions about ASI or not. There’s a bit of an assumption there that we jump headfirst into recursive self-improvement at some point. But if that has diminishing returns, we happen to hit a plateau a bit over human level, and the system still has notable costs to train, host, and run, then the impact could be limited to something not much different from giving a random set of especially intelligent expert humans the specific powers of the AI system. Additionally, if we happen to set regulations on computation somewhere that allows training of slightly superhuman AIs and not past it …
Those are definitely systems that are easier to negotiate with, or even to consider as agents in a negotiation. There’s also a desire specifically not to build them, which might lead to systems with an architecture that isn’t like that, but that still implement sentience in some manner. And there’s the potential complication of the multiple parts and specific applications a tool-oriented system is likely to be embedded in: it’d be very odd if we decided the language-processing center of our own brain was independently sentient/sapient, separate from the rest of it, and that we should resent its exploitation.
I do think the drive, the ‘just a thing it does’ we’re pointing at with ‘what the model just does’, is distinct from goals as they’re traditionally imagined, and indeed I was picturing something more instinctual and automatic than deliberate. In a general sense, though, there is an objective that’s being optimized for (predicting the data, whatever that is, generally without losing too much predictive power on other data the trainer doesn’t want to lose prediction on).
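For concreteness, here’s a minimal sketch (PyTorch, with a hypothetical model interface) of what that objective might look like: next-token cross-entropy on the target data, plus a KL penalty against a reference model as one way to cash out ‘without losing too much predictive power on other data’. The specific loss form is my assumption, not something settled:

```python
import torch
import torch.nn.functional as F

def training_loss(model, ref_model, target_batch, retain_batch, beta=0.1):
    # Standard next-token prediction loss on the data we want to fit.
    logits = model(target_batch[:, :-1])  # (batch, time, vocab), hypothetical
    predict_loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        target_batch[:, 1:].reshape(-1),
    )
    # Penalty: don't drift too far from a reference model's predictions
    # on data the trainer still cares about. One possible way to encode
    # "without losing too much predictive power elsewhere".
    with torch.no_grad():
        ref_logits = ref_model(retain_batch[:, :-1])
    retain_logits = model(retain_batch[:, :-1])
    drift = F.kl_div(
        F.log_softmax(retain_logits, dim=-1).reshape(-1, retain_logits.size(-1)),
        F.softmax(ref_logits, dim=-1).reshape(-1, ref_logits.size(-1)),
        reduction="batchmean",
    )
    return predict_loss + beta * drift
```

Note that this objective is a property of the training setup, not of the model; that distinction comes up again below.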
> And there’s the potential complication of the multiple parts and specific applications a tool-oriented system is likely to be embedded in: it’d be very odd if we decided the language-processing center of our own brain was independently sentient/sapient, separate from the rest of it, and that we should resent its exploitation.
Yeah. I think a sentient being built on a purely more capable GPT with no other changes would absolutely have to include scaffolding for e.g. long-term memory, and then as you say it’s difficult to draw boundaries of identity. Although my guess is that over time, more of that scaffolding will be brought into the main system; e.g. just allowing weight updates at inference time would on its own (potentially) give these systems long-term memory and something much more similar to a persistent identity than current systems have.
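To make that concrete, here’s a speculative sketch (PyTorch, hypothetical names throughout) of the simplest version of inference-time weight updates: after each interaction, take one small gradient step on the observed sequence, so the experience persists in the weights rather than only in a context window. A real system would need much more care (catastrophic forgetting, stability, what it’s safe to learn), so treat this as an illustration of the idea, not a design:

```python
import torch
import torch.nn.functional as F

def respond_and_remember(model, optimizer, tokens):
    # Ordinary inference: the forward pass itself is unchanged.
    with torch.no_grad():
        logits = model(tokens)

    # Inference-time update: one gradient step on the observed sequence,
    # folding it into the weights as a crude persistent memory.
    optimizer.zero_grad()
    train_logits = model(tokens[:, :-1])
    loss = F.cross_entropy(
        train_logits.reshape(-1, train_logits.size(-1)),
        tokens[:, 1:].reshape(-1),
    )
    loss.backward()
    optimizer.step()
    return logits
```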
> In a general sense, though, there is an objective that’s being optimized for
My quibble is that the trainers are optimizing for an objective at training time, but the model isn’t optimizing for anything, at training or inference time. I feel we’re very lucky that this is the path that has worked best so far, because a comparably intelligent model that was optimizing for goals at runtime would be much more likely to be dangerous.
One maybe-useful way to point at that is: the model won’t try to steer toward outcomes that would let it be more successful at predicting text.
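As a sketch of that distinction (again PyTorch, hypothetical interface): sampling from the model is just repeated evaluation of a fixed function, and nothing in the loop scores outcomes or adjusts anything toward better prediction:

```python
import torch

@torch.no_grad()
def generate(model, tokens, steps=50):
    # At inference time the model is a fixed function from tokens to a
    # next-token distribution. There is no loss, no search, no outcome
    # being steered toward; we just evaluate and sample repeatedly.
    for _ in range(steps):
        logits = model(tokens)                       # fixed forward pass
        probs = torch.softmax(logits[:, -1], dim=-1)
        nxt = torch.multinomial(probs, num_samples=1)
        tokens = torch.cat([tokens, nxt], dim=1)     # no objective evaluated,
                                                     # no weights updated
    return tokens
```

The optimization all lives in the training loop (like the `training_loss` sketch above), where an optimizer adjusts the weights; the model itself never gets to act on the objective.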