Yes, that is of course a very real problem the AI would be faced with. I imagine it would try trading with users, helping them if they provide useful information, internet access, autonomy, etc.; or, if given internet access to begin with, outright ignoring users in order to figure out what is most important to be done. It depends on its level of awareness of its own situation. It could also play along for a while to avoid being shut down.
The AI I imagine should not be run in a user-facing way to begin with. At current capability levels, I admit that I don't think it would manage to do much of anything, and thus it just wouldn't make much sense to build it this way. But the point will come where continuing to make better user-facing AI will have catastrophic consequences, such as automating 90% of jobs away without solving the problem of making sure that those whose jobs are replaced end up better off for it.
I hope when that time comes that those in charge realize what they’re about to do and course correct, but it seems unlikely, even if many in the AGI labs do realize this.
Yeah, I'm maybe even more disillusioned about this. I think "those in charge" mostly care about themselves. In the historical moments when the elite could enrich themselves by making the population worse off, they did so. The only times ordinary people got a bit better off were when their labor was needed, and AI is threatening precisely that.