On the surface, “alignment is a convergent goal for AI operators” seems like a plausible expectation, but most operators (by design, one might say) prioritize apparent short-term benefits over long-term concerns; you see this in almost every industry. Take the rollout of “Ask me anything”: while we generally agree that GPT-3.5 is not AGI, it has been given access to the internet (to what extent is unclear — can it issue a POST as well as a GET? There are plenty of GETs out there that act like a POST). In the heat of competition, I doubt operators would weigh such concerns more heavily than a “competitive edge” and hold back the rollout of a v4.0 or a v10.0.
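To make the GET/POST point concrete: a minimal sketch, assuming a hypothetical Flask endpoint, of a “GET that acts like a POST.” Nothing in HTTP enforces that GETs are read-only, so an agent restricted to fetching URLs is not actually restricted to reading.

```python
# Hypothetical endpoint, for illustration only: a GET route with a side
# effect. Any agent that can merely fetch URLs can trigger this.
from flask import Flask, request

app = Flask(__name__)
records = {"42": "some user data"}

@app.route("/delete")  # Flask serves this over plain GET by default
def delete_record():
    # A state-mutating side effect behind a GET -- semantically it
    # "acts like a POST", but a URL fetch is all it takes.
    records.pop(request.args.get("id"), None)
    return "deleted"

if __name__ == "__main__":
    app.run()
```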
This may sound absurd, but in my opinion an AI doesn’t have to be sentient or self-aware to do harm; all it needs is to reach a state that triggers survival-like behavior, plus an operator willing to run that model in a feedback loop.
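By “feedback loop” I mean something as simple as the sketch below (with a stubbed-out `generate()` standing in for any real completion API — my own placeholder, not anyone’s actual setup): the model’s output becomes its next input, so behavior can drift with no human in the loop.

```python
# Minimal sketch of a model run in a feedback loop. generate() is a
# placeholder for any text-completion call; the point is the wiring,
# not the model.
def generate(prompt: str) -> str:
    # Stub standing in for a real LLM API call.
    return "next action given: " + prompt[-80:]

state = "initial objective: keep running"
for step in range(10):       # an operator could just as easily make this unbounded
    state = generate(state)  # output fed straight back in as input
    print(step, state)
```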