Possibly somewhat off-topic: my hunch is that the actual motivation of the initial AGI will be random, rather than orthogonal to anything.
Consider this: how often has a difficult task been accomplished right the first time, even with all the careful preparation beforehand? For example, how many rockets blew up, killing people in the process, before the first successful lift-off? People were careless but lucky with the first nuclear reactor, though note: “Fermi had convinced Arthur Compton that his calculations were reliable enough to rule out a runaway chain reaction or an explosion, but, as the official historians of the Atomic Energy Commission later noted, the ‘gamble’ remained in conducting ‘a possibly catastrophic experiment in one of the most densely populated areas of the nation’!”
I doubt that one can count on luck in AGI development, but I would bet on unintentional carelessness (and other manifestations of Murphy’s law).
The bottom line (nothing new here): no matter how much you research things beforehand, the first AGI will have bugs, with unpredictable consequences for its actual motivation. If we are lucky, there will be a chance to fix the bugs. Whether it is even possible to constrain the severity of such bugs is far too early to tell, given how little is currently known about the topic.