I think the reason organizations haven’t gone ‘FOOM’ is the lack of a successful “goal-focused self-improvement method.” There is no known way to build an organization that does not suffer from goal drift and progressive degradation of performance. Humans have not even managed to understand how to build “goals” into an organization’s structure except in the crudest manner, which is nowhere near flexible enough to survive the assaults of modern environmental change. And I don’t think the sparse inter-linkages of real organizations can store or process such information without outsourcing a significant part to human-scale processing, so an organization couldn’t even have stumbled upon such a method by chance.
In theory there is no reason why a computational device built out of humans can’t go FOOM. In practice, a system that runs on humans is extremely noisy and slow to change (‘education’ is slow), and it faces countless experimental constraints with no robust engineering solutions, which makes it simply harder. Management isn’t even a full science at this point. The selection power of existing theory still leaves open a vast space of unfocused exploration, and only a tiny and unknown subset of that space can go FOOM. Imagine the space of all valid training manuals, organizational structures, physical aids, recruitment policies, and so on, and how little we know about finding the FOOMing one.
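To make the size of that search space concrete, here is a toy back-of-envelope sketch. All of the numbers in it (the count of design dimensions, the bits of selection power attributed to management theory, the count of FOOM-capable designs) are illustrative assumptions I am inventing for the example, not measurements:

```python
# Toy model (all figures are illustrative assumptions): an organizational
# design is treated as a string of independent binary choices -- training
# manuals, reporting structure, recruitment policy, incentive schemes, etc.
DESIGN_DIMENSIONS = 100        # assumed number of binary design choices
THEORY_SELECTION_BITS = 20     # assumed bits of selection power supplied
                               # by existing management theory

total_designs = 2 ** DESIGN_DIMENSIONS
remaining_designs = 2 ** (DESIGN_DIMENSIONS - THEORY_SELECTION_BITS)

print(f"total design space:    2^{DESIGN_DIMENSIONS} = {total_designs:.2e}")
print(f"after applying theory: 2^{DESIGN_DIMENSIONS - THEORY_SELECTION_BITS}"
      f" = {remaining_designs:.2e}")

# Even granting a trillion FOOM-capable designs among the survivors,
# blind exploration is overwhelmingly unlikely to hit one.
foom_designs = 1e12            # assumed (generous) count of FOOMing designs
print(f"P(random surviving design FOOMs) = {foom_designs / remaining_designs:.1e}")
```

Even with these generous assumptions, theory prunes the space from roughly 10^30 designs to roughly 10^24, and the chance of stumbling onto a FOOMing design at random stays around 10^-12 per trial, with each “trial” costing years of human-speed organizational change.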
AGI running on electronic computers is a bigger threat than other recursive-intelligence-improvement problems because the engineering barriers are lower and the rate of progress is higher. Most other recursive self-improvement strategies proceed at “human” time scales and do not leave humans completely helpless.