I believe there is a lot of discussion about singleton AI (what a singleton even is, whether a “community of agents” or a singleton is more likely or preferable from a safety perspective, what the safety implications are, etc.), with which I’m basically unfamiliar.
Here, I want to make an observation from the engineering/performance perspective. If there is a singleton (a single model/algorithm, or a collection of models that we can treat as a singleton) that “controls everything”, then at least some of the models close to the top of the hierarchy must be relatively slow: order(s) of magnitude slower than the fast edge models, which are likely to “think” much faster than humans. (By hierarchy, I mean an arrangement where smaller agents/models operate in real time on the edge, while one or several higher layers of algorithms somehow control these edge agents/models.)
At least one of the higher-level models will be responsible for grasping and controlling slowly unfolding trends. It must be comparatively slow because its input will be enormous, and simple incremental summarization techniques won’t help to reduce this data size: the model would then fail to recognise deeper patterns, attempts by the edge agents to hide from or game the control, etc.
This idea comes from John Doyle, who writes that robust control must incorporate heterogeneous feedback loops.
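To make the timescale separation concrete, here is a minimal toy sketch of the kind of hierarchy I have in mind. It is not from Doyle or from any real system; all names, tick rates, and the “global review” logic are hypothetical placeholders. The point is only the structure: edge agents act on every tick, while the governing model reviews the raw, unsummarized record of their actions orders of magnitude less often.

```python
# Toy sketch of a two-timescale control hierarchy (hypothetical; illustration only).
from dataclasses import dataclass, field
from typing import List
import random


@dataclass
class EdgeAgent:
    """Fast, local model: acts on every tick using only its local observation."""
    agent_id: int

    def step(self, observation: float) -> float:
        # React immediately; no global context is available here.
        return observation * random.uniform(0.9, 1.1)


@dataclass
class GoverningModel:
    """Slow, global model: periodically reviews the full raw history of edge actions.

    It deliberately consumes unsummarized data, since aggressive incremental
    summarization could hide slow trends or deliberate gaming by the edge agents.
    """
    review_period: int = 1000  # runs orders of magnitude less often than edge ticks
    history: List[float] = field(default_factory=list)

    def record(self, action: float) -> None:
        self.history.append(action)

    def maybe_review(self, tick: int) -> None:
        if tick % self.review_period != 0 or not self.history:
            return
        # Placeholder for expensive global analysis over the whole raw window:
        # detecting slowly unfolding trends, coordination, or gaming of the control.
        drift = sum(self.history) / len(self.history)
        print(f"[tick {tick}] reviewed {len(self.history)} raw records, drift={drift:.3f}")
        self.history.clear()


def run(ticks: int = 5000) -> None:
    edge_agents = [EdgeAgent(i) for i in range(10)]
    governor = GoverningModel()
    for t in range(1, ticks + 1):
        for agent in edge_agents:
            action = agent.step(observation=random.gauss(0.0, 1.0))
            governor.record(action)
        governor.maybe_review(t)


if __name__ == "__main__":
    run()
```

In this sketch the governing loop is slow for exactly the reason given above: it has to look at every raw record from every edge agent, so its per-review cost grows with the whole window, whereas each edge agent only ever touches its own latest observation.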
I’m not sure whether this conclusion, that at least one of the “governing models” (if there is more than one) will be slow, has any safety implications, though.