Oracle AI and the (non-)differences between tool AIs and agents
I am working on instrumental AGI, and there might be value in collaborating. Specifically, I am interested in practical boxing mechanisms informed by real-world AGI designs and by the fundamental limitations of finite computational substrates.
My informed prior belief is that boxed AI is not a fundamentally hard problem: it maps fully onto computer science and security problems that have already been solved in other contexts. Further, every argument I have seen against boxing suffers from either invalid premises or flawed reasoning. Still, there’s much to be done in validating (or disproving) my prior assumptions. Since I am actively working to create AGI, it would be nice to get some answers before we need them. Collaboration with a philosopher on some of the more fundamental epistemic issues might be a good idea.
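As a toy illustration of what "already solved in other contexts" looks like, the sketch below boxes an untrusted process using ordinary Unix kernel resource limits. The limits and the child command are placeholders chosen for the example, and a real boxing mechanism would need far more than this (no network, mediated I/O, a vetted output channel); the point is only that the underlying primitives are standard, well-studied systems code, and that on a finite substrate the budget is enforced by the kernel, not negotiated with the boxed program.

```python
# Minimal sketch (Unix-only, illustrative values): box a child process with
# hard CPU and memory ceilings using standard kernel rlimits.
import resource
import subprocess

CPU_SECONDS = 5              # hard cap on CPU time for the boxed process
MEMORY_BYTES = 256 * 2**20   # hard cap on address space (256 MiB)

def _apply_limits():
    """Runs in the child just before exec: enforce CPU and memory ceilings."""
    resource.setrlimit(resource.RLIMIT_CPU, (CPU_SECONDS, CPU_SECONDS))
    resource.setrlimit(resource.RLIMIT_AS, (MEMORY_BYTES, MEMORY_BYTES))

def run_boxed(cmd):
    """Run `cmd` with hard resource ceilings and no inherited stdin.

    The kernel, not the boxed program, decides when the budget is exhausted.
    """
    return subprocess.run(
        cmd,
        preexec_fn=_apply_limits,   # apply rlimits in the child before exec
        stdin=subprocess.DEVNULL,
        capture_output=True,
        timeout=2 * CPU_SECONDS,    # wall-clock backstop on top of the CPU cap
    )

if __name__ == "__main__":
    result = run_boxed(["python3", "-c", "print(sum(range(10**6)))"])
    print(result.returncode, result.stdout)
```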