What are the questions you are trying to answer about the first AGIs?
How will they behave?
What will they be capable of?
What is the nature of the property we call intelligence?
I find the second question much more interesting, and it is the one with more data to be acquired. Relevant data would include things like modern computer hardware and what we have managed to achieve with it (and the nature and structure of those achievements).
I’ve argued before that we should understand the process of science (how much of it is analysis vs. data processing vs. real-world tests) in order to understand how quickly an AGI could do science, which in turn affects the types of threats we should expect. We should also look at the process of programming through a similar lens, to see how much a human-level programmer could be improved upon. There is a lot of non-human-bounded activity in industrial-scale programming; much of it is running automated test suites. Will AIs need to run similar suites, or can they verify their work in a more efficient way?
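To make that breakdown concrete, here is a toy Amdahl-style bound (a sketch of my own, with made-up fractions, not an argument from the thread): if real-world tests make up a fixed fraction of scientific work that cannot be accelerated, that fraction caps the overall speedup no matter how fast analysis and data processing become.

```python
# Toy Amdahl-style model (illustrative only; the fractions are made up).
# If only the non-real-world portion (analysis, data processing) can be
# accelerated arbitrarily, the real-world fraction bounds total speedup.

def max_speedup(real_world_fraction: float) -> float:
    """Upper bound on overall speedup when the real-world portion
    of the work cannot be sped up at all."""
    assert 0.0 < real_world_fraction <= 1.0
    return 1.0 / real_world_fraction

for f in (0.5, 0.2, 0.05):
    print(f"real-world tests = {f:.0%} of the work -> "
          f"at most {max_speedup(f):.0f}x faster science")
```

Even a modest real-world-test fraction puts a hard ceiling on how quickly science can be done, which is why the composition of the process matters.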
Information from sociology and history should inform our priors on which concrete strategies may work. But that may be taken as a given, and it is less interesting.
All of these, plus general orientation around the problem and what concrete things we should do.