I don’t think technological deployment is likely to take that long for AIs. With a physical device like a car or fridge, it takes time for people to set up the factories and manufacture the devices. An AI can be sent across the internet in moments. I don’t know how long it takes Google to go from, say, an algorithm that detects streets in satellite images to the results showing up in Google Maps, but it’s nothing like the decades it took those physical technologies to roll out.
The slow roll-out scenario looks like this: AGI is developed using a technique that fundamentally relies on imitating humans and requires lots of training data. There isn’t nearly enough data from humans who are AI experts to make the AI an AI expert, so the AI is about as good at AI research as the median human, or maybe the 80th-percentile human, i.e. no good at all. The AI design fundamentally requires custom hardware to run at reasonable speeds. Add in some political squabbling and it could take a fair few years before wide use, although there would still be a huge economic incentive to create it.
The fast scenario is the rapidly self-improving superintelligence: we have oodles of compute by the time we crack the algorithms, all the self-improvement happens very fast in software, and then the AI takes over the world. (I question whether “a few weeks” is the fastest possible timescale for this.)
(For that matter, the curves on the right of the graph look steeper: it takes less time for an invention to be rolled out nowadays.)
For your second point, you can name biases that might make people underestimate timelines, and I can name biases that might make people overestimate timelines (e.g. failure to consider techniques not known to you), and it all turns into a bias-naming competition, which is hardly truth-tracking at all.
As for regulation, I think it’s what people are doing in R&D labs, not what is rolled out, that matters, and that is harder to regulate. I also explicitly don’t expect an AI Chernobyl. I don’t strongly predict there won’t be an AI Chernobyl either; I feel that if the relevant parties act with the barest modicum of competence, there won’t be one. And the people being massively stupid will carry on being massively stupid after any AI Chernobyl.
I don’t think technological deployment is likely to take that long for AIs. With a physical device like a car or fridge, it takes time for people to set up the factories and manufacture the devices. An AI can be sent across the internet in moments.
Most economically important uses of AGI (self-driving cars, replacing fast-food workers) require physical infrastructure. There are some areas (e.g. high-frequency stock trading and phone voice assistants) that do not, but those are largely automated already, so there won’t be a sudden boost when AI “crosses the threshold” of AGI.
Surely the set of jobs an AGI could do out of the box is wider than that. Let’s compare it to the set of jobs that can be done from home over the internet: most jobs that can be done over the internet can be done by the AI, and judging by how much working from home has been a thing recently, that is a significant percentage of the economy. Add a whole load of other jobs that only make sense when the cost of labour is really low and/or the labour is really fast. And I would expect the amount to increase with robotisation. (If you take an existing robot and put an AGI on it, suddenly it can do a lot more useful stuff.)
In 2020 the average number of days per month that Americans teleworked more than doubled, from 2.4 to 5.8. If we assume that 100% of that work could be done by AGI and that all of those working days were replaced in a single year, that would be a roughly 29% boost to productivity, just barely above the 25%/year growth definition of TAI.
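A quick sanity check of that 29% figure, under the assumption (mine, not stated above) that a typical month has about 20 working days and that all 5.8 teleworked days per month get replaced:

```python
# Back-of-the-envelope check of the 29% figure above.
# Assumptions (not from the comment): ~20 working days in a typical month,
# and all 5.8 teleworked days per month are replaced by AGI.
teleworked_days_per_month = 5.8
working_days_per_month = 20.0  # assumed typical figure

share_of_work_replaced = teleworked_days_per_month / working_days_per_month
print(f"Share of working days replaced: {share_of_work_replaced:.0%}")  # ~29%
```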
It is unlikely that 100% of such work can be automated (for example, at-home learning makes up a large fraction of telework). And much of what can be automated will be automated long before we reach AGI (travel agents, real estate, …).
I’m not sure how putting AGI on existing robots makes them automatically more useful. Neither my Roomba nor car-manufacturing robots (to pick two extremes) would be greatly improved by additional intelligence. Undoubtedly self-driving cars would be much easier (perhaps trivial) to implement given AGI, but self-driving cars are almost certainly a less-than-AGI-hard task. Did you have particular examples in mind of existing robots that need or benefit from AGI specifically?