Sorry if I’m just misreading, but in Compute Trends Across Three Eras of Machine Learning it was shown that the doubling time (at that time) had slowed to every ~10 months for the large-scale projects. In this projection you go with a 6-month doubling time for x number of years, then slowing to every 20 months. My questions are:
What would the results be like if we assumed things had already slowed to 10 months?
Is 6 months likely to be a better description of compute doubling times over the next few years than 10 months? If so, why?
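For concreteness, here is a minimal sketch (with illustrative horizons I picked myself, not figures taken from the projection) of how much the cumulative growth factor differs between a 6-month and a 10-month doubling time:

```python
# Illustrative comparison (my own numbers, not from the projection):
# total multiplicative growth in training compute under two doubling times.
def growth_factor(years: float, doubling_time_months: float) -> float:
    """Multiplicative increase in compute after `years` of steady doubling."""
    return 2 ** (years * 12 / doubling_time_months)

for years in (2, 4, 6):
    fast = growth_factor(years, 6)   # 6-month doubling assumption
    slow = growth_factor(years, 10)  # ~10-month doubling from the paper
    print(f"{years} yr: 6-mo -> {fast:,.0f}x, 10-mo -> {slow:,.0f}x "
          f"(ratio {fast / slow:,.1f}x)")
```

By four years the two assumptions already differ by roughly an order of magnitude in total compute, which is why I'm curious how sensitive the headline results are to this choice.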