Imagine you have two points, A and B. You’re at A, and you can see B in the distance. How long will it take you to get to B?
Well, you’re a pretty smart fellow. You measure the distance, you calculate your rate of progress, maybe you’re extra clever and throw in a factor of safety to account for any irregularities during the trip. And you figure that you’ll get to point B in a year or so.
Then you start walking.
And you run into a wall.
Turns out, there’s a maze in between you and point B. Huh, you think. Well that’s ok, I put a factor of safety into my calculations, so I should be fine. You pick a direction, and you keep walking.
You run into more walls.
You start to panic. You figured this would only take you a year, but you keep running into new walls! At one point, you even realize that the path you’ve been on is a dead end — it physically can’t take you from point A to point B, and all of the time you’ve spent on your current path has been wasted, forcing you to backtrack to the start.
Fundamentally, this is what I see happening across various industries: brain scanning, self-driving cars, clean energy, interstellar travel, AI development. The list goes on.
Laymen see a point B in the distance, where we have self-driving cars running on green energy, powered by AGIs. They see where we are now. They figure they can estimate how long it'll take to get to that point B, slap on a factor of safety, and make a prediction.
But the real world of problem solving is akin to a maze. And there's no way to know the shape or complexity of that maze until you actually start along the path. You might think you can predict the complexity of the maze you'll encounter, but you can't.
On the other hand, sometimes people end up walking right through what the established experts thought to be a wall. The rise of deep learning from a stagnant backwater in 2010 to a dominant paradigm today (crushing the old benchmarks in basically all of the most well-studied fields) is one such case.
In any particular case, it's best to expect progress to take much, much longer than the Inside View indicates. But at the same time, there's some part of the research world where a major, rapid shift is about to happen.