I’m mostly not looking for virtue points, I’m looking for: (i) if your view is right then I get some kind of indication of that so that I can take it more seriously, (ii) if your view is wrong then you get some feedback to help snap you out of it.
I don’t think it’s surprising if a GPT-3-sized model can do relatively good translation. If we’re talking about this prediction, and if you aren’t happy just predicting numbers for overall value added from machine translation, I’d kind of like to get some concrete examples of mediocre translations or concrete problems with existing NMT that you are predicting can be improved.
It seems like Eliezer is mostly just more uncertain about the near future than you are, so it doesn’t seem like you’ll be able to find (ii) by looking at predictions for the near future.
It seems to me like Eliezer rejects a lot of important heuristics like “things change slowly” and “most innovations aren’t big deals” and so on. One reason he may do that is because he literally doesn’t know how to operate those heuristics, and so when he applies them retroactively they seem obviously stupid. But if we actually walked through predictions in advance, I think he’d see that actual gradualists are much better predictors than he imagines.
That seems a bit uncharitable to me. I doubt he rejects those heuristics wholesale. I’d guess that he thinks that e.g. recursive self-improvement is one of those things where these heuristics don’t apply, and that this is foreseeable because of e.g. the nature of recursion. I’d love to hear more about what sort of knowledge about “operating these heuristics” you think he’s missing!
Anyway, it seems like he expects things to seem more-or-less gradual up until FOOM, so I think my original point still applies: I think his model would not be “shaken out” of his fast-takeoff view due to successful future predictions (until it’s too late).
He says things like AlphaGo or GPT-3 were really surprising to gradualists, suggesting he thinks that gradualism only works in hindsight.
I agree that after shaking out the other disagreements, we could just end up with Eliezer saying “yeah but automating AI R&D is just fundamentally unlike all the other tasks to which we’ve applied AI” (or “AI improving AI will be fundamentally unlike automating humans improving AI”) but I don’t think that’s the core of his position right now.