The problem is that as the brain gets amplified, it always slips into a superintelligent mode, and that mode is utterly worthless. The solution is to remove all the reward associated with having the problem in the first place.
I’m not sure that’s practical. The only way to do many of these tasks is to take a step we’re going to have to solve eventually anyway, and when that happens the agent will be out of its depth, because it will be tempted to attempt the task in a superintelligent mode.
You’re going to need a team of trained reasoners to build the kind of automation you advocate. I think it’s a good idea; I don’t know of anything better, though I’m not sure it will work. In any case, tasks where current tools are limited (like writing) are good places to start: the people who can’t yet do much of these tasks (for example, finding and using information on the internet) will point you toward the people who eventually will be able to do much of them. I think the first step is the kind of automation system I’ve recently been describing under the heading “AI is unlikely to succeed”:
a problem is a good proxy for the general problem (and a reasonably competent agent will probably run into it).
a solution to it is usually a good guess at a solution to the general problem (as long as it doesn’t take a huge amount of work).
a solution exists (e.g. solving a non-trivial problem of the kind stated in the first paragraph), and an AI system can find it even in cases where we have a more effective manual means (like writing) but no existing software that can solve the task for us.
I can’t see any reason this can’t be done in the current world (I’ve never actually tried it with the right tools, which might well yield lower-quality automation than I’m imagining), and there seems to be a lot of room for good solutions out there, so my intuition is that the first step is to make them available for use. In the limit (which may be only a couple of years away, if that) these will be much easier problems for AI to solve than they are for us, because the AI systems being run in the current world are already quite different from us.
(I’m not sure how to define this: if you don’t like