Somewhat surprised that this list doesn’t include something along the lines of “punt this problem to a sufficiently advanced AI of the near future.” This could dramatically decrease the amount of time required to implement some of these proposals, or could itself yield (and proceed to implement) promising new proposals.
It seems to me that, in general, human intelligence augmentation is often framed as vaguely zero-sum with AGI development (“we all have to get a lot smarter before AGI, or else...”), but it seems quite possible that AGI or near-AGI could itself help with the problem of human intelligence augmentation.
So your suggestion for accelerating strong human intelligence amplification is …checks notes… “don’t do anything”?
Or are you suggesting accelerating AI research in order to use the improved AI faster? I guess technically that would accelerate amplification but seems bad to do.
Maybe AI could help with some parts of the research. But:
1. we probably don’t need AI to do it, so we should do it now, and
2. if we’re not all dead, there will still be a bunch of research that has to be done by humans.
On a psychologizing note, your comment seems like part of a pattern of trying to wriggle out of doing things the hard way that works. Looking for such cheat codes is good, but not if you don’t aggressively prune the ones that don’t actually work: hard+works is better than easy+not-works.
I am not suggesting either of those things. You enumerated a bunch of ways we might use cutting-edge technologies to facilitate intelligence amplification, and I am simply noting that frontier AI seems like it will inevitably become one such technology in the near future.
> On a psychologizing note, your comment seems like part of a pattern of trying to wriggle out of doing things the hard way that works.
Completely unsure what you are referring to, or what the other datapoints in this supposed pattern are. Strikes me as somewhat ad-hominem-y unless I am misunderstanding what you are saying.
AI helping to do good science wouldn’t make the work any less hard; it would just cause the same hard work to happen faster.
> hard+works is better than easy+not-works

seems trivially true. I think the full picture is something like:

efficient+effective > inefficient+effective > efficient+ineffective > inefficient+ineffective
Of course agree that if AI-assisted science is not effective, it would be worse to do than something that is slower but effective. Seems like whether or not this sort of system could be effective is an empirical question that will be largely settled in the next few years.
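For concreteness, the ranking above is just a lexicographic ordering: effectiveness dominates, and efficiency only breaks ties among equally effective options. A minimal sketch of that ordering (mine, not from the thread; the option names are made up for illustration):

```python
from itertools import product

def preference_key(option):
    # Sort key: effectiveness first, efficiency second (True sorts above False).
    return (option["effective"], option["efficient"])

# Enumerate all four combinations of efficient/inefficient x effective/ineffective.
options = [
    {
        "name": ("efficient" if efficient else "inefficient")
                + "+" + ("effective" if effective else "ineffective"),
        "efficient": efficient,
        "effective": effective,
    }
    for efficient, effective in product([True, False], repeat=2)
]

# Prints, best first: efficient+effective, inefficient+effective,
# efficient+ineffective, inefficient+ineffective.
for option in sorted(options, key=preference_key, reverse=True):
    print(option["name"])
```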