How about “able to automate most simple tasks where it has an example of that task being done correctly”? Something like that could make researchers much more productive. Repeat “the most time-consuming part of your workflow now requires effectively none of your time or attention” a few dozen times and that does end up being transformative compared to the state before the series of improvements.
I think “would this technology, in isolation, be transformative” is a trap. It’s easy to imagine “if there was an AI that was better at everything than we are, that would be transformative”, then look at the trend line and notice “hey, if this trend line holds we’ll have AI that is better than us at everything”, and finally conclude “I see lots of proposals for safe AI systems, but none of them safely give us that transformative technology”. But I think what happens between now and when AIs are better than humans-in-2023 at everything matters.
I’m not particularly concerned about AI being “transformative” or not. I’m concerned about AGI going rogue and killing everyone. And LLMs automating workflows is great and not (by itself) omnicidal at all, so that’s… fine?
But I think what happens between now and when AIs that are better than humans-in-2023 at everything matters.
As in, AIs boosting human productivity might/should let us figure out how to make stuff safe as it comes up, so no need to be concerned about us not having a solution to the endpoint of that process before we’ve made the first steps?
The problem is that boosts to human productivity also boost the speed at which we’re approaching that endpoint, and there’s no reason to think they differentially improve our ability to make things safe. So all that would do is accelerate us harder while we’re flying towards the wall at a lethal speed.
As in, AIs boosting human productivity might/should let us figure out how to make stuff safe as it comes up, so no need to be concerned about us not having a solution to the endpoint of that process before we’ve made the first steps?
I don’t expect it to be helpful to block individually safe steps on this path, though it would probably be wise to figure out what unsafe steps down this path look like concretely (which you’re doing!).
But yeah. I don’t have any particular reason to expect “solve for the end state without dealing with any of the intermediate states” to work. It feels to me like a team starting a chat application and delaying the “obtain customers” step until they support every language, have a chat architecture that could scale up to serve everyone, and have found a moderation scheme that works without human input.
I don’t expect that team to ever ship. If they do ship, I expect their product will not work, because I think many of the problems they encounter in practice will not be the ones they expected to encounter.