If it was not going to get any unexpected capabilities, why is it transformational?
I can at least see a plausible viewpoint in this one. “AI can do everything any human can do, but 1000x faster and cheaper, with arbitrarily many instances in parallel” seems plenty transformational without being unexpected.
I think the things that are practical to do in a world where we have such AIs are so different from what is practical in our world that it makes sense to call them “unexpected capabilities,” even if, formally, humans can do them too.
I agree with how different they are and how different the world with them will/would be. I disagree with calling them unexpected because, in fact, many of us expect them.
We expect that they will exist, but we don’t know what exactly these capabilities will be, so they are “unexpected” in this sense.
I think we might be talking past each other. I agree that the specific capability I mentioned, even if I can write it down as an advance prediction of a thing that will happen, will have consequences I do not and plausibly cannot expect. In principle, if I had 1000 hours to analyze 1 hour of thinking by 1 instance of such a system, I might be able to figure out what it’s doing and why. I won’t, but it’s possible-in-principle. To me, that means it is an expected capability that is nevertheless transformational. Unexpected capabilities would be a change in quality of thought that I could not bridge with any amount of effort, because it is simply beyond me.
Did somebody say “unforeseen consequences”? https://youtube.com/clip/UgkxPOTzENZIgrts6gSwAlvWZRgPZvyoRP3P?si=rkSF0rljWfcFZ640
Sorry, couldn’t resist the reference. Yeah, I think I see yet another interpretation here that neither of you have mentioned yet.
Foreseen and expected capabilities: that which seems possible and likely enough that you predict it will probably become available.
Foreseen but unexpected capabilities: that which seems possible but unlikely, so you haven’t put much effort into preparing for it. Like the Yellowstone supervolcano eruption, we believe some things to be possible and yet don’t expect to see them.
Unforeseen but expected capabilities?: this is implied by the booleans I set up, but I’m not sure it makes sense exactly. I think it could make sense to have a rough range of power and set of categories in mind, but not have fleshed out the specifics, and call the capabilities in that set ‘unforeseen but expected’. Like, imagine getting a thousand scientific experts together in groups of five, pairing them with science fiction authors, and instructing every group to brainstorm the widest set of physically plausible capabilities given a certain performance level of the underlying model. You can imagine the set of ideas this project produces without being able to come up with them all yourself. Bounded unknown unknowns, in a way.
Unforeseen and unexpected capabilities: what if we are fundamentally mistaken about some key assumptions? Missing a key aspect of the physical laws of the universe? Then the very criteria we set up for ourselves to define ‘physically plausible’ would be ruling out some things incorrectly. These would then be unforeseen and unexpected, totally outside our distribution. Or what if some capabilities are so complex that no living human is even able to imagine them, and yet the universe allows for them? Or what if all the experts and sci-fi authors, being human, share some priors in common that systematically bias them away from conceiving of a class of capabilities? If so, then perhaps even a large sample size of seemingly independent groups wouldn’t come up with ideas from the systematically neglected set. Extra-unknown unknowns.
I would call it “unexplainable”, but yes, it seems to be a terminological difference.