Similarly, AGI is quite a general technical problem. You don’t just need to make an AI that can do narrow task X; it has to work in cases Y and Z too, or it will fall over and fail to take over the world at some point. To do this, you need to create very general analysis and engineering tools that generalize across these situations.
I don’t think this is a valid argument. Counter-example: you could build an AGI by uploading a human brain onto an artificial substrate, and you don’t “need to create very general analysis and engineering tools that generalize across these situations” to do this.
More realistically, it seems pretty plausible that all of the patterns/rules/heuristics/algorithms/forms of reasoning necessary for “being generally intelligent” can be found in human culture, and ML can distill these elements of general intelligence into a (language or multimodal) model that will then be generally intelligent. This also doesn’t seem to require very general analysis and engineering tools (a toy sketch of the “distillation” framing follows below). What do you think of this possibility?
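To make the “distillation” framing a bit more concrete, here is a toy sketch, purely illustrative and far simpler than any real system: fit a simple statistical model to a text corpus standing in for “human culture”, then generate from it. The claim under discussion is roughly that a sufficiently scaled-up version of this kind of fit would capture the reasoning behind the text, not just its surface statistics.

```python
# Toy illustration (not a claim about how real language models work):
# "distill" the statistics of a tiny corpus into a model, then sample from it.
# Real models replace the bigram counts with a large neural network, but the
# shape of the pipeline -- fit a predictive model to cultural text, then
# generate -- is the same.
import random
from collections import defaultdict

corpus = "the cat sat on the mat . the dog sat on the log ."  # stand-in for "human culture"

# Count next-word frequencies: the "distillation" step.
counts = defaultdict(lambda: defaultdict(int))
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1

def sample_next(word):
    """Sample a next word in proportion to how often it followed `word` in the corpus."""
    words, weights = zip(*counts[word].items())
    return random.choices(words, weights=weights)[0]

# Generate text from the distilled statistics.
word = "the"
out = [word]
for _ in range(8):
    word = sample_next(word)
    out.append(word)
print(" ".join(out))
```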
You’re right that the uploading case wouldn’t necessarily require strong algorithmic insight. However, it’s a bounded technical problem where progress relative to the difficulty is comparatively easy to evaluate, e.g. based on the ability to upload smaller animal brains, so it would lead to >40 year timelines absent large shifts in the field or large drivers of progress. It would also lead to a significant degree of alignment by default.
For copying culture, I think the main issue is that culture is a protocol that runs on human brains, not on computers. Analogously, there are Internet protocols saying things like “a SYN/ACK packet must follow a SYN packet”, but these are insufficient for understanding a human’s usage of the Internet. Copying these would lead to imitations, e.g. machines that correctly send SYN/ACK packets and produce semi-grammatical text but lack certain forms of understanding, especially a connection to a surrounding “real world” that is spatiotemporal, etc.
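A toy sketch of the analogy (hypothetical and simplified far below real TCP): a machine can follow the handshake rules perfectly, SYN, then SYN/ACK, then ACK, while having no model at all of what the connection is being used for.

```python
# Hypothetical, highly simplified handshake state machine. It satisfies the
# protocol rule "a SYN/ACK must follow a SYN" without representing anything
# about the meaning of the traffic that will flow over the connection.
class ToyTcpEndpoint:
    """Follows the handshake rules; knows nothing about the traffic's meaning."""

    def __init__(self):
        self.state = "LISTEN"

    def receive(self, packet):
        # The protocol only constrains which packet may follow which.
        if self.state == "LISTEN" and packet == "SYN":
            self.state = "SYN_RECEIVED"
            return "SYN/ACK"          # the rule: a SYN/ACK must follow a SYN
        if self.state == "SYN_RECEIVED" and packet == "ACK":
            self.state = "ESTABLISHED"
            return None               # connection open; contents are opaque to the protocol
        raise ValueError(f"unexpected {packet!r} in state {self.state}")

server = ToyTcpEndpoint()
print(server.receive("SYN"))   # -> "SYN/ACK"
server.receive("ACK")
print(server.state)            # -> "ESTABLISHED"
# Nothing here knows whether the human on the other end is reading email or
# doing science; that understanding lives outside the protocol.
```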
If you don’t have logic yourself, you can look at a lot of logical content (e.g. math papers) without understanding logic. Most machines work by already working, not by searching over machine designs that fit a dataset.
Also, in the cultural case, if it worked it would be decently aligned, since it could copy cultural reasoning about goodness. (The main reason I have for thinking cultural notions of goodness might be undesirable is that, as stated above, culture is just a protocol and most of the relevant value processing happens in the brains; see this post.)