The sort of complication I imagine is: it is possible to build powerful AGI without having good frameworks for understanding its behavior, and then people do that. It seems like, all else equal, understanding a system is useful, not only for building it but also for having reasonable expectations about its behavior (which is in turn useful for making further preparations, solving safety problems, etc.). To the extent that understanding things at a deep level ends up being necessary for building them at all, what we’re doing won’t matter (except insofar as it is indirectly useful for people who care about safety to make modest technical contributions).
OK, helpful. This makes more sense to me.
But this distinction doesn’t seem to get at the important thing. To someone working on logic you would say “I don’t know whether deduction systems will be formalized in the future, but I know that agents will be able to reason. So this suggests to me that your particular approach for defining reasoning, via formalization, is unnecessary.” In some sense this is true—if I’m an early mathematician and I don’t do logic, someone else will—but it has relatively little bearing on whether logic is likely to be mathematically productive to work on. If the question is about impact rather than productivity as a research program, then see the discussion above.
This reply would make more sense if I were saying that knowing how to overcome Löbian obstacles would never be necessary for building well-functioning AI. But I was making the weaker claim that either it would never be necessary OR it would be solved in the ordinary development of AI. So if someone formalizing logic a long time ago with the aim of building thinking machines had thought that when thinking machines were built, logic wouldn’t be properly formalized and the machines wouldn’t work, then I might have complained. But if they had said, “I’d like to build a thinking machine, and I think that formalizing logic will help get us there, whether it is done by others or by me. And maybe it will go a bit better or come a bit sooner if I get involved. So I’m working on it,” then I wouldn’t have had anything to say.
Anyway, I think we roughly understand each other on this thread of the conversation, so maybe there is no need to continue.