What I don’t think “how much of the universe is tractable” by itself captures is “how much more effective would an SI be if it had the ability to interact with a smaller or larger part of the world, versus having to work everything out by theory”. I think it’s clear that human beings are more effective when they can interact with the world. LLMs don’t seem to get that much more effective from it.
I think a lot of AI safety arguments assume an SI could deal with problems in a completely tractable, purely-by-theory fashion. Often that assumption isn’t needed for the argument, and it seems implausible to those who don’t believe in such a strongly tractable universe.
My personal intuition is that to deal effectively with more and more complex systems, one has to take an increasingly experimental, interaction-based approach, regardless of one’s intelligence. But I don’t think that rules out a very effective SI following such an approach. Whether this intuition is correct remains to be seen.