My first interest here is conceptual: understanding better what “openness” even means for AI. (I see that the Open Source Initiative has been trying to figure out a definition for 7 months so far.) AI is not like ordinary software. For example, by the classic distinction between code and data, one might consider model weights to be more like data than code. On the other hand, knowing the model architecture alone should be enough to make the weights useful, since knowing the architecture means knowing the algorithm.
So far, the most useful paradigm I have is to think of an AI as similar to an uploaded human mind. Then you can think about the differences between: having a digital brain with no memory or personality yet; having an uploaded adult individual; having a model of that individual’s life history detailed enough to recreate the individual; and so on. This way, we can use our knowledge of brains and persons to tell us the implications of different forms of AI “openness”.