Like you, I am a fan of Lem, who is sadly underrated in the West. And I am quite sure that not only will we be unable to communicate with alien lifeforms, we will not even recognize them as such. (Well, I do not even believe that we are a lifeform to begin with, but that topic is for another day.)
As for programming languages and your gazelle analogy: notice that you fixed the gene position, something that is unlikely to be an issue for a non-human mind. Just restructure the algorithm as needed; as long as the effort is not exponential, who cares. Computer languages are crutches for the feeble human brain. An intelligence that is not hindered by human shortcomings would just create the algorithm and run it without any intermediate language/compiler/debugger needed.
How do you define “lifeform” so as to make us not examples? (Is the point e.g. that “we” are our _minds_ which could in principle exist without our _bodies_? Or do you consider that _Homo sapiens_ bodies don’t constitute a lifeform?)
I have mentioned multiple times on this site over the years that any definition of life that is algorithmic, rather than based on the biological substrate we happen to be built on, is necessarily wide enough to include some of what we consider non-living objects, like, say, stars. I also discussed this in my blog post.
An intelligence that is not hindered by human shortcomings would just create the algorithm and run it without any intermediate language/compiler/debugger needed.
Is that a “There are 10 types of entities in the universe: those that understand binary and those that don’t” type of statement? ;-)
I did find the initial question interesting, but I suspect it will remain debated for a while, which is not a bad thing. Our existence is rather messy and tangled, so ultimate truths or answers are probably more transient than enduring.
AFAIU, your argument is that a super-human intelligence can look at the program as a whole, be aware that both hind legs need to be the same length, and modify the code in both places to satisfy the constraint.
While imaginable, in the real world I don’t see this happening except in toy examples (say, an academic exercise of writing a toy sorting algorithm). Actual software projects are big and are modified by many actors, each with little understanding of the whole. Natural selection is performed by an entity that is, from a human point of view, completely mindless. The same goes for genetic algorithms and, possibly, ML.
The point I was trying to make is that in such piecemeal, uninformed development, some patterns may emerge that are, in a way, independent of the kind of development process (human-driven, evolution, etc.)
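The genetic-algorithm case can be sketched in a few lines of Python (a toy model of my own; the encoding, fitness function, and parameters are arbitrary illustrations, not anyone’s actual method). The two hind-leg lengths sit at two independent loci, no actor knows they must match, and mutation edits each locus piecemeal; selection alone enforces the symmetry:

```python
import random

# Toy sketch: each genome encodes the two hind-leg lengths at two
# independent loci. No agent "knows" the legs must match; selection
# alone punishes asymmetry, mimicking a mindless, uninformed process.

random.seed(0)

def fitness(genome):
    left, right = genome
    return -abs(left - right)  # symmetric legs score highest (0)

def mutate(genome, rate=0.3):
    # Mutation touches each locus independently -- piecemeal edits
    # with no view of the whole.
    return tuple(g + random.uniform(-1, 1) if random.random() < rate else g
                 for g in genome)

def evolve(pop_size=50, generations=200):
    pop = [(random.uniform(0, 10), random.uniform(0, 10))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # selection, nothing more
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
print(abs(best[0] - best[1]))  # asymmetry shrinks toward 0 with no global plan
```

The pattern "both legs end up equal" emerges here just as it would if a programmer had factored the length into one shared variable, which is the process-independence I was gesturing at.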
Ah, I agree that mindless factorized development can lead to similar patterns, sure. But to examine this conjecture one has to do some honest numerical modeling of the process as applied to… an emergent language? Something else?
I’m intrigued by your topic for another day.