My take on this is that civilizations overwhelmingly terminate, and overwhelmingly by means other than some independently willed super-AI. Which is what I’d expect anyway, because this specific scenario of AI doomsday followed by AI expansion is just one of many very speculative ways in which a civilization could come to an end.
With regards to “utilities” and “utility functions”, one needs to carefully distinguish between a mathematical function (with some fairly abstract input domain) that may plausibly and uncontroversially describe a goal implemented in an AI, and a hypothetical, speculative function which much more closely follows the everyday meaning of the word “function” (i.e. purpose) and has actual reality as its input domain.
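(To make the first sense concrete, here is a minimal toy sketch, entirely illustrative and with made-up details: a function over a well-defined abstract domain, here board positions encoded as strings, which a program can actually evaluate and optimize. Nothing analogous exists for the second sense, since “actual reality” is not a data structure one can pass as an argument.)

```python
# Minimal toy sketch (illustrative; all details made up): a "utility
# function" in the uncontroversial mathematical sense -- a map from a
# well-defined abstract input domain (9-character tic-tac-toe boards)
# to real numbers, which a program can evaluate directly.

def utility(board: str) -> float:
    """Score a board from X's perspective: +1 win, -1 loss, 0 otherwise."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] == board[b] == board[c] != '.':
            return 1.0 if board[a] == 'X' else -1.0
    return 0.0

print(utility('XXXOO....'))  # 1.0 -- the input domain is strings, not reality
```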
My take on this is that civilizations overwhelmingly terminate, and overwhelmingly by means other than some independently willed super-AI.
Is this counting the failure of intelligent life to develop as “termination”? You wrote elsewhere:
Let me note that the probability of 1 kilobit of specific genetic code forming spontaneously is 2^-1024. We don’t know how large a low-probability ‘miracle’ life requires, but it can’t be very small (or we’d have abiogenesis in the lab), and intuition often fails at exponents. If abiogenesis requires merely several times more lucky bits than “we didn’t have it forming in the lab” implies, there’s simply no life anywhere in the observable universe, except on Earth.
The dark sky does not scare me, for I know not to take some intuitions too seriously. Abiogenesis is already mindbogglingly improbable simply for not occurring in the vast number of molecules in a test tube in a lab, over the vast timespan of days; “never in the observable universe” sure feels like a lot more, but it is not far off in terms of bits that are set by luck.
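(For concreteness, here is a back-of-envelope version of that exponent arithmetic, with my own rough numbers rather than anything from the quoted text: even granting every atom in the observable universe one trial per picosecond for the age of the universe, the total trial count buys only a few hundred bits of luck.)

```python
import math

# Back-of-envelope version of the exponent arithmetic (my own rough
# numbers, not from the quoted text): how many bits of "luck" can sheer
# number of trials buy, even at absurdly generous universe-scale rates?

atoms_in_universe = 1e80           # common rough estimate
age_of_universe_s = 4.35e17        # ~13.8 billion years in seconds
trials_per_atom_per_s = 1e12       # generous: one trial per picosecond

total_trials = atoms_in_universe * age_of_universe_s * trials_per_atom_per_s
bits_bought = math.log2(total_trials)

print(f"~{bits_bought:.0f} bits")  # ~364 bits -- nowhere near 1024
```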
I have done somewhat more careful thinking about abiogenesis since then, and I now find it considerably less plausible that life is mindbogglingly improbable.
Plus I was never a big believer that we’re going straight into a space opera.
To recap on the improbability of life (not sure if we argued that point): IMO the meaningful way to ponder such questions is to ponder generalized physics, that is, the world around you (rather than the world from God’s perspective plus your incarnation into a specific thinking being, which I consider nonsense). When a theory assigns some probability to the observables, that means a -log2(probability) addition of bits to the total complexity. We should try to minimize total complexity to make our best (least speculative) guess. So there’s a theory with laws of physics, plus life as we know it requiring plenty of bits. It should be possible to express the required bits more compactly via tweaks to the laws of physics; that’s the general consideration favoring generating data via some code over just writing the data down (though this is still shakier than I’d be comfortable with).
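To make that accounting concrete, here is a toy sketch with entirely made-up numbers (mine, not computed from any real physics): a theory’s total description cost is the bits needed to write its code plus -log2(P) for whatever it leaves to chance, and the least speculative guess is whichever total comes out smaller.

```python
import math

# Toy illustration (numbers entirely made up) of the accounting above:
# a theory's total description cost is the bits needed to write its code
# plus -log2(P) for every observation it merely assigns a probability;
# the least speculative guess is whichever total is smallest.

def total_bits(code_bits: float, p_observations: float) -> float:
    """Description length in bits: theory code plus residual luck."""
    return code_bits - math.log2(p_observations)

# Theory A: bare physics, with life left to a 2^-1024 thermodynamic fluke.
theory_a = total_bits(code_bits=1000.0, p_observations=2.0 ** -1024)

# Theory B: slightly more code (a "convenient cocktail" built into the
# tweaked laws) that makes replicators likely, so far less luck is needed.
theory_b = total_bits(code_bits=1200.0, p_observations=2.0 ** -10)

print(theory_a, theory_b)  # 2024.0 vs 1210.0: B wins despite longer code
```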
What I expect now is that life arises through evolution of some really weird chemical system where long pieces of information replicate because the chemistry of that particular crazy cocktail happens to be very convenient for it, and where they can catalyse replication in this complicated and diverse cocktail in many different ways (so that evolution has someplace to go, gradually taking over the replication functionality). Having that occur multiple times on a planet would not save any further description bits, either, so the necessary cocktail conditions could be quite specific, of the “happens in just a few spots on the entire planet” kind, very hard to re-create.
edit: also, from what I remember, my focus on thermodynamic luck as an explanation was based on the lack of an alternative defined well enough to ponder what exactly it substitutes for that luck. The scenario above is still quite a lot less defined than I’d be comfortable with, but at least it substitutes something, and there’s a plausibility argument that what it substitutes costs fewer bits than what it replaces.