If the ASI were 100% certain that there was no interesting information embedded in the Earth’s ecosystems that it couldn’t trivially simulate, then I would agree.
This is a cope. Superintelligence would definitely extract all the info it could, then disassemble us, then maybe simulate us but I got into trouble for talking about that so let’s not go there.
Just think, you’re world famous now.

Maybe you got into trouble for talking about that because you are rude and presumptive?
definitely
as a human talking about ASI, the word ‘definitely’ is cope. You have no idea whatsoever, but you want to think you do. Okay.
extract all the info it could
we don’t know how information works at small scales, and we don’t know whether an AI would either. We don’t have any idea how long it would take to “extract all the info it could”, so this phrase leaves a huge hole.
then maybe simulate us
which presumes that it is as arrogant as you in ‘knowing’ what it can ‘definitely’ simulate. I don’t know that it will be so arrogant.
I’m not sure how you think you benefit from being 100% certain about things you have no idea about. I’m just trying to maintain a better balance of beliefs.
I think this is just a nod to how he’s literally Roko, for whom googling “Roko simulation” gives a Wikipedia article on what happened last time.

Everyone here acting like this makes him some kind of soothsayer is utterly ridiculous. I don’t know when it became cool and fashionable to toss off your epistemic humility in the face of eternity; I guess it was before my time.
The basilisk is just Pascal’s mugging for edgelords.
I don’t mean to say that it’s additional reason to respect him as an authority or accept his communication norms above what you would have done for other reasons (and I don’t think people particularly are here), just that it’s the meaning of that jokey aside.

strong agree, downvote.
Why would an ASI be interested in the Earth’s ecosystems?

Whatever happened here is an interesting datapoint about the long-term evolution of thermodynamic systems away from equilibrium.

From the biological anchors paper:

This implies that the total amount of computation done over the course of evolution from the first animals with neurons to humans was (~1e16 seconds) * (~1e25 FLOP/s) = ~1e41 FLOP.
Note that this is just computation of neurons! So the total amount of computation done on this planet is much larger.
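As a sanity check, the quoted estimate is just a two-factor order-of-magnitude multiplication; both inputs are rough anchors from the biological anchors report, not measured quantities:

```python
# Back-of-envelope check of the evolutionary-computation estimate quoted above.
# Both inputs are order-of-magnitude assumptions, not measurements.

seconds_of_evolution = 1e16   # ~ time since the first animals with neurons
                              #   (1e16 s is roughly 300 million years)
neural_flops_per_sec = 1e25   # ~ aggregate FLOP/s of all nervous systems on Earth

total_flop = seconds_of_evolution * neural_flops_per_sec
print(f"{total_flop:.0e}")    # prints 1e+41
```

Nothing deep here: the exponents simply add (16 + 25 = 41), so shifting either anchor by a few orders of magnitude shifts the total by the same amount.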
This is just illustrative, but the point is that what happened here is not so trivial or boring that it’s clear that an ASI would not have any interest in it.
I’m sure people have written more extensively about this, about an ASI freezing some selection of the human population for research purposes or whatever. I’m sure there are many ways to slice it.
I just find the idea that the ASI will want my atoms for something trivial, when there are so many other atoms in the universe that are not part of a grand exploration of the extremes of thermodynamics, unconvincing.
Whatever happened here is an interesting datapoint about [...]
I think using the word “interesting” here is kinda assuming the conclusion?
Whatever happened here is a datapoint about the long-term evolution of thermodynamic systems away from equilibrium.
Pretty much all systems in the universe can be seen as “thermodynamic systems”. And for a system to evolve at all, it necessarily has to be away from equilibrium. So it seems to me that that sentence is basically saying
“Whatever happened here is a datapoint about matter and energy doing their usual thing over a long period of time.”
And… I don’t see how that answers the question “why would an ASI find it interesting?”
From the biological anchors paper [...] the point is that what happened here is not so trivial or boring that it’s clear that an ASI would not have any interest in it.
I agree that a lot of stuff has happened. I agree that accurately simulating the Earth (or even just the biological organisms on Earth) is not trivial.
What I don’t see (you making an actual argument for) is why all those neural (or other) computations would be interesting to an ASI. [1]
I’m sure people have written more extensively about this, about an ASI freezing some selection of the human population for research purposes or whatever.
Right. That sounds like a worse-than-death scenario. I agree those are entirely plausible, albeit maybe not the most likely outcomes. I’d expect those to be caused by the AI ending up with some kind of human-related goals (due to being trained with objectives like e.g. “learn to predict human-generated text” or “maximize signals of approval from humans”), rather than by the ASI spontaneously developing a specific interest in the history of how natural selection developed protein-based organic machines on one particular planet.
I just find the idea that the ASI will want my atoms for something trivial, when [...]
As mentioned above, I’d agree that there’s some chance that an Earth-originating ASI would end up with a goal of “farming” (simulated) humans for something (e.g. signals of approval), but I think such goals are unlikely a priori. Why would an ASI be motivated by “a grand exploration of the extremes of thermodynamics” (whatever that even means)? (Sounds like a waste of energy, if your goal is to (e.g.) maximize the number of molecular squiggles in existence.) Are you perhaps typical-minding/projecting your own (laudable) human wonder/curiosity onto a hypothetical machine intelligence?
Analogy: If you put a few kilograms of fluid in a box, heat it up, and observe it for a few hours, the particles will bop around in really complicated ways. Simulating all those particle interactions would take a huge amount of computation, it would be highly non-trivial. And yet, water buckets are not particularly exciting or interesting. Complexity does not imply “interestingness”.
“Whatever happened here is a datapoint about matter and energy doing their usual thing over a long period of time.”
Not all thermodynamic systems are created equal. I know enough about information theory to know that making bold claims about what is interesting and meaningful is unwise. But I also know it is not certain that there is no objective difference between a photon wandering through a vacuum and a butterfly.
Here is one framework for understanding complexity that applies equally well to stars, planets, plants, animals, humans and AIs. It is possible I am typical-minding, but it is also possible that the universe cares about complexity in some meaningful way. Maybe it helps increase the rate of entropy relaxation. I don’t know.
spontaneously developing a specific interest in the history of how natural selection developed protein-based organic machines on one particular planet
not ‘one particular planet’ but ‘at all’.
I find it plausible that there is some sense in which the universe is interested in the evolution of complex nanomachines. I find it likely that an evolved being would be interested in the same. I find it very likely that an evolved being would be particularly interested in the evolutionary process by which it came into being.
Whether this leads to s-risk or not is another question, but I think your implication that all thermodynamic systems are in some sense equally interesting is just a piece of performative cynicism and not based on anything. Yes this is apparently what matter and energy will do given enough time. Maybe the future evolution of these atoms is all predetermined. But the idea of things being interesting or uninteresting is baked into the idea of having preferences at all, so if you are going to use that vocabulary to talk about an ASI you must already be assuming that it will not see all thermodynamic systems as equal.
I feel like this conversation might be interesting to continue, if I had more bandwidth, but I don’t. In any case, thanks for the linked article, looks interesting based on the abstract.
Haha, totally agree, I’m very much at the limit of what I can contribute.
In an ‘Understanding Entropy’ seminar series I took part in a long time ago, we discussed measures of complexity and such things. Nothing was clear then, or is now, but the thermodynamic arrow of time, plus the second law of thermodynamics, plus something-something-complexity, plus the Fermi observation, seems to leave a lot of room for “this planet is special” even from a totally misanthropic frame.

Enjoy the article!