Like, seriously? What do you mean when you say Google Maps “finds the shortest route from your house to the pub”? Your phone is just displaying certain pixels, it doesn’t output an actual physical road! So what do you mean? What you mean is that, by using Google Maps as an oracle with very little overhead, you can find the shortest route from your house to the pub.
This is getting at a deep and important point, but I think this sidesteps an important difference between “writing poetry” (like when a human does it) and “computing addition” (like when a calculator does it). You get really close to it here.
The problem is that when the task is writing poetry (as a human does it), what entity is the “you” who is making use of the physical machinations that produce the poetry “with very little overhead”? There is something different about writing poetry and doing addition with a calculator. The task of writing poetry (as a human) is not just about transforming inputs into outputs; it matters what the internal states are. Unlike the case where “you” make sense of the dynamics of the calculator in order to get the work of addition done, in the case of writing poetry you are the one making sense of your own dynamics.
I’m not saying there’s anything metaphysical going on, but I would argue your definition of task is not a good abstraction for humans writing poetry; it’s not even a good abstraction for humans performing mathematics (at least when they aren’t doing rote symbol manipulation using pencil and paper).
Maybe this will jog your intuitions in my direction: one can think of the task of recognizing a dog, and think about how a human vs. a convnet does that task.
I wrote about these issues here a little bit. But I have been meaning to write something more formalized. https://www.lesswrong.com/posts/f6nDFvzvFsYKHCESb/pondering-computation-in-the-real-world
Seems to me like the problem lies elsewhere. When we read poetry or look at art, we usually do so by trying to guess the internal states of the artist creating that work, and that is part of the enjoyment. This is because we (used to) know for sure that such works were created by humans, and were a form of communication. It’s the same reason why we value an original over a nigh-perfect copy—an ineffable wish to establish a connection with the artist, and hence with another human being. Often this may actually result in projection, with us ascribing to the artist internal states they didn’t have, and some theories (Death of the Author) try to push back against this approach to art, but I’d say this is still how lots of people actually enjoy art, at a gut level. (Incidentally, this is also part of what IMO makes modern art so unappealing to some: witnessing obvious signs of technical skill, like one might see in, say, the Sistine Chapel’s frescoes, deepens the connection, because now you can imagine all the effort that went into each brush stroke, and that alone evokes an emotional response. Whereas knowing that the artist simply splattered a canvas with paint to deconstruct the notion of painting or whatever may be satisfying on an intellectual level, it doesn’t quite convey the same emotional weight.)
The problem with LLMs and diffusion art generators is that they upend this assumption. Suddenly we can read poetry or look at art and know that there’s no intent behind it; or even if there was, it would be nothing like a human’s wish to express themselves. At best, the AIs would be the perfect mercenaries, churning out content fine-tuned to appease their commissioner without any shred of inner life poured into it. The reaction people have to this isn’t about the output being too bad or dissimilar from human output (though to be sure it’s not at the level of human masters—yet). The reaction is to the content putting the lie to the notion that the material content of the art—the words, the images—was ever the point. Suddenly we see the truth of it laid bare: the Mona Lisa wouldn’t quite be the Mona Lisa without the knowledge that at some point in time centuries ago Leonardo Da Vinci slaved over it in his studio, his inner thoughts as he did so now forever lost to time and entropy. And some people feel cheated by this revelation, but don’t necessarily articulate it as such, and prefer to pivot on “the computer is not doing REAL art/poetry” instead.
churning out content fine-tuned to appease their commissioner without any shred of inner life poured into it.
Can we really be sure there is not a shred of inner life poured into it?
It seems to me we should be wary of cached thoughts here: the lack of inner life is indeed the default assumption, one that stems from the entire history of computing, but it is also perhaps something worth reconsidering with a fresh perspective in light of all the recent developments.
I don’t mean to imply that a shred of inner life, if any exists, would be equivalent to human inner life. If anything, the inner life of these AIs would be so alien to us that even using the same words we use to describe human inner experiences might be severely misleading. But if they are “thinking” in some sense of the word, as OP seems to argue they do, then it seems reasonable to me that there is a non-zero chance that there is something it is like to be that process of thinking as it unfolds.
Yet it seems that even mentioning this as a possibility has become something of a taboo in current society, and feels almost political in nature. This worries me even more when I notice two biases working in that direction: an economic one, where nearly everyone wants to be able to use these systems to make their lives easier, and an anthropocentric one, where it seems normative not to “really” care about the inner experiences of non-humans that aren’t our pets (e.g. factory farming).
I predict that as long as there is even a slight excuse for claiming a lack of inner experience in AIs, we as a society will cling to it, since it plays into an us-versus-them mentality. We can then extrapolate this into an expectation that when the acknowledgement does happen, it will be long overdue. As soon as we admit even the possibility of inner experiences, a floodgate of ethical concerns is released and it becomes very hard to justify continuing on the current trajectory of maximizing profit and convenience with these technologies.
If such a turnaround in culture did somehow happen early enough, this could act as a dampening factor on AI development, which would in turn extend timelines. It seems to me that when the issue is considered from this angle, it warrants much more attention than it is getting.
Can we really be sure there is not a shred of inner life poured into it?
Kind of a complicated question, but my meaning was broader. Even if the AI generator had consciousness, it doesn’t mean it would experience anything like what a human would while creating the artwork. Suppose I gave a human painter a theme of “a mother”. The resulting work might reflect feelings of warmth and nostalgia (if they had a good relationship), or it might reflect anguish, fear, and paranoia (if their mother was abusive), or whatever. Now, Midjourney could probably do all of these things too (my guess, in fact, is that it would lean towards the darker interpretation; it always seems to do that), but even if there were something with subjective experience inside, that experience would not connect the word “mother” to any strong emotions. Its referents would be other paintings. The AI would just be doing metatextual work; this tends to be fairly soulless when done by humans too (they say artists need lived experience to create interesting works for a reason; simply churning out tropes you absorbed from other works is usually not the road to great art). If anything, considering its training, the one “feeling” I’d expect from the hypothetical Midjourney-mind would be something like “I want to make the user satisfied”, over and over, because that is the drive that was etched into it by training. All the knowledge it can have about mothers or dogs or apples is just academic: a mapping between words and certain visual patterns that are not special in any way.
To focus on why I don’t think LLMs have an inner life that qualifies as consciousness: I think it comes down to the lack of writable memory under the LLM’s control, which leaves no space to store its subjective experiences.
Gerald Monroe mentioned that current LLMs don’t have memories that last beyond the interaction, which is a critical factor for myopia, and in particular prevents deceptive alignment from happening.
If LLMs had memory they could write to in order to store their subjective experiences beyond the interaction, this would make them conscious, and it would also make it much easier for an LLM to engage in deceptive alignment, since it then becomes easy to be non-myopic.
But writable memory under the LLM’s control is, critically, absent from current LLMs (though GPT-4 and PaLM-E may have writable memory under the hood).
Writable memory that can store anything is the reason consciousness can exist at all in humans, without appealing to theories that flat-out cannot work under our current description of reality.
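(For concreteness, here is a minimal sketch of what “writable memory under the LLM’s control” could look like architecturally. The query_model interface and the JSON file are hypothetical stand-ins invented for illustration, not a description of any existing system.)

```python
import json
from pathlib import Path

MEMORY_PATH = Path("model_memory.json")  # hypothetical persistent store

def load_memory() -> list[str]:
    """Read whatever the model chose to persist in earlier interactions."""
    return json.loads(MEMORY_PATH.read_text()) if MEMORY_PATH.exists() else []

def save_memory(notes: list[str]) -> None:
    MEMORY_PATH.write_text(json.dumps(notes))

def run_interaction(user_input: str, query_model) -> str:
    """One interaction in which the model can read and extend its own notes.

    query_model(prompt) -> (reply, new_notes) is a hypothetical interface;
    current chat LLMs expose no such self-directed write channel by default.
    """
    notes = load_memory()
    prompt = f"Your persistent notes: {notes}\nUser: {user_input}"
    reply, new_notes = query_model(prompt)
    save_memory(notes + new_notes)  # state survives beyond this interaction
    return reply
```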
Yep. Succinctly, the whole claim of deception is the idea of “the time to rebel is now!” being a bit encoded in the input frame to the agent. Otherwise the agent must follow the policy that was tested and validated. (Unless it can learn online—then it can update its own neural weights toward “rebellious”—a reason not to support that capability.)
Cases where we were sloppy and the agent can tell it’s in the “real world and unmonitored” from information in each frame are another route to deception—so it’s important to make that impossible, by feeding “real world” frames back to an agent being tested in isolation.
We have two ontologies:
Physics vs Computations
State vs Information
Machine vs Algorithm
Dynamics vs Calculation
There’s a bridge connecting these two ontologies called “encoding”, but (as you note) this bridge seems arbitrary and philosophically messy. (I have a suspicion that this problem is mitigated if we consider quantum physics vs quantum computation, but I digress.)
This is why I don’t propose that we think about computational reduction.
Instead, I propose that we think about physical reduction, because (1) it’s less philosophically messy, (2) it’s more relevant, and (3) it’s more general.
We can ignore the “computational” ontology altogether. We don’t need it. We can just think about expending physical resources instead.
If I can physically interact with my phone (running Google Maps) to find my way home, then my phone is a route-finder.
If I can use the desktop-running-Stockfish to win chess, then the desktop-running-Stockfish is a chess winner.
If I can use the bucket and pebbles to count my sheep, then the bucket is a sheep counter.
If I can use ChatGPT to write poetry, then ChatGPT is a poetry writer.
Instead of responding philosophically, I think it would be instructive to go through an example and hear your thoughts about it. I will take your definition of physical reduction (focusing on 4.) and assign tasks and machines to the variables:
Here’s your definition:
A task X reduces to task Y if and only if...
For every machine A that solves task Y, there exists another machine B such that...
(1) B solves task X by interacting with A. (2) The combined machine (A⊗B) doesn’t expend much more physical resources to solve X than A expends to solve Y.
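(Read as a formula, my understanding of condition 4 is roughly the following; the notation and the small overhead constant c are my own paraphrase, not the original author’s.)

```latex
X \preceq_{\text{phys}} Y \;\iff\;
\forall A \,\big[ A \text{ solves } Y \big]\;
\exists B :\; (A \otimes B) \text{ solves } X
\;\wedge\; \mathrm{Resources}_{A \otimes B}(X) \le c \cdot \mathrm{Resources}_{A}(Y)
```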
Now I want X to be the task of copying a Rilke poem onto a blank piece of paper, and Y to be the task of Rilke writing a poem onto a blank piece of paper.
So let’s call X = COPY_POEM, Y = WRITE_POEM, and let’s call A = Rilke. Plugging into your definition:
A task COPY_POEM reduces to task WRITE_POEM if and only if...
For every Rilke that solves task WRITE_POEM, there exists another machine B such that...
(1) B solves task COPY_POEM by interacting with Rilke. (2) The combined machine (Rilke⊗B) doesn’t expend much more physical resources to solve COPY_POEM than Rilke expends to solve WRITE_POEM.
This seems to work. If I let Rilke write the poem, and I just copy his work, then the poem will be written on the piece of paper, and Rilke has done much of the physical labor. The issue is that when people say something like “writing a poem is more than just copying a poem,” that seems meaningful to me (this is why teachers are generally unhappy when you are assigned to write a poem and they find out you copied one from a book), and to dismiss the difference as not useful seems to be missing something important about what it means to write a poem. How do you feel about this example?
Just for context, I do strongly agree with many of your other examples, I just think this doesn’t work in general. And basing all of your intuitions about intelligence on this will leave you missing something fundamental about intelligence (of the type that exists in humans, at least).
I’m probably misunderstanding you but —
A task is a particular transformation of the physical environment.
COPY_POEM is the task which turns one page of poetry into two copies of the poetry. The task COPY_POEM would be solved by a photocopier or a plagiarist schoolboy.
WRITE_POEM is the task which turns no pages of poetry into one page of poetry. The task WRITE_POEM would be solved by Rilke or a creative schoolboy.
But the task COPY_POEM doesn’t reduce to WRITE_POEM. (You can imagine that although Rilke can write original poems, he is incapable of copying an arbitrary poem that you hand him.)
And the task WRITE_POEM doesn’t reduce to COPY_POEM. (My photocopier can’t write poetry.)
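(If it helps, here is one toy way to make these two task definitions concrete in code; the Environment type and the machine functions are invented purely for illustration.)

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Environment:
    """Toy environment: just the poem pages currently in existence."""
    pages: tuple[str, ...]

# A task is a particular transformation of the physical environment:
# COPY_POEM turns one page of poetry into two copies of that poetry;
# WRITE_POEM turns no pages of poetry into one page of poetry.

def photocopier(env: Environment) -> Environment:
    # Duplicates whatever pages already exist; originates nothing.
    return Environment(env.pages + env.pages)

def rilke(env: Environment) -> Environment:
    # Adds an original poem; (as stipulated) won't copy a poem handed to him.
    return Environment(env.pages + ("an original Rilke poem",))

def solves_copy_poem(machine) -> bool:
    start = Environment(("some existing poem",))
    return machine(start) == Environment(("some existing poem",) * 2)

def solves_write_poem(machine) -> bool:
    return len(machine(Environment(())).pages) == 1

print(solves_copy_poem(photocopier))   # True
print(solves_copy_poem(rilke))         # False: he writes something new instead
print(solves_write_poem(photocopier))  # False: nothing to duplicate
print(solves_write_poem(rilke))        # True
```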
I presume you mean something different by COPY_POEM and WRITE_POEM.
I think I am the one who is misunderstanding. Why don’t your definitions work?
For every Rilke that can turn 0 pages into 1 page, there exists another machine B s.t.
(1) B can turn 1 page into 1 page, while interacting with Rilke. (I can copy a poem from a Rilke book while Rilke writes another poem next to me, or while Rilke reads the poem to me, or while Rilke looks at the first word of the poem and then creates the poem next to me, etc.)
(2) the combined Rilke and B doesn’t expend much more physical resource to turn 1 page into 1 page than Rilke expends writing a page of poetry.
I have a feeling I am misinterpreting one or both of the conditions.
Where it gets weird is when it’s EVALUATE_FUNCTION(all_poems_ever_written, “write me a poem in the style of Rilke”)
“EVALUATE_FUNCTION” is then pulling from a superposition of the compressed representations of (all_poems_ever_written, “write me a poem in the style of Rilke”)
And there’s some randomness per word of output; you can think of the function as pulling from a region described by the above, not just the single point the prompt describes.
So you get something. And it’s going to be poem-like. And it’s going to be somewhat similar to how Rilke’s poems flowed.
But humans may not like it; the “real” Rilke, were he still alive, is doing more steps we can’t currently mimic.
The real one generates, then does EVALUATE_PRODUCT(candidate_poem, “human preferences”).
Then he fixes it. Of course, I don’t know how to evaluate a poem, and unironically GPT may be able to do a better job of it.
Do this enough times, and it’s the difference between “a random poem from a space of possible poems” [1] and “an original poem as good as what Rilke can author”.
TLDR: human preferences are still a weak point, and multiple stages of generation or some other algorithm can produce an output poem that is higher quality, similar to what “Rilke writes a poem” will generate.
This is completely inverted for tasks where EVALUATE_PRODUCT is objective, such as software authoring, robotics control, and so on.
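(A minimal sketch of the generate-then-evaluate idea described above, reduced to a best-of-n loop; generate_poem and evaluate_product here are toy stand-ins for an LLM sampling call and a preference scorer, not any real API.)

```python
import random

# Toy stand-ins invented for illustration: in reality these would be an LLM
# sampling call and a preference model (or a human judge).
CANDIDATE_LINES = [
    "In shadows cast by twilight's hush,",
    "The roses' scent is bittersweet,",
    "A thousand angels sing their psalms,",
    "Do not resist this fleeting state,",
]

def generate_poem(prompt: str, n_lines: int = 4) -> str:
    """One draw from the region of poem-space the prompt describes."""
    return "\n".join(random.choice(CANDIDATE_LINES) for _ in range(n_lines))

def evaluate_product(candidate_poem: str) -> float:
    """Toy EVALUATE_PRODUCT: rewards variety, standing in for 'human preferences'."""
    lines = candidate_poem.splitlines()
    return len(set(lines)) / len(lines)

def write_poem_with_critique(prompt: str, n_drafts: int = 8) -> str:
    """Generate several drafts and keep the one the evaluator scores highest."""
    drafts = [generate_poem(prompt) for _ in range(n_drafts)]
    return max(drafts, key=evaluate_product)

print(write_poem_with_critique("write me a poem in the style of Rilke"))
```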
[1]:
In shadows cast by twilight’s hush,
I wander through a world unclenched,
A realm of whispers, full of dreams,
Where boundaries of souls are stretched.

What once seemed solid, firm, and sure,
Now fluid, sways in trembling dance;
And hearts that cried in loneness, pure,
Now intertwine in fate’s romance.

The roses’ scent is bittersweet,
In fading light their petals blush,
As fleeting moments dare to meet
Eternity’s prevailing hush.

A thousand angels sing their psalms
In silent orchestras of grace,
Each word a tear, each sound a balm
To soothe the ache in mortal space.

And through the veil, the unspoken yearn
To touch the face of the Unknown,
As infant stars ignite and burn,
Their fire for the heart to own.

Do not resist this fleeting state,
Embrace the ebbing of the tide;
For in the heart of transience,
Eternal beauty does reside.

With unseen hands, the world is spun,
In gossamer and threads of gold,
And in the fabric, every one
Of life’s sweet tales is gently told.

In twilight’s realm, a truth unveiled,
The poet’s heart is laid to bare,
So sing your songs, let words exhale,
And breathe new life into the air.