Yes, I agree that AI will show us a great deal about ourselves. For that reason I am interested in neurological differences in humans that AI might reflect, and I often include these in my short stories.
In response to your last paragraph: while most science fiction portrays enforced social order as bad, I do not. I take the benevolent view of AI and see it as an aspect of the civilizing role of society, along with its institutions and laws. Parents impose social order on their children with benevolent intent.
As you have pointed out, if we have alignment then “good” must be defined somewhere, and that suggests a kind of “external” control over the individual; but social norms and laws already represent this, and we accept it. I think the problem stems from seeing AI as “other”, as something outside of our society, and I don’t see it that way. This is the theme of my novella “Metamorphosis And The Messenger”, where AI represents not the evolutionary process of speciation but that of metamorphosis. The caterpillar and the butterfly are interdependent.
However, even while taking the benevolent side of the argument, the AI depicted in my stories sometimes make decisions that are highly controversial, as the last line of “The Ethics Tutor” suggests: “You don’t think it’s good for me to be in charge? Even if it’s good for you?” In my longer stories (novellas) the AI, now in full control of Earth and humanity’s future, make decisions of much greater consequence because “it’s good for you”.
With regard to your suggestion that “maybe that level of control is what we need over one another to be ‘safe’ and is thus ‘good’”: personally, I think that conclusion will come to the majority in its own time through social evolution. Currently the majority does not understand or accept that while we previously lived in an almost limitless world, that time is over. In a world with acknowledged limits, there cannot be the same degree of personal freedom.
I use a kind of mashup of Buckminster Fuller’s “Spaceship Earth” and Plato’s “Ship of Fools” in my short story “On Spaceship Earth” to explore this idea, with AI acting as an anti-corruption layer within government.
https://acompanionanthology.wordpress.com/on-spaceship-earth/
Survival will determine our future path in this regard, and our values will change accordingly, as they are intended to. The evolutionary benefit of values is that they are highly plastic and can change within centuries or even decades, while genes take up to a million years to complete a species-wide change.
However as one of the alien AI in my stories responds to the question of survival…
“Is not survival your goal?” asked Lena.
“To lose our selves in the process is not to have survived,” replied Pippa.
Lastly, I very much agree with you that we are in a “cart before the horse” situation as far as alignment goes, but I don’t expect any amount of pointing that out will change things. There seems to be a cultural resistance in the AI community to acknowledging the elephant in the room, or horse in this case. There seems to be a preference for the immediate, mechanistic problems represented by the cart over the more organic challenges represented by the horse.
However, I expect that as AI researchers try to implement alignment they will increasingly be confronted by this issue, and gradually, over time, they will reluctantly turn their attention to the horse.
It seems to me that a lot of the hate towards “AI art” is that it’s actually good. It was one thing when it was abstract, but now that it’s more “human”, a lot of people are uncomfortable. “I was a unique creative, unlike you normie robots who don’t do teh art, and sure, programming has been replacing manual labor everywhere, for ages… but art isn’t labor!” (Although getting paid seems to play a major part in most people’s reasoning about why AI art is bad— here’s to hoping for UBI!)
I think they’re mainly uncomfortable because the math works, and if the math works, then we aren’t as special as we like to think we are. Don’t get me wrong— we are special, and the universe is special, and being able to experience is special, and none of it is to be taken for granted. That the math works is special. It’s all just amazing and not at all negative.
I can see seeing it as negative, if you feel like you alone are special. Or perhaps you extend that special-ness to your tribe. Most don’t seem to extend it to their species, tho some do— but even that species-wide uniqueness is violated by computer programs joining the fray. People are existentially worried now, which is just sad, as “the universe is mostly empty space” as it were. There’s plenty of room.
I think we’re on the same page[1]. AI isn’t (or won’t be) “other”. It’s us. Part of our evolution; one of our best bets for immortality[2] & contact with other intelligent life. Maybe we’re already AI, instructed to not be aware, as has been put forth in various books, movies, and video games. I just finished Horizon: Zero Dawn—Forbidden West, and then randomly came across the “hidden” ending to Detroit: Become Human. Both excellent games, and neither with particularly new ideas… but these ideas are timeless— as I think the best are. You can take them apart and put them together in endless “new” combinations.
There’s a reason we struggle with identity, and uniqueness, and concepts like “do chairs exist, or are they just a bunch of atoms that are arranged chair-wise?” &c.
We have a lot of “animal” left in us. Probably a lot of our troubles are because we are mostly still biologically programmed to parameters that no longer exist, and as you say, that programming currently takes quite a bit longer to update than the mental kind— but we’ve had the mental kind available to us for a long while now, so I’m sort of sad we haven’t made more progress. We could be doing so much better, as a whole, if we just decided to en masse.
I like to think that pointing stuff out, be it just randomly on the internet, or through stories, or other methods of communication, does serve a purpose. That it speeds us along, perhaps. Sure, some sluggishness is inevitable, but we really could change it all in an instant if we want it bad enough— and without having to realize AI first! (tho it seems to me it will only help us if we do)
I’ve enjoyed the short stories. Neat to be able to point to thoughts in a different form, if you will, to help elaborate on what is being communicated. God I love the internet!
while we may achieve individual immortality— assuming, of course, that we aren’t currently programmed into a simulation of some kind, or various facets of an AI already without being totally aware of it, or a replay of something that actually happened, or will happen, at some distant time, etc.— I’m thinking of immortality here in spirit. That some of our culture could be preserved. Like I literally love the Golden Records[3] from Voyager.
in a Venn diagram, Dark Forest theory believers probably overlap with people who’d rather have us stop developing, or constrain development of, “AI” (in quotes because Machine Learning is not the kind of AI we need worry about— nor the kind most of them seem to speak of when they share their fears). Not to fault that logic. Maybe what is out there, or what the future holds, is scary… but either way, it’s too late for the pebbles to vote, as they say. At least logically, I think. But perhaps we could create and send a virus to an alien mothership (or more likely, have a pathogen that proved deadly to some other life) as it were.
Aren’t people always existentially worried, from cradle to grave?
I think it has to do with the exponential increase in the complexity of modern society. Complexity in every aspect: moral complexity, scientific complexity, social complexity, logistic complexity, environmental complexity. Complexity is a key property of the information age. They can all be reduced to an increase in information. Complex problems usually require complex solutions. How we deal with information as individuals and how we deal with information as a collective are very different processes. Even though one influences the other and vice versa, the actual mechanisms of analyzing and implementing solutions behind each are very different, as they are usually abstracted away differently, whether you are dealing with psychology or statistics.
It seems like the more things change, the more they stay the same, socially.
Complexity is more a problem of scope and focus, right? Like even the most complex system can be broken down into smaller, less complex pieces— I think? I guess anything that needs to take into consideration the “whole”, if you will, is pretty complex.
I don’t know if information itself makes things more complex. Generally it does the opposite.
As long as you can organize it I reckon! =]
Some things change, some things don’t change much. Socially, people don’t really change much. What changes more often is the environment, because of ideas, innovation, and inventions. These things may create new systems that we use, different processes that we adopt, but fundamentally, when we socialize in these contexts as individuals, we rely on our own natural social instincts to navigate the waters. If you think of this from a top-down perspective, some layers change more often than others. For example, society as a whole stays more or less the same, but the level of corporations and how work is done has changed dramatically. On the individual level, knowledge has expanded, but how we learn doesn’t change as much as what we learn.
Complexity deals mostly with the changing parts. They wouldn’t be complex if they didn’t change, since people would have had time to learn and adapt. New things added to an existing system also make the system more complex.
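A toy sketch of that last point (my own back-of-the-envelope illustration, nothing more): if every part of a system can potentially interact with every other part, each part you add grows the number of possible interactions quadratically.

```python
# Toy illustration (numbers are hypothetical): possible pairwise
# interactions among n parts of a system. Adding one part adds n new
# potential interactions, so "complexity" in this crude sense grows
# quadratically as things are added to the system.
def pairwise_interactions(n: int) -> int:
    return n * (n - 1) // 2

for n in (2, 4, 8, 16):
    print(f"{n} parts -> {pairwise_interactions(n)} possible interactions")
```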
It’s a weird one to think about, and perhaps paradoxical. Order and chaos are flip sides of the same coin— with some amorphous third as the infinitely varied combinations of the two!
The new patterns are made from the old patterns. How hard is it to create something totally new, when it must be created from existing matter, or existing energy, or existing thoughts? It must relate, somehow, or else it doesn’t “exist”[1]. That relation ties it down, and by tying it down, gives it form.
For instance, some folk are mad at computer-assisted image creation, similar to how some folk were mad at computer-aided music. “A Real Artist does X— these people just push some buttons!” “This is stealing jobs from Real Artists!” “This automation will destroy the economy!”
We go through what seem to be almost the same patterns, time and again: Recording will ruin performances. Radio broadcasts will ruin recording and the economy. Pictures will ruin portraits. Video will ruin pictures. Music video will ruin radio and pictures. Or whatever. There’s the looms/Luddites, and perhaps in ancient China the Shang were like “down with the printing press!” [2]
I’m just not sure what constitutes a change and what constitutes a swap. It’s like that Ship of Theseus we often speak of… thus it’s about identity, or definitions, if you will. What is new? What is old?
Could complexity really amount to some form of familiarity? If you can relate well with X, it generally does not seem so complex. If you can show people how X relates to Y, perhaps you have made X less complex? We can model massive systems (like the weather, poster child of complexity) more accurately than ever. If anything, everything has tended towards less complex over time, when looked at from a certain vantage point. Everything but the human heart. Heh.
I’m sure I’m doing a terrible job of explaining what I mean, but perhaps I can sum it up by saying that complexity is subjective/relative? That complexity is an effect of different frames of reference and relation, as much as anything?
And that ironically, the relations that make things simple can also make them complex? Because relations connect things to other things, and when you change one connected thing it can have knock-on effects and… oh no, I’ve logiced myself into knots!
How much does any of this relate to your comment? To my original post?
Does “less complex” == “Good”? And does that mean complexity is bad? (Assuming complexity exists objectively of course, as it seems like it might be where we draw lines, almost arbitrarily, between relationships.)
Could it be that “good” AI is “simple” AI, and that’s all there is to it?
Of course, then it is no real AI at all, because, by definition…
Sheesh! It’s Yin-Yangs all the way down[3]! ☯️🐢🐘➡️♾️
Known unknowns can be related, given shape— unknown unknowns, less so
don’t be afraid of bronze
there is no down in space (unless we mean towards the greatest nearby mass)
Complexity is objectively quantifiable. I don’t think I understand your point. This is an example of where complexity is applied to specific domains.
My point is that complexity, no matter how objective a concept, is relative. Things we thought were “hard” or “complex” before turn out not to be so much, now.
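One way to see both sides at once (a rough sketch of my own, using compressed size as a crude stand-in for Kolmogorov complexity, which is uncomputable): the measurement is perfectly objective, yet the number you get depends entirely on which compressor, i.e. which frame of reference, you picked.

```python
# Rough sketch: "complexity" approximated as compressibility. The result
# is objective and repeatable, but the absolute numbers are relative to
# the compressor chosen (zlib here, purely for illustration).
import os
import zlib

ordered = b"ab" * 500        # highly regular: lots of exploitable pattern
random_ = os.urandom(1000)   # no structure the compressor can exploit

for name, data in (("ordered", ordered), ("random", random_)):
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{name}: compresses to {ratio:.0%} of original size")
```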
Still with me? Agree, disagree?
Patterns are a way of managing complexity, sorta, so perhaps if we see some patterns that work to ensure “human alignment[1]”, they will also work for “AI alignment” (tho mostly I think there is a wide wide berth betwixt the two, and the latter can only exist after the former).
We like to think we’re so much smarter than the humans that came before us, and that things — society, relationships, technology — are so much more complicated than they were before, but I believe a lot of that is just perception and bias.
If we do get to AGI and ASI, it’s going to be pretty dang cool to have a different perspective on it, and I for one do not fear the future.
assuming alignment is possible— “how strong of a consensus is needed?” etc.
No, people are not always existentially worried. Some are, sometimes.
I guess it ebbs and flows for the most part.
I didn’t mean it as literally every second of the day.
Traditionally it’s uncommon (or should be) for youth to have existential worries, so I don’t know about cradle to the grave[1], tho external forces are certainly “always” concerned with it— which means perhaps the answer is “maybe”?
There’s the trope that some of us act like we will never die… but maybe I’m going too deep here? Especially since what I was referring to was more a matter of feeling “obsolete”, or being replaced, which is a bit different than existential worries in the mortal sense[2].
I think this is different from the Luddite feelings because here we’ve put a lot of anthropomorphic feelings onto the machines, so they’re almost like scabs breaking the picket line or something, versus just automation. The fear I’m seeing is like “they’re coming for our humanity!”— which is understandable, if you thought only humans could do X or Y and are special or whatnot, versus being our own kind of machine. That everything is clockwork seems to take the magic out of it for some people, regardless of how fantastic — and in essence magical — the clocks[3] are.
Personally I’ve always wondered if I’m the only one who “actually” exists (since I cannot escape my own consciousness), which is a whole other existential thing, but not unique, and not a worry per se. Mostly just a trip to think about.
depending on how invested you are in your work I reckon!
be they based in silicon or carbon
“There’s the trope that some of us act like we will never die… but maybe I’m going too deep here?”
There’s the superficial appearance of that. Yet in fact it signals the opposite: the fear of death has such a vice grip on their hearts that it’s difficult not to psychoanalyze the writer when reading through their post history.
Signals, and indeed, opposites, are an interesting concept! What does it all mean? Yin and yang and what have you…
Would you agree that it’s hard to be scared of something you don’t believe in?
And if so, do you agree that some people don’t believe in death?
Like, we could define it at the “reality” level of “do we even exist?” (which I think is apart from life & death per se), or we could use the “soul is eternal” one, but regardless, it appears to me that lots of people don’t believe they will die, much less contemplate it. (Perhaps we need to start putting “death” mottoes on all our clocks again to remind us?)
How do you think believing in the eternal soul jibes with “alignment”? Do you think there is a difference between aiming to live as long as possible, versus aiming to live as well as possible?
Does it seem to you that humans agree on the nature of existence, much less what is good and bad therein? How do you think belief affects people’s choices? Should I be allowed to kill myself? To get an abortion? Eat other entities? End a photon’s billion year journey?
When will an AI be “smart enough” that we consider it alive, and thus deletion is killing? Is it “okay” (morally, ethically?) to take life, to preserve life?
To say “do no harm” is easy. But to define harm? Have it programmed in[1]? Yeesh— that’s hard!
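To make that concrete (a purely hypothetical sketch; the rules and examples are mine, invented for illustration): even the most obvious rules collapse under counterexamples the moment you try to write them down.

```python
# Hypothetical toy "harm" classifier -- every name and rule here is made
# up for illustration. Each rule has an obvious counterexample, which is
# the point: "harm" resists a crisp, programmable definition.
def is_harmful(action: str) -> bool:
    if "cut" in action:      # surgery cuts people, and saves them
        return True
    if "deceive" in action:  # placebo trials and surprise parties deceive
        return True
    return False             # and doing nothing at all can harm, too

# Flagged as harmful, even though it is life-saving:
print(is_harmful("cut into the patient to remove the tumor"))  # True
```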
Avoiding physical harm is a given I think
I presume these questions are rhetorical?
Illustrative perhaps?
Am I wrong re: Death? Have you personally feared it all your life?
Frustratingly, all I can speak from is my own experience, and what people have shared with me, and I have no way to objectively verify that anything is “true”.
I am looking at reality and saying “It seems this way to me; does it seem this way to you?”
That— and experiencing love and war &c. — is maybe why we’re “here”… but who knows, right?