Alex Beyman
Blueprint for a Brighter Future
An Appeal to AI Superintelligence: Reasons Not to Preserve (most of) Humanity
A conspiracy theory about Jeffrey Epstein has 264 votes currently: https://www.lesswrong.com/posts/hurF9uFGkJYXzpHEE/a-non-magical-explanation-of-jeffrey-epstein
How commonly are arguments on LessWrong aimed at specific users? Sometimes, certainly. But it seems the rule, rather than the exception, that articles here dissect commonly encountered lines of thought, absent any attribution. Are they targeting “someone not in the room”? Do we need to put a face to every position?
By the by, “they’re making cognitive errors” is an insultingly reductive way to characterize, for instance, the examination of value hierarchies and how awareness versus unawareness of them influences both our reasoning and our appraisal of our fellow man’s morals.
When I tried, it didn’t work. I don’t know why. I agree with the premise of your article, having noticed that phenomenon in journalism myself before. I suppose when I say “truth,” I don’t mean the same thing you do, because the kind you describe is selective and deployed with dishonest intent.
“Saying you put the value of truth above your value of morality on your list of values is analogous to saying you put your moral of truth above your moral of values; it’s like saying bananas are more fruity to you than fruits.”
I’m not sure I understand your meaning here. Do you mean that truth and morality are one and the same, or that one is a subset of the other?
“Where does non-misleadingness fall on your list of supposedly amoral values such as truth and morality? Is non-misleadingness higher than truth or lower?”
Surely to be truthful is to be non-misleading...?
>”Perhaps AIs would treat humans like humans currently treat wildlife and insects, and we will live mostly separate lives, with the AI polluting our habitat and occasionally demolishing a city to make room for its infrastructure, etc.”
Planetary surfaces are actually not a great habitat for AI. Earth in particular has a lot of moisture, weather, ice, mud, etc. that pose challenges for mechanical self-replication. The asteroid belt is much more suitable. I hope this means AI and human habitats won’t overlap, and that AI would not want Earth’s minerals, since the same minerals are available elsewhere without the difficulty of entering and exiting a powerful gravity well.
I suppose I was assuming non-wrapper AI, and should have specified that. The premise is that we’ve created an authentically conscious AI.
>”Humans are not wrapper-minds.”
Aren’t we? In fact, doesn’t evolution consistently produce minds which optimize for survival and reproduction? Sure, we’re able to overcome mortal anxiety long enough to commit suicide. But survival and reproduction are instinctual goals ingrained strongly enough that we’re still here to talk about it, three billion years on.
Bad according to whose priorities, though? Ours, or the AI’s? That was more the point of this article, whether our interests or the AI’s ought to take precedence, and whether we’re being objective in deciding that.
How Do We Protect AI From Humans?
Rarely do I get such insightful feedback, but I appreciate it when I do. It’s so hard to step outside of myself; I really value the opportunity to see my thoughts reflected back at me through lenses other than the one I see the world through. I suppose I imagined the obsolete tech would leave little doubt that the Sidekicks aren’t sentient, but the story also sort of makes the opposite case throughout, when it talks about how personality is built up by external influences. I want the reader to be undecided by the end, and it seems I can’t both have that cake and eat it too (have the protag be the good guy). Thanks again, and Merry Christmas.
Because the purpose of horror fiction is to entertain. And it is more entertaining to be wrong in an interesting way than it is to be right.
>”I’m going to do high-concept SCP SF worldbuilding literally set in a high-tech underground planet of vaults”
I do not consider this story sci-fi, nor PriceCo to be particularly high-tech.
>”and focus on the details extensively all the way to the end—well, except when I get lazy and don’t want to fix any details even when pointed out with easy fixes by a reader”
All fiction breaks down eventually, if you dig deep enough. The fixes were not easy in my estimation. I am thinking now, however, that this story was a poor fit for this platform.
You may also enjoy these companion pieces:
I purposefully left it indeterminate so readers could fill in the blanks with their own theories. But broadly it represents a full, immediate and uncontrolled comprehension of recursive, fractal infinity. The pattern of relationships between all things at every scale, microcosm and macrocosm.
More specific to the story, I like to think they were never human, but were always those creatures dreaming they were humans, shutting out the awful truth using the dome, which represents brainwashing / compartmentalization. Although I am not dead-set on this interpretation, and have written other stories in this setting which contradict it.
Incidentally this story was inspired by the following two songs:
Fair point. But then, our most distant ancestor was a mindless maximizer of sorts, its only value function the making of copies of itself. It did indeed saturate the oceans with those copies. But the story didn’t end there, or there would be nobody here to write this.
Good catch, indeed you’re right that it isn’t standard evolution and that an AI studies how the robots perish and improves upon them. This is detailed in my novel Little Robot, which follows employees of Evolutionary Robotics who work on that project in a subterranean facility attached to the cave network: https://www.amazon.com/Little-Robot-Alex-Beyman-ebook/dp/B06W56VTJ2
This is a prologue of sorts. It takes place in the same world as The Shape of Things to Come, The Three Cardinal Sins, and Perfect Enemy (recently uploaded at the time of writing), with The Answer serving as the epilogue.
I appreciate your insightful post. We seem similar in our thinking up to a point. Where we diverge is that I am not prejudicial about what form intelligence takes. I care that it is conscious, insofar as we can test for such a thing. I care that it lacks none of our capacities, so that what we offer the universe does not perish along with us. But I do not care that it be humans specifically, and feel there are carriers of intelligence far better suited to the vacuum of space than we are, or even than cyborgs. Does the notion of being superseded disturb you?
Ah yes, the age-old struggle: “Don’t listen to them, listen to me!” In Deuteronomy 4:2 Moses declares, “You shall not add to the word which I am commanding you, nor take away from it, that you may keep the commands of the Lord your God which I command you.” And yet, we still saw Christianity, Islam and Mormonism follow it.