Even if he threw out the data, I have recurring storage snapshots happening behind the scenes (on the backing store for the OSes involved).
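For the curious, recurring snapshots like that can be as simple as a scheduled job. A minimal sketch, assuming a ZFS-backed store (the dataset name is made up for illustration; the actual setup may differ):

```python
#!/usr/bin/env python3
# Hypothetical sketch: take a timestamped snapshot of the backing store.
# Assumes a ZFS dataset; run it from cron or a systemd timer.
import subprocess
from datetime import datetime, timezone

DATASET = "tank/vm-backing-store"  # hypothetical dataset name

stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
subprocess.run(["zfs", "snapshot", f"{DATASET}@auto-{stamp}"], check=True)
```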
Logos01
Do you have any good evidence that this assertion applies to Cephalopods?
Cephalopods in general have actually been shown to be rather intelligent. Some species of squid even engage in courtship rituals. Given that they engage in courtship and predator/prey behavior, and have been shown to respond to simple irritants with aggression, there’s no good reason to assume they do not experience at the very least the emotions of lust, fear, and anger.
(Note: I model “animal intelligence” in terms of emotional responses; while these can often be very sophisticated, such intelligence lacks abstract reasoning. Some animals are intelligent beyond this ‘simple’ animal intelligence, but they are the exception rather than the norm.)
Be comfortable in uncertainty.
Do whatever the better version of yourself would do.
Simplify the unnecessary.
Dual N-Back browser-based “game” in public alpha-testing state.
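(For anyone unfamiliar with the task, here is an illustrative sketch of the core n-back rule; the “dual” variant runs two such streams at once. This is not the game’s actual code.)

```python
# Illustrative n-back rule: a trial is a "hit" on a stream when the current
# stimulus matches the one presented n trials earlier. Dual n-back runs two
# such streams simultaneously (e.g., grid position and spoken letter).
def nback_hits(stream, n=2):
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

positions = [3, 1, 3, 1, 7, 1]            # visual stream: grid cells
letters = ["C", "A", "C", "B", "A", "B"]  # audio stream: spoken letters

print(nback_hits(positions))  # [2, 3, 5]
print(nback_hits(letters))    # [2, 5]
```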
Now imagine a “more realistic” setting where humans went through a singularity (and, possibly, coexist with AIs). If the singularity was friendly, then this is a utopia which, by definition, has no conflict.
There is Friendliness and there is Friendliness. Note: Ambivalence or even bemused antagonism would qualify as Friendliness so long as humans were still able to determine their own personal courses of development and progress.
An AGI that had as its sole ambition the prevention of other AGIs and unFriendly scenarios would actually allow a lot of what passes for bad science fiction in most space operas. AI cores on ships that can understand human language but don’t qualify as fully sentient (because the real AGI is gutting their intellects); androids that are fully humanoid and perhaps even sentient but haven’t any clue why that is so (because you could rebuild human-like cognitive faculties by black-box reverse-engineering, but if you actually knew what was going on in the parts, you would have that information purged...) and so on.
And yet this would qualify as Friendly; human society and ingenuity would continue.
“If it weren’t for my horse, I never would’ve graduated college.” >_<
An omnipotent, omnibenevolent being would have no need for such “shorthand” tricks to create infinite worlds without suffering. Yes, you could always raise another aleph level for greater infinities, but only by introducing suffering.
Which violates omnibenevolence.
I don’t buy it. A superhuman intelligence with unlimited power and infinite planning time and resources could create a world without suffering even without violating free will. And yet we have cancer and people raping children.
I am thiiiiiiiiis confident!
I’m surprised to see this dialogue make so little mention of the material evidence at hand with regard to the specific claims of Christianity. I mean: a god which was omnipotent and omnibenevolent would surely create a world with less suffering for humanity than what we conjecture an FAI would orchestrate, yes? Color me old-fashioned, but I assign the logically impossible a zero probability (barring, of course, my being mistaken about logical impossibilities).
but then changes its mind and brings us back as a simulation.”
This is commonly referred to as a “counterfactual” AGI.
Indeed. Which is why happiness is not a terminal value.
Yes, they do. And that’s the end of this dialogue.
(EDIT: By end of this dialogue I meant that he and I were at an impasse and unable to adjust our underlying assumptions to a coherent agreement in this discussion. They are too fundamentally divergent for “Aumanning.”)
Indeed. But they do demonstrate the principle in question.
Actually it’s more complicated than that. It’s not just water atoms: over time your genetic pattern changes, the ratio of cancerous to non-cancerous cells shifts, the ratio of senescent to non-senescent cells shifts, and the physical structure of the brain itself changes.
Neurogenesis does occur in adults, so not even on a cellular level is your brain the same today as it was yesterday.
Furthermore: what makes you confident you are not already in a Matrix? I have no such belief, myself. Given that physics simulations work, it seems too implausible that we are in the parent of all universes.
Missed that about the class. Makes a difference, definitely.
I’m not really sure what non-local phenomena are [...]
Two options: trust the assertions of those who are sure, or learn of them for yourself. :)
1 vs. 2: is your “meat” persistent over time? (It is not.)
2 vs. 3: the two are not differentiable; 2 is 3.
4 is implied by 2 and 3. It is affirmed by physics simulations with atomic-level precision, and by research like the Blue Brain project.
5 is excluded by the absence of non-local phenomena (“psychic powers”).
A change of substrate already occurs daily for you; it just stays within a similar class. What, beyond a simple “yuck factor,” gives you cause to believe that a transition from cells to silicon would impact your identity, or that it would be any different?
Scientific truths include measurements of the net harm to society of any given action, which then inform utilitarian consequentialist morals. (“It’s unjust to execute anyone. Ever.”)
Scientific truths include observations of what occurs “in nature,” which then inform naturalistic morals. (“It’s not natural to be gay/left-handed/brilliant.”)
Scientific truths include observations about the role morality plays in those species we can observe to possess it, thereby informing us practically about what actions or inactions or rules would best optimize that function. (Observing apes and other primates or pack animals to derive a functional analysis of how morality impacts our social coherence and so on.)
I have long argued that morality needn’t be absolute in order to be objective. Moral relativism and moral objectivism may be standard terms, but I assert they are not as incompatible as is routinely claimed.
We needn’t know what is perfectly moral to know objectively what is less moral.
As I often say: you are not your meat. You are the unique pattern of information-flow that occurs within your meat. The meat is not necessary to the information, but the information does require a substrate.
The software needs a way to track who is responding to which questions, because many of the questions relate to one another. It does that without requiring logins by using the ongoing HTTP session. If you leave the survey idle, the session will time out. You can suspend a survey session by creating a login, which it will then use for your answers.
The cookies are there because it’s not a single server; requests are load-balanced across multiple web servers (a multi-active HA architecture). This survey isn’t necessarily the only thing these servers will ever be running.
(I didn’t write the software but I am providing the physical hosting it’s running on.)
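A rough illustration of the pattern described above, assuming a Flask-style approach (this is not the survey’s actual code, and every name here is hypothetical): a server-side session keyed to a signed cookie, with an idle timeout.

```python
# Minimal sketch (not the survey's actual code) of cookie-backed HTTP
# sessions that tie related answers to one anonymous respondent.
from datetime import timedelta
from flask import Flask, request, session

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"           # signs the session cookie
app.permanent_session_lifetime = timedelta(minutes=30)  # idle sessions time out

@app.route("/answer", methods=["POST"])
def answer():
    session.permanent = True  # apply the idle-timeout lifetime set above
    answers = session.setdefault("answers", {})
    answers[request.form["question_id"]] = request.form["value"]
    session.modified = True   # mutating a nested dict needs an explicit flag
    return "recorded"
```

Because Flask’s default session state rides in the signed cookie itself, any of the load-balanced web servers can handle the next request, so long as they share the same secret.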