Related: The St. Petersburg Paradox
What about a large look-up table that mapped conversation so far → what to say next and was able to pass the Turing test? This program would have all the external signs of consciousness, but would you really describe it as a conscious being in the same way that you are?
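For concreteness, here is a minimal sketch (in Python, with hypothetical table entries) of the kind of program meant: every possible conversation prefix maps to a canned reply, so the program exhibits conversational behavior without any runtime reasoning.

```python
# A "Blockhead"-style chatbot: a pure lookup table from the conversation
# so far to the next reply. All apparent intelligence lives in the table;
# nothing is computed at runtime beyond an exact-match lookup.
# (The entries below are hypothetical; a table covering every possible
# human-length conversation would be astronomically large.)

TABLE: dict[tuple[str, ...], str] = {
    (): "Hello! How can I help you?",
    ("Hello! How can I help you?", "What's 2 + 2?"): "Four.",
    # ... one entry for every possible conversation prefix ...
}

def reply(history: tuple[str, ...]) -> str:
    """Return the scripted next utterance for this exact conversation."""
    return TABLE.get(history, "I'm not sure what to say.")

print(reply(()))  # -> "Hello! How can I help you?"
```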
Unless the conscious algorithm in question will experience states that are not valence-neutral, I see no issue with creating or destroying instances of it. The same applies to any other type of consciousness. It seems implausible to me that any of our known AI architectures could instantiate such non-neutral valences, even if they do seem plausibly able to instantiate other kinds of experiences (e.g. geometric impressions).
I’d love to hear why anthropic reasoning made such a big difference to your prediction-market forecast. EDIT: Never mind. Well played.
Quick note on the Ponzo illusion: In my view, seeing the top bar as longer is actually a more primitive, fundamental observation. The idea that the bars ought to appear as the same length is an additional interpretative layer thrown on top of this, justified by geometric principles and theories about human visual perception. The direct (or “raw”) observation, however, is that the top bar appears longer.
Question: anyone know of some work on the connection between anthropic paradoxes and illusionism? (I couldn’t figure out how to make a “Question” type post.)
Shiroe’s Shortform
What does “no indication” mean in this context? Can you translate that into probability speak?
Yes. Rogue AGI is scary, but I’m far more concerned about human misuse of AGI. Though in the end, there may not be that much of a distinction.
There’s a big difference between teleology … and teleonomy
I disagree. Any “purposes” are limited to the mind of a beholder. Otherwise, you’ll be joining the camp of the child who thinks that a teddy bear falls to the ground because it wants to.
Work to offer the solutions and let them make their own, informed choice.
The problem is that the bureaucrats who decide whether gene drives are allowed aren’t the same people who are dying from malaria. Every day the eradication of malaria is postponed while you try to convince bureaucrats, over a thousand people die from the disease. Many of them are infants, and most had no ability to meaningfully affect their political situation.
I guess it is logically coherent that a bean sprout could have values of its own. But what would it mean for a bean sprout to value something?
You might say, its evolutionary teleology is what it values. But it’s only in your human mind that there is such a thing as that teleology, an idea your mind created to help it understand the world. By adopting such a non-sentientist view, your brain hasn’t stepped down from its old tyranny, but only replaced one of its notions with a more egalitarian-sounding one. This pleases your brain, but the bean sprout had no say.
It may help to link to this for context.
Also, what is your impression of Stop Gene Drives? Do their arguments about risks to humans seem in good faith, or is “humans don’t deserve to play god!” more like their real motive?
It’s true that it could set a bad precedent. But it could also set a bad precedent to normalize letting millions of people die horribly just to avoid setting a bad precedent. It’s not immediately clear to me which is worse in the very long run.
Something I didn’t see mentioned: is there any concern that a sudden elimination of malaria could cause a population surge, with cascading food shortage effects? I have no idea how population dynamics work, so it’s non-obvious to me whether there’s a potential problem there. Even if so, though, that still wouldn’t be an argument to not do the gene drive, but just to make the appropriate preparations beforehand.
First I thought it was a computer chip, then I thought it was Factorio.
How should this affect one’s decision to specialize in UI design versus other areas of software engineering? Will there be fewer GUIs in the future, or will the “audience” simply cease to be humans?
Personhood is a separate concept. Animals that lack a conception of personal identity may still have first-person experiences, like pain and fear. Boltzmann brains supposedly can instantiate brief moments of first-person experience, but they lack personhood.
The phrase “first person” is a metaphor borrowed from the grammatical “first person” in language.
“We ran the experiment of email being a truly open P2P protocol… That experiment failed” (@patio11)
I must be missing something here. How does this fit in with the rest of the tweets in that list?
Does anyone know of work dealing with the interaction between anthropic reasoning and illusionism/eliminativism?