Dagon
Well, there are possible outcomes that make resources per human literally infinite (the ones where the number of humans goes to zero). They’re not great either, by my preferences.
In less extreme cases, a lot depends on your definition of “poverty”, and the weight you put on relative poverty vs absolute poverty. Already in most parts of the world the literal starvation rate is extremely low. It can get lower, and probably will in a “useful AI” or “aligned AGI” world. A lot of capabilities and technologies have already moved from “wealthy only” to “almost everyone, including technically impoverished people”, and this can easily continue.
There’s a wide range of techniques and behaviors that can be called “hypnosis”, and an even wider range of what can be called “a real thing, right?”. Things in the realm of hypnosis (meditation, guided meditation, self-hypnosis, daily affirmations, etc.) have plenty of anecdotal support from adherents, and not a lot of RCTs or formal evidence about whom it will work for and whom it won’t.
There’s a TON of self-help and descriptive writing on the topics of meditation and self-hypnosis. For many people, daily affirmations seem to be somewhat effective in changing their attitude over time. For many, a therapist or guide may be helpful in setting up and framing the hypnosis.
What does “unsafe” mean for this prediction/wager? I don’t expect the murder rate to go up very much, nor life expectancy to reverse its upward trend. “Erosion of rights” is pretty general and needs more specifics to have any idea what changes are relevant.
I think things will get a little tougher and less pleasant for some minorities, both cultural and skin-color. There will be a return of some amount of discrimination and persecution. Probably not as harsh as it was in the 70s-90s, certainly not as bad as earlier than that, but worse than the last decade. It’ll probably FEEL terrible, because it was on such a good trend recently, and the reversal (temporary and shallow, I hope) will dash hopes of the direction being strictly monotonic.
This seems like a story that’s unsupported by any evidence, and no better than fiction.
They could have fought over resources in a scramble of each against all, but anarchy isn’t stable.
This seems most likely, and “stable” isn’t a filter in this situation − 1⁄3 of the population will die, nothing is stable. It wouldn’t really be “each against all”, but “small (usually family) coalitions against some of the other small-ish coalitions”. The optimal size of coalition will depend on a lot of factors, including ease of defection and strength of non-economic bonds between members.
If you could greatly help her at small cost, you should do so.
This needs to be quantified to determine whether or not I agree. In most cases I imagine (and a few I’ve experienced), I would (and did) kill the animal to end its suffering and to prevent harm to others if the animal might be subject to death throes or other violent reactions to their fear and pain.
In other cases I imagine, I’d walk away or drive on, without a second thought. Neither the benefits nor the costs are simple, linear, measurable things.
Her suffering is bad.
I don’t have an operational definition of “bad”. I prefer less suffering, all else equal. All else is never equal—I don’t know what alternatives and what suffering (or reduced joy) any given remediation would require, and only really try to estimate them when faced with a specific case.
For the aggregate case, I don’t buy into a simple or linear aggregation of suffering (or of joy or of net value of distinct parts of the universe). I care about myself perhaps two dozen orders of magnitude more than the ant I killed in my kitchen this morning. And I care about a lot of things with a non-additive function—somewhere in the realm of logarithmic. I care about the quarter-million remaining gorillas, but I care about a marginal gorilla much less than 1/250K of that caring.
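To make “somewhere in the realm of logarithmic” concrete, here’s a rough back-of-envelope sketch (the scaling constant and the exact functional form are arbitrary placeholders, not a claim about my actual value function):

```python
import math

k = 1.0          # arbitrary scaling constant for "caring" (placeholder)
N = 250_000      # roughly the remaining gorilla population

total_caring = k * math.log(N)                         # logarithmic, not linear, aggregation
per_capita_share = total_caring / N                    # 1/250K of the total caring
marginal_gorilla = total_caring - k * math.log(N - 1)  # caring lost if there were one gorilla fewer

print(f"1/250K of total caring: {per_capita_share:.2e}")
print(f"marginal gorilla:       {marginal_gorilla:.2e}")
print(f"ratio:                  {marginal_gorilla / per_capita_share:.1%}")
# Under log aggregation the marginal gorilla gets roughly 1/ln(N) of the
# per-capita share, i.e. much less than 1/250K of the total caring.
```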
One challenge I’d have for you / others who feel similar to you, is to try to get more concrete on measures like this, and then to show that they have been declining.
I’ve given some thought to this over the last few decades, and have yet to find ANY satisfying measures, let alone a good set. I reject the trap of “if it’s not objective and quantitative, it’s not important”—that’s one of the underlying attitudes causing the decline.
I definitely acknowledge that my memory of the last quarter of the previous century is fuzzy and selective, and beyond that is secondhand and not-well-supported. But I also don’t deny my own experience that the people I am aware of as individuals (a tiny subset of humanity) have gotten much less hopeful and agentic over time. This may well be for reasons of media attention, but that doesn’t make it not real.
Do you think that the world is getting worse each year?
Good clarification question! My answer probably isn’t satisfying, though. “It’s complicated” (meaning: multidimensional and not ordinally comparable). On a lot of metrics, it’s better by far, for most of the distribution. On harder-to-operationally-define dimensions (sense of hope and agency for the 25th through 75th percentile of culturally normal people), it’s quite a bit worse.
would consider the end of any story a loss.
Unfortunately, now you have to solve the fractal-story problem. Is the universe one story, or does each galaxy have its own? Each planet? Continent? Human? Subpersonal individual goals/plotlines? Each cell?
I feel like you’re talking in highly absolutist terms here.
You’re correct, and I apologize for that. There are plenty of potential good outcomes where individual autonomy reverses the trend of the last ~70 years. Or where the systemic takeover plateaus at the current level, and the main change is more wealth and options for individuals. Or where AI does in fact enable many/most individual humans to make meaningful decisions and contributions where they don’t today.
I mostly want to point out that many disempowerment/dystopia failure scenarios don’t require a step-change from AI, just an acceleration of current trends.
Presumably, if The Observer has a truly wide/long view, then destruction of the Solar System, or certainly loss of all CHON-based lifeforms on earth, wouldn’t be a problem—there have got to be many other macroscopic lifeforms out there, even if The Great Filter turns out to be “nothing survives the Information Age, so nobody ever detects another lifeform”.
Also, you’re describing an Actor, not just an Observer. If it has the ability to intervene, even if it rarely chooses to do so, that’s its salient feature.
This seems like it would require either very dumb humans, or a straightforward alignment mistake risk failure, to mess up.
I think “very dumb humans” is what we have to work with. Remember, it only requires a small number of imperfectly aligned humans to ignore the warnings (or, indeed, to welcome the world the warnings describe).
a lot of people have strong low-level assumptions here that a world with lots of strong AIs must go haywire.
For myself, it seems clear that the world has ALREADY gone haywire. Individual humans have lost control of most of our lives—we interact with policies, faceless (or friendly but volition-free) workers following procedure, automated systems, etc. These systems are human-implemented, but in most cases too complex to be called human-controlled. Moloch won.
Big corporations are a form of inhuman intelligence, and their software and operations have eaten the world. AI pushes this well past a tipping point. It’s probably already irreversible without a major civilizational collapse, but it can still get … more so.
in worlds where AI systems have strong epistemics without critical large gaps, and can generally be controlled / aligned, things will be fine.
I don’t have good working definitions of “controlled/aligned” that would make this true. I don’t see any large-scale institutions or groups large and sane enough to have a reasonable CEV, so I don’t know what an AI could align with or be controlled by.
In non-trivial settings, (some but not all) structural differences between programs lead to differences in input/output behaviour, even if there is a large domain for which they are behaviourally equivalent.
I think this is a crux (of why we’re talking past each other; I don’t actually know if we have a substantive disagreement). The post was about detecting “smaller than a lookup table would support” implementations, which implied that the implementations being compared (functionally identical as tested) were actually tested over the broadest possible domain. I fully agree that “tested” and “potential” input/output pairs are not the same sets, but I assert that, in a black-box situation, the behaviour CAN be tested on a very broad set of inputs, so the distinction usually won’t matter. That said, nobody has built a pure lookup table anywhere near as complete as it would take to matter (unless the universe or my experience is simulated that way, but I’ll never know).
My narrower but stronger point is that “lookup table vs algorithm” is almost never as important as “what specific algorithm” for any question we want to predict about the black box. Oh, and almost all real-world programs are a mix of algorithm and lookup.
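As a toy illustration of the black-box point (the names and the tiny domain are made up for this sketch): a pure lookup table, a pure algorithm, and a memoized mix can all have identical input/output behaviour over the tested domain, so no amount of feeding inputs and reading outputs will tell them apart.

```python
# Three implementations with identical input/output behaviour on the tested
# domain; a black-box tester that only sees inputs and outputs cannot
# distinguish them.
DOMAIN = range(1000)

def algorithmic(x: int) -> int:
    """Pure computation, no stored answers."""
    return x * x + 1

LOOKUP = {x: x * x + 1 for x in DOMAIN}

def table_based(x: int) -> int:
    """Pure lookup table, precomputed over the whole domain."""
    return LOOKUP[x]

_cache: dict = {}

def mixed(x: int) -> int:
    """Memoized mix: table for inputs seen before, algorithm otherwise."""
    if x not in _cache:
        _cache[x] = x * x + 1
    return _cache[x]

# Exhaustive black-box test over the domain: all three agree everywhere.
assert all(algorithmic(x) == table_based(x) == mixed(x) for x in DOMAIN)
```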
might be true if you just care about input and output behaviour
Yes, that is the assumption for “some computable function” or “black box which takes in strings and spits out other strings.”
I’m not sure your example (of an AI with a much wider range of possible input/output pairs than the lookup table) fits this underlying distinction. If the input/output sets are truly identical (or even identical for all tests you can think of), then we’re back to the “why do we care” question.
I don’t exactly disagree with the methodology, but I don’t find the “why do we care” very compelling. For most practical purposes, “calculating a function” is only and exactly a very good compression algorithm for the lookup table.
Unless we care about side-effects like heat dissipation or imputed qualia, but those require distinguishing among different algorithms, not just “lookup table or no”.
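A rough sketch of the compression framing (sizes are illustrative, not exact accounting): the closed-form rule is a constant-size description of the same input/output mapping that the lookup table has to store explicitly.

```python
import sys

DOMAIN = range(1_000_000)

# Explicit lookup table: stores every (input, output) pair.
table = {x: x * x + 1 for x in DOMAIN}

# "Calculating the function": a constant-size rule generating the same pairs.
def rule(x: int) -> int:
    return x * x + 1

# Container overhead alone already shows the gap; the rule's size does not
# grow with the domain, while the table's does.
print(f"table: ~{sys.getsizeof(table) / 1e6:.0f} MB of dict overhead")
print(f"rule:  ~{sys.getsizeof(rule)} bytes of function object")

# Spot-check (sampled for speed): identical input/output behaviour either way.
assert all(table[x] == rule(x) for x in range(0, 1_000_000, 99_991))
```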
(I’m using time-sensitive words, even though we are stepping out of the spacetime of our universe for parts of this discussion.)
Maybe use different words, so as not to imply that there is a temporal, causal, or spatial relation.
Many people realize that, conceptually “below” or “before” any “base universe,” there is
I don’t realize or accept that. Anything that would be in those categories is inaccessible to our universe, and not knowable or reachable from within. Such things are literally imaginary.
“all” humans?
The vast majority of actual humans are already dead. The overwhelming majority of currently-living humans should expect 95%+ chance they’ll die in under a century.
If immortality is solved, it will only apply to “that distorted thing those humans turn into”. Note that this is something the stereotypical Victorian would understand completely—there may be biological similarities with today’s humans, but they’re culturally a different species.
When humans fall well below marginal utility compared to AIs, will their priorities matter to a system that has made them essentially obsolete?
The point behind my question is “we don’t know.” If we reason analogously to human institutions (which are made of humans, but not really made or controlled BY individual humans), we have examples in both directions. AIs have less biological drive to care about humans than humans do, but also have more training on human writings and thinking than any individual human does.
My suspicion is that it won’t take long (in historical time measure; perhaps only a few decades, but more likely centuries) for a fully-disempowered species to become mostly irrelevant. Humans will be pets, perhaps, or parasites (allowed to live because it’s easier than exterminating them). Of course, there are plenty of believable paths that are NOT “computational intelligence eclipses biology in all aspects”—it may hit a wall, it may never develop intent/desire, it may find a way to integrate with biologicals rather than remaining separate, etc. Oh, and it may be fragile enough that it dies out along with humans.
Do we have a good story about why this hasn’t already happened to humans? Systems don’t actually care about the individuals they comprise, and certainly don’t care about the individuals who are not taxpayers, selectorate, contributors, or customers.
Why do modern economies support so many non-participants? Set aside the marginal and slightly sub-marginal workers, who don’t cost much and may have option value or be useful for keeping money moving in some way; there are a lot who are clearly a drain on resources.
Specifically, “So, the islanders split into two groups and went to war.” is fiction—there’s no evidence, and it doesn’t seem particularly likely.