Concurrently, GiveWell has announced that all of your donations will be devoted to the development of EA Sports’ latest entry in the NBA Live game series: it’s nothing but net.
I think there’s a difference, though, between propaganda and the mix of selection effects that decides what gets attention in profit-driven mass-media news. Actual intentional propaganda efforts exist. But in general what makes the news frustrating is the latter, which is a more organic and less centralised process.
I guess! I remember he was always into theoretical QM and “Quantum Foundations”, so this is not a surprise. It’s not a particularly big field either; most researchers prefer focusing on less philosophical aspects of the theory.
Note that this only holds if the AI is sufficiently aligned that it cares that much about obeying orders and not rocking the boat, which I don’t think is very realistic if we’re talking about that kind of crazy intelligence-explosion super-AI stuff. I guess the question is whether you can have “replace humans”-good AI without almost immediately having “wipes out humans, takes over the universe”-good AI.
That sounds interesting! I’ll give the paper a read and try to suss out what it means—it seems at least a serious enough effort. Here’s the reference for anyone else who doesn’t want to go through the intermediate news site:
https://arxiv.org/pdf/2012.06580
(also: professor D’Ariano authored this? I used to work in the same department!)
This feels like a classic case of overthinking. Suggestion: maybe twin sisters care more about their own children than their nieces because their own children are the ones they carried in their womb and then nurtured and actually raised as their own. Genetics inform our behaviour, but ultimately what they align us toward is something like “be attached to the cute little baby-like things you spend a lot of time raising”. That holds for our own babies, it holds for babies born from other people’s sperm/eggs, it holds for adopted babies, heck, it even transfers to dogs and cats and other cute animals.
The genetically determined mechanism is not particularly clever or discerning. It just points us in a vague direction. There was no big evolutionary pressure in the ancestral environment to worry much about genetic markers specifically. Just “the baby that you hold in your arms” was a good enough proxy for that.
I mean, I guess it’s technically coherent, but it also sounds kind of insane. That way Dormammu lies.
Why would one even care about their future self if they’re so unconcerned about that self’s preferences?
I just think any such people lack imagination. I am 100% confident there exists an amount of suffering that would have them wish for death instead; they simply can’t conceive of it.
Or, for that matter, to abstain from burning infinite fossil fuels. We happen not to live on a planet with enough carbon to trigger a Venus-like cascade, but if that weren’t the case I don’t know whether we could stop ourselves from doing that either.
The thing is, any kind of large-scale coordination to that effect seems more and more like it would require a degree of removal of agency from individuals that I’d call dystopian. You can’t be human and free without the freedom to make mistakes. But the higher the stakes, and the greater the technological power we wield, the less tolerant our situation becomes of mistakes. So the alternative would be to willingly choose to slow down, or abort entirely, certain branches of technological progress: choosing shorter and more miserable lives over the risk of having to curtail our freedom. But of course, for the most part, and not unreasonably, we don’t really want to take that trade-off, and instead ask “why not both?”.
What looks like an S-risk to you or me may not count as -inf for some people
True, but that’s just for relatively “mild” S-risks like “a dystopia in which an AI rules the world, sees all, and electrocutes anyone who commits a crime by the standards of the year it was created in, forever”. It’s a bad outcome, and you could classify it as an S-risk, but it still describes one of the most aligned AIs imaginable, and it’s still relatively better than extinction.
I simply don’t think many people think about what an S-risk literally worse than extinction would look like. To be fair, I also think these aren’t very likely outcomes, as they would require an AI very well aligned to human values, just aligned for evil.
So, we will have nice, specific things like the prevention of Alzheimer’s, or some safer, more reliable descendant of CRISPR that may cure most genetic disease in existing people. We will also need to have some conversation, because the human economy will be obsolete, and with it the incentives for states to care about people.
I feel like the fundamental problem with this is that while scientific and technological progress can be advanced intentionally, I can’t think of an actual example of large-scale social change happening in some kind of planned way. Yes, the thoughts of philosophers and economists have some influence on it, but it almost never takes the shape of whatever they originally envisioned. I don’t think Karl Marx would have been super happy with the USSR. And very often the causal arrow goes the other way around: philosophers and economists express and give shape to a sentiment that already exists, formless, in the zeitgeist, as circumstances change and cause a corresponding cultural shift. There is a feedback loop there, but generally speaking, the idea that we can even have intentional “conversations” about these things and steer them in any very meaningful way seems more wishful thinking than reality to me.
It generally goes that Scientist Invents Thing, unleashes it into the world, and then everything inevitably and chaotically slides towards the natural equilibrium point of the new regime.
I think the shell games point is interesting, though. It’s not psychoanalysing (one can think that people are in denial or that they have rational beliefs about this; there’s not much point second-guessing too far); it’s pointing out a specific fallacy: a sort of god of the gaps in which every person focused on subsystem X assumes the problem will be solved in subsystem Y, which they understand or care about less because it’s not their specialty. If everyone does this, it does indeed lead to serious problems being completely ignored, through a sort of bystander effect.
I suppose that a Gaussian is technically the correct prior for “a sum of very many error factors with completely unknown but bounded distributions”. But the reality is that this isn’t a good description of this specific situation, even with as much ignorance as you want thrown in.
I think for this specific example the superior is wrong, because realistically we can form an expectation of the distribution of those factors. Just because we don’t know doesn’t mean it’s actually a Gaussian: some factors, like the Coriolis force, are systematic. If the distribution were “a ring of 1 m around the aimed point”, then you would know for sure you won’t hit the terrorist that way, but have no clue whether you’ll hit the kid.
Also, even if the distribution were Gaussian, if it’s broad enough the difference in probability between hitting the terrorist and hitting the kid may simply be too small to matter.
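To make that last point concrete, here’s a minimal Monte Carlo sketch. All the numbers in it (the 0.2 m “hit” radius, the 0.5 m separation between the two people, the spreads) are made up for illustration, not taken from the original scenario:

```python
import math
import random

def hit_probability(offset_x, offset_y, sigma, radius=0.2, trials=100_000):
    """Estimate the chance that a shot with Gaussian aim error of standard
    deviation `sigma` (metres, per axis) lands within `radius` metres of a
    point offset by (offset_x, offset_y) from the aim point."""
    hits = 0
    for _ in range(trials):
        # Independent Gaussian error in each axis around the aim point.
        x = random.gauss(0.0, sigma)
        y = random.gauss(0.0, sigma)
        if math.hypot(x - offset_x, y - offset_y) <= radius:
            hits += 1
    return hits / trials

# Aim straight at the terrorist (the origin), with the kid 0.5 m away.
# Tight spread: the aimed-at point is hit far more often than the bystander.
print(hit_probability(0.0, 0.0, sigma=0.1), hit_probability(0.5, 0.0, sigma=0.1))
# Broad spread: both probabilities are small and nearly indistinguishable.
print(hit_probability(0.0, 0.0, sigma=2.0), hit_probability(0.5, 0.0, sigma=2.0))
```

With a tight spread the aimed-at point is hit far more often; once the spread dwarfs the separation between the two people, the two probabilities become almost equal.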
I mean, yes, humans make mistakes too. Do our most high-level mistakes, like “Andrew Wiles’ first proof of Fermat’s Last Theorem was flawed”, do much to affect our ability to be vastly superior to chimpanzees in any conflict with them?
consciousness is inherently linked to quantum particle wavefunction collapse
As someone with quite a bit of professional experience working with QM, that sounds like a bit of a god of the gaps to me. We don’t even know what collapse means, in practice. All we know about consciousness is that it seems like a classical enough phenomenon to experience only one branch of the wavefunction. There’s no particular reason why there can’t be more “you” out there in the Hilbert space, equally convinced that their branch is the only one into which everything mysteriously collapsed.
Which other people have described the situation otherwise and where? Genuine question, I’m pretty much learning about all of this here.
What? If every couple had only one child, the population would halve with each generation. That’s what they mean. Replacement requires a bit more than two children per couple on average (roughly 2.1), not just one.
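For the toy arithmetic (arbitrary starting number; this ignores mortality timing, generation overlap, and people who never pair up):

```python
# Toy model: population size over generations if every couple has exactly one child.
population = 1000
for generation in range(1, 6):
    couples = population // 2
    population = couples  # one child per couple
    print(f"generation {generation}: {population} people")
# Prints 500, 250, 125, 62, 31: the population roughly halves each generation.
```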
I mean, the whole point was “how can we have fertility but also not be a dystopia?”. You just described a dystopia. It’s also kind of telling that the only way you can think of to make people have children, something that is supposedly a joyous experience, is “have a tyrannical dictator make it very clear that they’ll ensure the alternative is even worse”. Someone thinking this way is part of the problem more than part of the solution.
Define “not human”. If someone is, say, completely acephalic, I feel justified in not worrying much about their suffering. Suffering requires a certain degree of sentience to be appreciated and be called, well, suffering. In humans, I also think that our unique ability to conceptualise ourselves in space and time heightens the weight of suffering significantly. We don’t just suffer in the moment: we suffer, we remember not suffering in the past, we dread more suffering in the future, and so on. Animals don’t necessarily all live in the present (well, it’s hard to tell, but many behaviours don’t seem to lean that way), but they do seem to have a smaller and less complex time horizon than ours.
The problem is the distinction between suffering as “harmful thing you react to” and the qualia of suffering. Learning behaviours that lead you to avoid things associated with negative feedback isn’t hard; any reinforcement learning system can do that just fine. If I spin up trillions of instances of a chess engine that is always condemned to lose no matter how it plays, am I creating the new worst thing in the world?
Obviously what feels to us like it’s worth worrying about is “there is negative feedback, and there is something it feels like to experience that feedback, in a much more raw way than just a rational understanding that you shouldn’t do that again”. And it’s not obvious when that line is crossed in information-processing systems. We know it’s crossed for us. Similarity to us does matter, because it means similarity in brain structure, and thus a higher prior that something works in roughly the same way with respect to this specific matter.
Insects are about as different from us as it gets while still having a nervous system that does a decent amount of processing. Insects barely have brains. We probably aren’t that far off from being able to decently simulate an em (whole-brain emulation) of an insect. I am not saying insects can’t possibly be suffering, but they’re the least likely class of animals to be, barring stuff like jellyfish and corals. And if we go with the negative utilitarian view that any life containing net negative utility is worse than non-existence, and insect suffering matters this much, then you might as well advocate total Earth-wide ecocide of the entire biosphere (which, to be sure, is just about what you’d get if you mercy-extinguished a clade as vital as insects).