I don’t understand the response. Are you saying that the reason you don’t have an egocentric world view and I do is in some way because of kin selection?
How about this as a rule of thumb, pending something more formal:
If a particular reallocation of resources/priority/etc. seems optimal, look for a point in the solution space between there and the status quo that is more optimal than the status quo, go for that point, and re-evaluate from there.
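To make that concrete, here is a toy sketch of the rule (my own illustration; the value function, step fraction, and two-variable allocation are hypothetical stand-ins, not anything anyone actually proposed):

```python
# Toy sketch of the rule of thumb above. Everything here is a hypothetical
# stand-in: the "value" function, the step fraction, and the two-variable
# allocation are illustrations only.

def step_toward(status_quo, proposed_optimum, value, step_fraction=0.25, max_rounds=20):
    """Move from the status quo toward a proposed optimum in partial steps,
    keeping a step only if it improves on where we currently are, and
    re-evaluating after each accepted step."""
    current = status_quo
    for _ in range(max_rounds):
        # Candidate point partway between the current position and the proposed optimum.
        candidate = [c + step_fraction * (p - c)
                     for c, p in zip(current, proposed_optimum)]
        if value(candidate) > value(current):
            current = candidate   # accept the improvement and re-evaluate from here
        else:
            break                 # further movement toward the "optimum" stopped paying off
    return current

# Hypothetical two-way allocation (e.g. suck-reduction vs. awesomeness):
value = lambda x: -(x[0] - 0.6) ** 2 - (x[1] - 0.4) ** 2
print(step_toward([1.0, 0.0], [0.0, 1.0], value))  # stops at an intermediate point, not at [0.0, 1.0]
```

The point of the sketch is just that the process halts at whatever intermediate point stops improving on the status quo, rather than jumping straight to the point that looked optimal from the outset.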
You might be right. I hope not, though, because that means it will take even longer to escape from the planetary cycle of overshoot and collapse.
Then again, it’s good to be ready for the worst and be pleasantly surprised if things turn out better than expected.
Once we’ve dealt with the mass starvation, vast numbers of deaths from malaria, horrendous poverty, etc., then we can start paying a lot more attention to awesomeness.
What if, for practical purposes, there is an inexhaustible supply of suck? What if we can’t deal with it once and for all and then turn our attention to the fun stuff?
So, judging from the reception of my post about the Malthusian Crunch, certain Wrongians sense this and have gone into denial (perhaps, if they’re honest with themselves, privately admitting the hope that if they ignore the starving masses long enough, they will go away).
I propose a middle ground between giving everything and giving nothing—a non-arbitrary cutoff for how much help is enough. A cutoff that can be defended on pragmatic grounds without having to assume a shared normative morality.
You put just enough resources into pure suckiness remediation to ensure that spillover suckiness will not derail your awesomeness plans. I emphasize pure because there are pursuits that simultaneously strive for new heights of awesomeness and fix suck in equal measure. Obviously this quality is desirable and such projects should not be penalized for having it.
But, in any case, I would expect this to lead to the Malthusian scenario we should be trying to avoid, not an overall maximization of all humans who have ever lived.
What if the reason repugnant conclusions come up is that we only have estimates of our real utility functions which are an adequate fit over most of the parameter space but diverge from true utility under certain boundary conditions?
If so, either don’t feel shame about having to patch your utility function with one that does fit better in that region of the parameter space… or aim for a non-maximal but near-maximum utility that is far enough from the boundary that it can be relied on.
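For the second option, here is a minimal sketch of what I mean, assuming a proxy utility that is trusted mid-range but known to diverge near the boundary of the parameter space (the proxy, bounds, and margin are all made up for illustration):

```python
# Minimal sketch: maximize a fitted/proxy utility only inside the region where
# the fit is trusted, keeping a safety margin away from the boundary where it
# is known to diverge. Proxy, bounds, and margin are hypothetical.

import numpy as np

def argmax_with_margin(proxy_utility, lo, hi, margin, n=10_001):
    """Brute-force maximize proxy_utility on [lo + margin, hi - margin],
    i.e. stay away from the boundary region where the fit cannot be relied on."""
    xs = np.linspace(lo + margin, hi - margin, n)
    return xs[np.argmax(proxy_utility(xs))]

proxy = lambda x: x / (1.0 - x)  # toy proxy: behaves sensibly mid-range, blows up as x -> 1
print(argmax_with_margin(proxy, lo=0.0, hi=1.0, margin=0.1))  # ~0.9, not the divergent "optimum" at x -> 1
```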
So your issue is that a copy of you is not you? And you would treat star trek-like transporter beams as murder?
Nothing so melodramatic, but I wouldn’t use them. UNLESS they were in fact manipulating my wave function directly, somehow causing my amplitude to increase in one place and decrease in another. Probably not what the screenplay writers had in mind, though.
But you are OK with a gradual replacement of your brain, just not with a complete one?
Maybe even a complete one eventually. If the vast majority of my cognition has migrated to the synthetic regions, it may not seem as much of a loss when parts of the biological brain break down and have to be replaced. Hard to speak on behalf of my future self with only what I know now. This is speculation.
How fast would the parts need to be replaced to preserve this “experience of continuity”?
This is an empirical question that could be answered if/when it becomes possible to perform for real the thought experiment I described (the second one, with the blank brain being attached to the existing brain).
Basically, what I am unclear on is whether your issue is continuity of experience or cloning.
Continuity. I’m not opposed to non-destructive copies of me, but I don’t see them as inherently beneficial to me either.
That should be a new discussion.
You claimed that people ignore or outright oppose trying to accelerate the rate of technological advancement. Could it be instead that nobody has any idea how to do it?
Very, very possible.
An independent settlement seems quite beyond the possibilities of present and foreseeable technology.
I’m not saying it’s easy. I guess I calibrate my concept of foreseeable technology as sleeker, faster mobile devices being trivially predictable, fusion as possible, and general-purpose nanofactories as speculative.
On that scale, I would place permanent off-world settlements as closer than nanofactories, around the same proximity as fusion. Closer, since no new discoveries are required, only an enormous outpouring of resources into existing technologies.
Well, that for starters.
Then there is the drive to ensure the survival and happiness of your children. I have found that this increases with age. If you don’t have that drive yet, simply wait. There’s a good chance you’ll be surprised to find that you develop one, as I have. I imagine this drive undergoes another burst when one’s children have children.
Then there is foreclosing on the possibility of the human race reaching the stars. If that doesn’t excite you, what does? Sports? Video games? I’m sure those will also spread through the galaxy if we do.
Then there is the possibility that medical science has a breakthrough or two during your lifetime that makes these more than just theoretical futures.
For the most part, my emphasis is not on limiting population directly. I do believe that charitable efforts have the responsibility to mitigate the risk of a demographic trap in the areas they serve. But I think getting anybody who matters to listen is a lost cause.
My emphasis is on being conscious of the fact that the reason we’re still alive and prospering is that we are continuously buying ourselves more time with technology, and on using this insight to motivate greater investment in research and development. This seems like an easier sell.
Here is a thought experiment that might not be a thought experiment in the foreseeable future:
Grow some neurons in vitro and implant them in a patient. Over time, will that patient’s brain recruit those neurons?
If so, the more far-out experiment I earlier proposed becomes a matter of scaling up this experiment. I’d rather be on a more resilient substrate than neurons, but I’ll take what I can get.
I’m betting that the answer to this will be “yes”, following a line of reasoning similar to the one Drexler used to defend the plausibility of nanotech: the existence of birds implied the feasibility of aircraft; the existence of ribosomes implies the feasibility of nanotech… and neurogenesis, which occurs during development and has in the last few decades been found to occur in adulthood, implies the feasibility of replacing damaged brains or augmenting healthy ones.
build a synthetic brain from a frozen/plastinated one
I’m unconvinced that cryostasis will preserve the experience of continuity. Because of the thought experiment with the non-destructive copying of a terminal patient, I am convinced that plastination will fail to preserve it (I remain the unlucky copy, and in addition to that, dead).
My ideal scenario is one where I can undergo a gradual migration before I actually need to be preserved by either method.
As far as I can tell from the OP’s summary, the book could just as well be arguing that having children is a net asset… and instead of wasting resources on your one or two children, you should use them to help a dozen other people have 40 or 80 children.
This is a view I disagree with because the utility function should be evaluated over the entire support range, and maximizing child output on a local time-scale at the cost of a long-term overshoot, collapse, and dark ages (or at worst extinction) does not do that.
I want to increase awesome by decreasing suck.
There are a lot of paths in science and engineering that accomplish that, and space travel (in the long view) is definitely one of them.
How about the appropriate utility function being to maximize the integral of all humans who have ever lived?
Because what does it matter if you maximize the number of people alive in one time interval, or maximize how happy they are, if this is followed by extinction?
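As a toy numerical illustration (the numbers are entirely made up), compare the person-year integrals of a boom that ends in extinction and of a smaller population that simply persists:

```python
# Toy comparison (made-up numbers) of two population trajectories under the
# "integral of all humans who have ever lived" criterion, in billions of
# person-years.

import numpy as np

years = np.arange(1000)

boom_then_extinction = np.where(years < 100, 20.0, 0.0)  # huge population for a century, then nothing
modest_but_sustained = np.full(1000, 5.0)                 # smaller population that persists

print(boom_then_extinction.sum())   # 2000.0 billion person-years
print(modest_but_sustained.sum())   # 5000.0 billion person-years
```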
It’s not clear to me whether scientism believes that the mind is a process that cannot take place on any substrate other than a brain, or whether he shares my and (I think) Mitchell Porter’s more cautious point of view that our consciousness can in principle exist somewhere other than a brain, but we don’t yet know enough about neuroscience to be confident about what properties such a system must have.
I, for one, would be sceptical of there being no substrate possible at all except the brain, because it’s a strong unsupported assertion on the same order as the (perhaps straw-man) patternist assertion that binary computers are an adequate substrate (or the stronger-still assertion that any computational substrate is adequate).
For better or worse, there are people making policy decisions and I know of no reason why that would change on the time scales we’re working with.
At the moment, these decision makers are acting as though they believe:
Overpopulation is not related to environmental degradation, violent conflict, and resource depletion.
Technological progress is not the main risk mitigator against overpopulation and its various consequences.
Suppose the above conventional wisdom is incorrect. Either way, policy makers will make policy, and even if that is inherently a bad thing (a strong assumption), isn’t it better to limit the damage by giving them a better approximation of reality?
So, if you agree with the conventional view, you have nothing to worry about (but I have yet to see convincing arguments here for why I should agree with this view if I don’t already). If you disagree with the conventional view, that has implications at the very least for whether you allow its public apologists to stand un-debated and for which public policies and charitable activities you endorse. If you are undecided, then perhaps you’re curious to develop better estimates, because they may have bearing on your survival and prosperity. I know I am.
latter half of the population finds themselves unable to afford a second generation
The technical term for that is “starving to death”, so let’s call it what it is. Don’t worry, I won’t judge you—I’m a pragmatist deeply skeptical of prescriptive morality. I respect someone who frankly doesn’t give a damn about the less fortunate more than I respect someone who pretends to give a damn. From a pragmatic point of view, though, that starving half will resort to the other standard response to scarcity: violence.
Which brings us to...
Normal political mechanisms
Do we live in a world where normal political mechanisms operate? Uh-oh, we do. Does the impact of political mechanisms outweigh the opposite impact of market mechanisms to limit family size? Uh-oh, we don’t know.
So, for you to continue believing that the markets will prevent overpopulation without any specific person thinking or doing anything about it, it’s your turn to come up with estimates of the net effect government has on technological progress and population growth, and the net effect a pure free market would have on technological progress and population growth.
I believe that deliberately increasing population growth is specifically the opposite direction of the one we should be aiming toward if we are to maximize any utility function that penalizes die-offs, at least as long as we are strictly confined to one planet. I was just more interested in the more general point shminux raised about repugnant conclusions and wanted to address that instead of the specifics of this particular repugnant conclusion.
I think the way to maximize the “human integral” is to find the rate of population change that maximizes our chances of surviving long enough, and ramping up our technological capabilities fast enough, to colonize the solar system. That rate, in turn, will be bounded from above by growth rates that risk overshoot, civilizational collapse, and die-off, and bounded from below by the critical mass necessary for optimum technological progress and by the minimum viable population. My guess is that the first of these bounds is the more proximal one.
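Here is a very rough sketch of the kind of calculation I have in mind (every parameter is invented for illustration): sweep candidate growth rates through a crude model with an overshoot-triggered collapse and a minimum viable population, and see which rate maximizes cumulative person-years.

```python
# Rough sketch only; every number (carrying capacity, collapse penalty,
# minimum viable population, horizon) is invented for illustration.

def person_years(growth_rate, p0=8.0, carrying_capacity=12.0,
                 min_viable=0.5, horizon=500):
    """Cumulative person-years (billions) in a crude model where exceeding the
    carrying capacity triggers a collapse and falling below the minimum viable
    population means extinction."""
    p, total = p0, 0.0
    for _ in range(horizon):
        total += p
        p *= 1.0 + growth_rate
        if p > carrying_capacity:
            p *= 0.1              # overshoot -> collapse (made-up penalty)
        if p < min_viable:
            break                 # extinction
    return total

rates = [i / 1000 for i in range(-20, 21)]     # candidate growth rates, -2% to +2% per year
best = max(rates, key=person_years)
print(best, person_years(best))   # with these made-up numbers, a small positive rate wins
```

With these invented numbers, too-fast growth loses to overshoot and collapse and negative growth loses to shrinkage, which is the qualitative shape of the argument rather than a prediction.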
At any rate, we have to have some better-than-nothing way of handling repugnant conclusions that doesn’t amount to doing nothing and waiting for someone else to come up with all the answers. I also think it’s important to distinguish between optima that are inherently repugnant and optima that are themselves non-repugnant but for which we haven’t yet thought of a non-repugnant path from here to there.