Take any 500-year window that contains the year 2014. How typical would you say it is of all 500-year intervals during which tool-using humans existed?
bokov
even with no singularity technological advance is a normal part of our society
Depends what time scale you’re talking about.
It would look like a failure to adequately discount for inferential chain length.
having people around who give a damn about you
Yes, exactly. I’d add:
...because the best cryopreservation arrangements won’t do you much good if nobody notices you died until the neighbors complain about the smell.
Somewhere between an extended family and a kindergarten; like a small private kindergarten where the parents are close friends with the caretakers.
That, right there, is one of my fondest dreams. To get my tiny scientists out of the conformity-factory and someplace where they can flourish (even more). Man, if this was happening in my town, in a heartbeat I’d rearrange my work schedule to spend part of the week being a homeschooler.
dealing with resource scarcity—and you keep on bringing up how markets don’t solve violence and pollution...
Well, they should provide a constructive alternative to the former, and the latter is isomorphous with a scarcity of non-polluted air/water/land.
Here’s what I expect someone who seriously believed that markets will handle it would sound like:
“Wow, overpopulation is a threat? Clearly there are inefficiencies the rest of the market is too stupid to exploit. Let’s see if I can get rich by figuring out where these inefficiencies are and how to exploit them.”
Whereas “the markets will handle it, period, full stop” is not a belief, it’s an excuse.
it’s only a question of how many planets we consume before that happens.
Hopefully more than one. There are a lot of underutilized planets out there, even within our own solar system.
Fixed, thanks.
The choke point in our Fritz Haber/Norman Borlaug/Edward Jenner pipeline is not the amount of science education out there. It’s a combination of the low-hanging fruit being picked, insufficient investment in novel approaches and not enough geniuses.
Very true. Each year we produce thousands of new Ph.D.s and import thousands more, while slowly choking off funding for basic research, so they languish in a post-doc holding pattern until many of them give up and go do something less innovative but safer.
Alternatively, tutoring is free, and with a time cost similar to raising your own children you could tutor a lot of others.
Yes! The school system in my state spends far more on remedial education than on GT. Education is seen as a status symbol instead of a costly investment that should be allocated in a manner that gives the highest returns (in terms of innovation, prosperity, and sane policy decisions).
All of these “what you should do if you are a utilitarian” articles should start with “Assuming you are a being for whom utility matters roughly equally regardless of who experiences it...”
Yes! Thank you for articulating in one sentence what I haven’t been able to in a dozen posts.
You should repeat this at the top level. This changes things quite a bit.
We should be careful to make the distinction between jkaufman’s own opinions and those of the paper they posted a link to.
By the way, it’s refreshing to see people be honest with themselves and others about what they value instead of the posturing/kool-aid one often sees around this topic.
Oops, you’re right. I have now revised it.
A witty quote from a great book by a brilliant author is awesome, but does not have the status of any sort of law.
What do we mean by “normality”? What you observe around you every day? If you are wrong about the unobserved causal mechanisms underlying your observations, you will make wrong decisions. If you walk on hot coals because you believe God will not let you burn, the normality that quantum mechanics adds up to diverges enough from your normality that there will be tangible consequences.

Are goals part of normality? If not, they certainly depend on assumptions you make about your model of normality. Either way, when you discover that God can’t/won’t make you fireproof, some subset of your goals will (and should) come tumbling down. This too has tangible consequences.
Some subset of the remaining goals relies on more subtle errors in your model of normality and they too will at some point crumble.
What evidence do we have that any goals at all are stable at every level? Why should the goals of a massive blob of atoms have such a universality?
I can see the point of “it all adds up to normality” if you’re encouraging someone to not be reluctant to learn new facts. But how does it help answer the question of “what goal do we pursue if we find proof that all our goals are bullshit”?
So, looking at shminux’s post above, you would suggest mandatory insemination of only some fertile females and reducing subsistence to slightly above the minimum acceptable caloric levels..?
I believe that deliberately increasing population growth is specifically the opposite direction of the one we should be aiming toward if we are to maximize any utility function that penalizes die-offs, at least as long as we are strictly confined to one planet. I was just more interested in the more general point shminux raised about repugnant conclusions and wanted to address that instead of the specifics of this particular repugnant conclusion.
I think the way to maximize the “human integral” is to find the rate of population change that maximizes our chances of surviving long enough, and ramping up our technological capabilities fast enough, to colonize the solar system. That rate, in turn, will be bounded from above by population growth rates that risk overshoot, civilizational collapse, and die-off, and bounded from below by the critical mass necessary for optimal technological progress and the minimum viable population. My guess is that the first of these bounds is the more proximal one.
At any rate, we have to have some better-than-nothing way of handling repugnant conclusions that doesn’t amount to doing nothing and waiting for someone else to come up with all the answers. I also think it’s important to distinguish between optima that are inherently repugnant versus optima that can be non-repugnant but we haven’t been able to think of a non-repugnant path to get from here to there.
I don’t understand the response. Are you saying that the reason you don’t have an egocentric world view and I do is in some way because of kin selection?
How about this as a rule of thumb, pending something more formal:
If a particular reallocation of resources/priority/etc. seems optimal, look for a point in the solution space between there and the status quo that is more optimal than the status quo, go for that point, and re-evaluate from there.
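The rule of thumb above can be sketched as a simple line search. This is only an illustration under my own assumptions: a one-dimensional “solution space” and a hypothetical `utility` function standing in for whatever measure of optimality is at stake.

```python
def step_toward(status_quo, proposed, utility, n_samples=20):
    """One application of the rule of thumb: sample points on the
    segment between the status quo and the proposed optimum, and
    move to the best sampled point that beats the status quo.
    If nothing beats the status quo, stay put."""
    best = status_quo
    for i in range(1, n_samples + 1):
        t = i / n_samples
        # Candidate partway between the status quo and the proposal.
        candidate = status_quo + t * (proposed - status_quo)
        if utility(candidate) > utility(best):
            best = candidate
    return best

# Hypothetical example: utility peaks at 3, proposal overshoots to 10,
# so the rule stops near 3 rather than jumping all the way to 10.
new_point = step_toward(0.0, 10.0, lambda x: -(x - 3.0) ** 2)
```

“Re-evaluate from there” corresponds to calling `step_toward` again from the new point, possibly with a revised proposal and utility estimate.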
The current 500-year window needs to be VERY typical if it’s the main evidence in support of the statement that “even with no singularity technological advance is a normal part of our society”.
This is like someone in the 1990s saying that constantly increasing share price “is a normal part of Microsoft”.
I think technological progress is desirable and hope that it will continue for a long time. All I’m saying is that being overconfident about future rates of technological progress is one of this community’s most glaring weaknesses.