Ehn. Nobody really understands anything, we’re just doing the best we can with various models of different complexity. Adam Smith’s pin factory description in the 18th century has only gotten more representative of the actual complexity in the world and the impossibility of fully understanding all the tradeoffs involved in anything. Note also that anytime you frame something as “responsibility of every citizen”, you’re well into the political realm.
You can see the economy as a set of solutions to some problems, but you also need to see it as an exacerbation of other problems. Chesterton’s Fence is a good heuristic for tearing down fences, where it’s probably OK to let one stand for a while while you think about it. It’s a crappy way to decide whether you should get off the tracks before you understand the motivation of the railroad company.
I suspect that if people really understood the cost to future people of the contortions we go through to support this many simultaneous humans in this level of luxury, we’d have to admit that we don’t actually care about them very much. I sympathize with those who are saying “go back to the good old days” in terms of cutting the population back to a sustainable level (1850 was about 1.2B, and it’s not clear even that was sparse/spartan enough to last more than a few millennia).
There’s enough matter in our light cone to support each individual existing human for roughly 10^44 years.
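A rough way to sanity-check the order of magnitude of a claim like this is a back-of-envelope energy budget; every number in the sketch below is an illustrative assumption rather than a sourced figure, and it’s only meant to show that the answer has an enormous number of digits, not to reproduce 10^44 exactly.

```python
# Back-of-envelope sketch: how long could reachable matter power everyone alive today?
# All inputs are rough, illustrative assumptions.

REACHABLE_MASS_KG = 1e52        # ordinary matter we might plausibly reach (very rough guess)
C = 3e8                         # speed of light, m/s
FUSION_EFFICIENCY = 0.007       # fraction of rest mass released by fusing hydrogen to helium
POWER_PER_PERSON_W = 1e4        # ~10 kW per person, a generous modern-lifestyle energy budget
SECONDS_PER_YEAR = 3.15e7
POPULATION = 8e9                # people alive today, roughly

usable_energy_j = REACHABLE_MASS_KG * C**2 * FUSION_EFFICIENCY
person_years = usable_energy_j / (POWER_PER_PERSON_W * SECONDS_PER_YEAR)
years_per_person = person_years / POPULATION

print(f"{years_per_person:.1e} years of support per currently living person")
# Prints roughly 2.5e45 - within a couple of orders of magnitude of the quoted 10^44,
# and extremely sensitive to every assumption above.
```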
The problem is not “running out of resources”- there are so many resources it will require cosmic engineering for us to use more of them than entropy, even if we multiply our current population by ten billion.
Earth is only one planet- it does not matter how much of earth we use here and now. Our job is to make sure that our light cone ends up being used for what we find valuable. That’s our only job. The finite resources available on earth are almost irrelevant to the long term human project, beyond the extent to which those resources help us accomplish our job- I would burn a trillion pacific oceans worth of oil for a .000000000000000001% absolute increase to our probability of succeeding at our job.
I sympathize with people who are thinking like this, because it shows that they’re at least trying to think about the future. But… Future Humanity doesn’t need the petty resources available on earth any more than we need good flint to make hunting spears with. The only important thing and the best thing we can do for them is to ensure that they will ever exist at all!
It’s entirely possible to burn through the resources on this planet without getting off this planet. That’s a very dicey pinch point.

It’s possible, but very improbable. We have vastly more probable concerns (misaligned AGI, etc.) than resource depletion sufficient to cripple the entire human project.
What critical resources is Humanity at serious risk of depleting? Remember that most resources have substitutes- food is food.
Phosphate rock?
https://en.wikipedia.org/wiki/Peak_phosphorus

That’s surprisingly close, but I don’t think that counts. That page explains that the current dynamics behind phosphate recycling are bad as a result of phosphate being cheap- if phosphate were scarce, recycling (and potentially the location of new phosphate reserves, etc.) would become more economical.
The resources required to get off the planet and access other resources are huge.

True, and it’ll be a long time before off-planet habitations are resilient and self-sufficient enough to survive the anger of the 10B people on the planet which can no longer support them in the way they think they deserve. Getting the exponential growth (of permanent off-world settlements) started as soon as possible is the only way to get there, though.
Why do you seem to imply that burning fossil fuels would help the odds of the long term human project at all?
Even ignoring the current deaths due to the large scale desertification that Climate Change is causing, it’s putting our current society at a very real risk of collapse. Food and water supplies are at risk in the medium term, since we are losing freshwater reserves, and crops are expected to suffer greatly from the abrupt change in temperature and the increase in extreme weather events.
At the current rate of fishing, all fish species could be practically extinct by 2050, and for the same date there are estimates ranging from 100 million to 1 billion climate refugees. Given how badly our societies reacted to numbers of refugees that weren’t even close to that scale, I really don’t want to see what will happen.
Not to mention that one out of three of all animal and plant species is currently going extinct and could be gone by the same date. That is a scale of damage to the ecosystem that could easily feed back into who knows what.
We are causing the sixth mass extinction on our planet. I feel pretty confident some humans will survive and that technological progress could continue past that, eventually.
But I feel a lot more confident about humanity reaching the stars in a universe where we manage not to make scorched earth of our first planet before we have a way to do that, and I personally don’t want to see my personal odds of survival diminishing because I’ll have to deal with riots, food shortages, totalitarian fascist governments or… who knows what? A dying ecosystem is the kind of thing that could rush us into botching nanotechnology while looking for a way to fix our mess.
Lastly, I really don’t see how switching away from fossil fuels would in any way harm our chances to develop as a species.
Every economic estimate I’ve seen says that the costs would be a lot less than the economic damage from climate change alone; many estimates agree that it would actually improve the economy, and nobody is saying “let’s toss industry and technology out of the window, back to the caves everyone!”.
Even ignoring the current deaths due to the large scale desertification that Climate Change is causing,
What is your source for this? On Wikipedia, there is a distinct lack of references to good quality data, and the anecdotal evidence (e.g. the shrinking of lakes in the Sahel) seems to have contributing factors other than climate change, like increasing irrigation. Elsewhere I find that “[t]he Sahel region is experiencing a phase of population growth unprecedented in any other part of the world”.
(https://ideas4development.org/en/population-growth-sahel-challenge-generation/)
At the current rate of fishing, all fish species could be practically extinct by 2050
What is your source for this? While some fisheries are poorly managed, many are in much better shape. There is a lack of knowledge about the status of many stocks, and we can’t model ecosystems very well, but the uncertainty doesn’t mean you can conclude with the most outrageous claim.
estimates ranging from 100 million to 1 billion climate refugees
Again, who is estimating this, and how? Currently we have 70 million refugees from wars and oppression, and probably more fleeing towards better economic prospects (although we don’t usually call them refugees). I propose we spend our resources towards fixing this, rather than towards some hypothetical refugee situation some time in the future. A side benefit is that rich, peaceful nations tend to be the ones that manage their fisheries well, protect biodiversity, and whose inhabitants don’t become refugees even when the occasional natural disaster strikes.
My master’s thesis treated the impacts of climate change; here are the sources I used for these claims:
Desertification: https://spiral.imperial.ac.uk/bitstream/10044/1/76618/2/SRCCL-Full-Report-Compiled-191128.pdf If you’d rather know precisely where to look for my claims, since it’s an 874-page report, I’d suggest the Summary for Policymakers, pages 5 to 9, Chapter 1.2.1, pages 88 to 91, and the Chapter 5 executive summary, pages 439-440.
The report also states that the way land and water are used for agriculture is part of the problem: it interacts with climate change, making both issues worse.
https://www.ipcc.ch/site/assets/uploads/sites/3/2019/11/03_SROCC_SPM_FINAL.pdf For this I suggest reading the Summary for Policymakers section B, pages 17 to 28. B7 is the most relevant point for desertification, B8 for fish losing most of their biomass and putting food security at risk.
These two:
https://iopscience.iop.org/article/10.1088/1748-9326/4/2/024007/pdf
https://www.researchgate.net/publication/337888219_Impacts_of_ocean_deoxygenation_on_fisheries_In_%27Laffoley_D_Baxter_JM_eds_2019_Ocean_deoxygenation_Everyone%27s_problem_-_Causes_impacts_consequences_and_solutions_Gland_Switzerland_IUCN_xxii562pp
again cover the subject of an abrupt drop in marine biomass and its consequences for food security. The second one specifies how overfishing and climate change are again piling up as problems, exacerbating each other’s consequences.
My specific claim that overfishing would extinguish all fish species by 2050 turned out not to be in my thesis; I mixed up what I heard in a documentary with the statements I was able to prove during my work about the risk of a collapse of marine life and the risks to food security.
This is referred to as the study that the statement I heard was based on, but it presents it as a possibility, and there doesn’t seem to be much recent research backing this outcome, so I’d update my expectations toward the possible outcomes treated in the studies above, which aren’t at all less worrying.
Edit: I forgot to actually paste the link: https://www3.epa.gov/region1/npdes/schillerstation/pdfs/AR-024.pdf
For refugees:
https://xpda.com/junkmail/junk219/environmental%20refugees%2014851.pdf - this indicates 200 million climate refugees by 2050 as the most common estimate.
https://publications.iom.int/system/files/pdf/mecc_outlook.pdf - this was the resource I used for the 100 million to 1 billion figure.
My statement was from memory and it was incorrect. The most relevant pages for my statement seem to be 38 and 39. The IOM states that, in the current literature, predictions of refugee numbers vary from 25 million to 1 billion because there are a lot of variables.
However, page 39 says that in the previous 5 years over 165 million people were newly displaced, and that climate and weather disasters were involved in 90% of cases, so my guess is that we can throw the most optimistic estimates out of the window. Most of those cases involved temporary displacement (page 40), but the same page states that climate change is expected to shift climate-related displacement toward permanent displacement.
For current refugees vs future refugees, usually it’s a lot more cost-effective to prevent a problem than to fix it once it happens.
I strongly feel we should fix the current problem as well, and that the two approaches shouldn’t have to compete for the resources we’ll allocate. Currently these kinds of problems are seeing only the scraps of what we could allocate, and fixing the future problems spares us economic damage that would be far higher even in the short term alone.
Also, many of the wars currently causing refugees seem to be partly caused by climate change consequences.
https://eprints.lancs.ac.uk/id/eprint/134710/1/Mach_2019_accepted_manuscript.pdf
(here is the published version of the same article; I’m not sure if you have access to this resource, though- I have it through my university: https://www.nature.com/articles/s41586-019-1300-6 )
https://archive.defense.gov/pubs/150724-congressional-report-on-national-implications-of-climate-change.pdf
Both these studies indicate climate change as one of the causes of recent wars, and as a likely cause of more armed conflicts in the future.
On a side note: I do have to remember to always post the sources of my claims in advance, so at least I make fewer of them. This wasn’t how I planned to spend a good part of my morning, but it would have been really incorrect not to post the sources for claims I had already made.
Why do you seem to imply that burning fossil fuels would help the odds of the long term human project at all?
I don’t imply that. For clarification:
I would waste any number of resources if that was what was best for the long-term prospects of Humanity. In practice, that means that I’m willing to sacrifice really really large amounts of resources that we won’t be able to use until after we develop AGI or similar, in exchange for very very small increases to our probability of developing aligned AGI or similar.
Because I think we won’t be able to use significant portions of most of the types of resources available on Earth before we develop AGI or similar, I’m willing to completely ignore conservation of those resources. I still care about the side effects of the process of gathering and using those resources, but...
The oil example isn’t meant to be any reflection of my affinity for fossil fuels.
My point is that “Super long term conservation of resources” isn’t a concern. If there are near-term, non-“conservation of resources” reasons why doing something is bad, I’m open to those concerns- we don’t need to worry about ensuring that humans 100 years from now have access to fuel sources.
For the record, I think nuclear and solar seem to clearly be better energy sources than fossil fuels for most applications. Especially nuclear.
I’m also not fighting defense for climate change activists- I don’t care about how many species die out, unless those species are useful (short term- next 50 years, 100 years max?) to us. If you want to make sure future humanity has access to Tropical Tree Frog #952, and you’re concerned about them going extinct, go grab some genetic samples and preserve them. If the species makes many humans very happy, provides us valuable resources, etc., fine.
At the current rate of fishing, all fish species could be practically extinct by 2050
I’m open to the notion that regulating our fish intake is the responsible move- it seems like a pretty easy sell. It keeps our fishing equipment, boats, and fishermen useful. I’m taking this action because it’s better for humanity, not because it’s better for the fish or better for the Earth.
The Strategy is not to use resources excessively and destroy the environment just because we can; it’s to actively and directly use our resources to accomplish our goals, which I doubt aligns strongly with preserving the environment.
Let’s list a few ways in which our conservation efforts are bad:
Long term (100+ years) storage of nuclear waste.
Protecting species which aren’t really useful to Humanity.
Planning with the idea that we will be indefinitely (Or, for more than 100 years) living in the current technological paradigm, i.e. without artificial general intelligence.
And in which they’re valid:
Being careful with our harvesting of easily depletable species which we’ll be better off having alive for the next 100 years.
Being careful with our effect on global temperatures and water levels, in order to avoid the costs of relocating large numbers of humans.
Being careful with our management of important freshwater reserves, at least until we develop sufficiently economical desalinization plants.
I personally don’t want to see my personal odds of survival diminishing because I’ll have to deal with riots, food shortages, totalitarian fascist governments or… who knows what?
The greatest risks to your survival are, by far (unless you’re a very exceptional person), natural causes and misaligned artificial general intelligence. You shouldn’t significantly concern yourself with dealing with weird risk factors such as riots or food shortages unless you’ve already found that you can’t do anything about natural causes and misaligned artificial general intelligence. Spoiler: It seems you can do something about these risk factors.
Every economic estimate I’ve seen says that the costs would be a lot less than the economic damage from climate change alone; many estimates agree that it would actually improve the economy, and nobody is saying “let’s toss industry and technology out of the window, back to the caves everyone!”.
Many people are saying things I consider dangerously close to “Let’s toss industry and technology out of the window!”. Dagon suggested that our current resource expenditure was reckless, and that we should substantially downgrade our resource expenditures. I consider this to be a seriously questionable perspective on the problem.
I’m not arguing against preserving the environment if it would boost the economy for at least the next 100 years, keeping in mind opportunity cost. I want to improve humanity’s generalized power to pursue its goals- I’m not attached to any particular short guiding principle for doing this, such as “Protect the Earth!” or “More oil!”. I don’t have Mad Oil Baron Syndrome.
Understood; I apologise for misunderstanding your position on fossil fuels. I feel there was a specific attempt on my side to interpret it with that meaning, even if the example used didn’t necessarily imply it was something you endorse, and that it was due to a negative gut reaction I had while reading what you wrote.
We seem to agree on the general principles that humanity’s technological level will not stay the same for the next hundred years, and that some of the changes we are producing in the environment should be avoided to improve mankind’s future condition.
I do feel that allowing humanity’s actions to destroy every part of the environment that hasn’t been proved useful is an extremely reckless form of optimism, though.
It’s certainly part of the attitude that got us to the point where being careful with our effect on current temperature levels, and avoiding the loss of most of our water resources, have become pretty difficult global challenges.
From what I’ve read on industrial regulations so far, in most nations pollutants functionally have to be proven harmful before forbidding their release into the environment can even be considered, and I’m 100% sure that’s at least the current approach in the country most users of this site are from.
All in all, our species is nowhere near the point of being immune to the feedback our environment can throw at us. Through our actions, one third of current animal and plant species are going extinct.
That is one huge Chesterton’s Fence we’re tearing down. We simply don’t know in how many ways such a change to the system we’re living in can go wrong for us.
I’d agree that the greatest “currently existing risks to my survival” are natural causes. I mean this category as “risks that are actively killing people who are living in conditions similar to my own right now”.
However, if we talk about the main “future risks to my survival”, as in “risks that are currently killing a low number of people similar to me, but that could kill a lot more in future years in which I’ll be alive”, then I feel that, even if AI mismanagement takes first place, climate change takes second, and that it considerably increases the chances of the first.
While riots and food shortages are indeed examples I chose by pure gut-level evaluation of “scariness”, and are too specific for me to put my money on if I had to bet on the cause of my death, I don’t feel at all optimistic about the way our society will react to climate change.
Migration flows and violent conflicts a lot smaller than what we’ll certainly see were enough to bring the European Union dangerously close to falling apart into a bunch of nationalistic states. Change our environment enough, and wide-scale wars and a new wave of totalitarian governments stop being an unlikely reality, since in times of fear and unrest people are more likely to regard the principles behind them as positive. All these factors seem to reinforce each other, as far as I know.
Even assuming the situation won’t get as bad as total warfare and rampant totalitarianism, I would bet on a significant degeneration of the political scenario, moving away from international cooperation and toward nationalism and short-term interests only, and I don’t really see any reason why a bunch of such states- fighting for resources, facing wide-scale crises, scared of what each other will do, and having lost most of their ability to cooperate- would be less likely to botch AI horribly and kill us all.
About the suggestion to lower our resource consumption because it’s currently too high: it’s unarguable that we are burning through a ridiculous amount of resources that produce practically no improvement in our chances of survival, and don’t even marginally improve our quality of life. We could easily keep the same comforts and life expectancy while consuming far fewer resources.
Our economic system simply doesn’t have enough incentives for efficiency; shrinking our resource consumption without sacrificing quality of life and life expectancy is perfectly doable, and it’s imperative for increasing our chances of long-term survival.
Lastly, given the current trend of society, statements close to “keeping mankind’s consumption of resources and its impact on the environment in check is not a priority” are a lot more dangerous than statements close to “let’s toss industry out of the window and go back to the caves”. Clearly going too far in either of those directions would hurt, but going too far in the first direction is a lot more likely at the present moment, while I don’t see any real chance of the second kind of statement pushing society toward a pre-technological or pre-industrial state.
The de-growth movement (which, if I remember correctly, is based on the claim that economic growth, after a certain threshold, offers basically no improvement to quality of life, and that the first world has long passed that threshold, so we should focus on things other than economic growth) also doesn’t strike me as a threat to my quality of life or my long-term survival comparable to underestimating the impact of environmental damage or of over-consumption of resources before the point when mankind hits a positive singularity.
I also don’t see any real chance of this site moving toward an anti-technological or anti-science trend. Those trends do seem dangerous and likely in the general populace, but for the risks I’ve stated above I think they should be opposed by informing people of the benefits of technology and science, rather than of the industrial system.

Indeed, there is an active “degrowth” movement. cf. Giorgos Kallis: https://greattransition.org/publication/the-degrowth-alternative
Our job is to make sure that our light cone ends up being used for what we find valuable. That’s our only job.
Why, exactly, is this our only job (or, indeed, our job at all)? Surely it’s possible to value present-day things, people, etc.?
The only important thing and the best thing we can do for [future humanity] is to ensure that they will ever exist at all!
Seeing as how future humanity (with capital letters or otherwise) does not, in fact, currently exist, it makes very little sense to say that ensuring their existence is something that we would be doing “for” them.
Why, exactly, is this our only job (or, indeed, our job at all)? Surely it’s possible to value present-day things, people, etc.?
The space that you can affect is your light cone, and your goals can be “simplified” to “applying your values over the space that you can affect”; therefore your goal is to apply your values over your light cone. It’s your “only job”.
There is, of course, a specific notion that I intended to evoke by using this rephrasing: the idea that your values apply strongly over humanity’s vast future. It’s possible to value present-day things, people, and so on- and I do. However… whenever I hear that fact in response to my suggestions that the future is large and it matters more than today, I interpret it as playing defense for the speaker’s preexisting strategies. Everyone was aware of this before the person said it, and it doesn’t address the central point- it’s...
“There are 4 * 10^20 stars out there. You’re in a prime position to make sure they’re used for something valuable to you- as in, you’re currently experiencing the top 10^-30% most influential hours of human experience because of your early position in human history, etc. Are you going to change your plans and leverage your unique position?”
“No, I think I’ll spend most of my effort doing the things I was already going to do.”
Really- Is that your final answer? What position would you need to be in to decide that planning for the long term future is worth most of your effort?
Seeing as how future humanity (with capital letters or otherwise) does not, in fact, currently exist, it makes very little sense to say that ensuring their existence is something that we would be doing “for” them.
“Seeing as how a couple’s baby does not yet exist, it makes very little sense to say that saving money for their clothes and crib is something that they would be doing ‘for’ them.” No, wait, that’s ridiculous- It does make sense to say that you’re doing things “for” people who don’t exist.
We could rephrase these things in terms of doing them for yourself- “you’re only saving for their clothes and crib because you want them to get what they want”. But, what are we gaining from this rephrasing? The thing you want is for them to get what they want/need. It seems fair to say that you’re doing it for them.
There’s some more complicated discussion to be had on the specific merits of making sure that people exist, but I’m not (currently) interested in having that discussion. My point isn’t really related to that- it’s that we should be spending most of our effort on planning for the long term future.
Also, in the context of artificial intelligence research, it’s an open question as to what the border of “Future Humanity” is. “Existing humans” and “Future Humanity” probably have significant overlap, or so the people at MIRI, DeepMind, OpenAI, FHI, etc. tend to argue- and I agree.
Whether the future “matters more than today” is not a question of impersonal fact. Things, as you no doubt know, do not ‘matter’ intransitively; they matter to someone. So the question is, does “the future” (however construed) matter to me more than “today” (likewise, however construed) does? Does “the future” matter to my hypothetical friend Alice more than today does, or to her neighbor Bob? Etc.
And any of these people are fully within their right to answer in the negative.
“There are 4 * 10^20 stars out there. You’re in a prime position to make sure they’re used for something valuable to you- as in, you’re currently experiencing the top 10^-30% most influential hours of human experience because of your early position in human history, etc. Are you going to change your plans and leverage your unique position?”
Note that you’re making a non-trivial claim here. In past discussions, on Less Wrong and in adjacent spaces, it has been pointed out that our ability to predict future consequences of our actions drops off rapidly as our time horizon recedes into the distance. It is not obvious to me that I am in any particularly favorable position to affect the course of the distant future in any but the most general ways (such as contributing to, or helping to avert, human extinction—and even there, many actions I might feasibly take could plausibly affect the likelihood of my desired outcome in either the one direction or the other).
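For a concrete feel for how fast point predictions degrade, here is a toy sketch (not anyone’s actual model of the world- the logistic map and every number in it are arbitrary stand-ins for a system that is even mildly chaotic):

```python
# Toy illustration: in a chaotic system, a tiny error in your estimate of the present
# grows roughly exponentially, so long-horizon point forecasts become worthless.

def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map, standing in for 'the world evolving'."""
    return r * x * (1.0 - x)

x_true = 0.3           # the actual state
x_model = 0.3 + 1e-9   # our model of the state, off by one part in a billion

for step in range(1, 61):
    x_true, x_model = logistic(x_true), logistic(x_model)
    if step % 10 == 0:
        print(step, abs(x_true - x_model))
# The gap roughly doubles each step: within a few dozen steps the "forecast" error
# is as large as the quantity being forecast.
```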
“No, I think I’ll spend most of my effort doing the things I was already going to do.”
Really- Is that your final answer? What position would you need to be in to decide that planning for the long term future is worth most of your effort?
I would need to (a) have different values than those I currently have, and (b) gain (implausibly, given my current understanding of the world) the ability to predict the future consequences of my actions with an accuracy vastly greater than that which is currently possible (for me or for anyone else).
“Seeing as how a couple’s baby does not yet exist, it makes very little sense to say that saving money for their clothes and crib is something that they would be doing ‘for’ them.” No, wait, that’s ridiculous- It does make sense to say that you’re doing things “for” people who don’t exist.
Sorry, no. There is a categorical difference between bringing a person into existence and affecting a person’s future life, contingent on them being brought into existence. It of course makes sense to speak of doing the latter sort of thing “for” the person-to-be, but such isn’t the case for the former sort of thing.
There’s some more complicated discussion to be had on the specific merits of making sure that people exist, but I’m not (currently) interested in having that discussion. My point isn’t really related to that …
To the contrary: your point hinges on this. You may of course discuss or not discuss what you like, but by avoiding this topic, you avoid one of the critical considerations in your whole edifice of reasoning. Your conclusion is unsupportable without committing to a position on this question.
Also, in the context of artificial intelligence research, it’s an open question as to what the border of “Future Humanity” is.
Quite so—but surely this undermines your thesis, rather than supporting it?
Whether the future “matters more than today” is not a question of impersonal fact. Things, as you no doubt know, do not ‘matter’ intransitively; they matter to someone. So the question is, does “the future” (however construed) matter to me more than “today” (likewise, however construed) does? Does “the future” matter to my hypothetical friend Alice more than today does, or to her neighbor Bob? Etc.
And any of these people are fully within their right to answer in the negative.
Eh… We can draw conclusions about the values of individuals based on the ways in which they seem to act in the limit of additional time and information, on the origins of humanity (selection for inclusive genetic fitness), on thought experiments constructed to elicit revealed beliefs, etc.
Other agents are allowed to claim that they have more insight than you into certain preferences of yours- they often do. Consider the special cases in which you can prove that the stated preferences of some humans allow you to siphon infinite money off of them. Also consider the special cases in which someone says something completely incoherent- “I prefer two things to one another under all conditions”, or some such. We know that they’re wrong. They can refuse to admit they’re wrong, but they can’t properly do that without giving us all of their money or in some sense putting their fingers in their ears.
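As a minimal sketch of the money-pump point (the goods, the fee, and the preference set below are all invented for illustration):

```python
# An agent with intransitive stated preferences (apple > banana > cherry > apple)
# can be cycled through trades forever, paying a small fee each time.

prefers = {("apple", "banana"), ("banana", "cherry"), ("cherry", "apple")}

def will_trade(holding: str, offered: str) -> bool:
    """The agent swaps (and pays the fee) whenever it claims to prefer the offered good."""
    return (offered, holding) in prefers

def run_money_pump(rounds: int, fee: float = 0.01) -> float:
    """Offer the goods in a cycle; an intransitive agent never stops paying."""
    cycle = ["cherry", "banana", "apple"]   # each offer is preferred to the current holding
    holding, extracted = "apple", 0.0
    for i in range(rounds):
        offer = cycle[i % 3]
        if will_trade(holding, offer):
            holding = offer
            extracted += fee
    return extracted

print(run_money_pump(3_000))   # ~30 units extracted; crank up `rounds` for "infinite" money
```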
These special cases are just special cases. In general, values are highly entangled with concrete physical information. You may say that you want to put your hand on that (unbeknownst to you) searing plate, but we can also know that you’re wrong. You don’t want to do that, and you’d agree if only you knew that the plate was searing hot.
They are fully within their right to answer in the negative, but they’re not allowed to decide that they’re correct. There is a correct answer to what they value, and they don’t necessarily have perfect insight into that.
Note that you’re making a non-trivial claim here. In past discussions, on Less Wrong and in adjacent spaces, it has been pointed out that our ability to predict future consequences of our actions drops off rapidly as our time horizon recedes into the distance. It is not obvious to me that I am in any particularly favorable position to affect the course of the distant future in any but the most general ways (such as contributing to, or helping to avert, human extinction—and even there, many actions I might feasibly take could plausibly affect the likelihood of my desired outcome in either the one direction or the other).
You don’t need to be able to predict the future with omniscient accuracy to realize that you are in an unusually important position for affecting the future.
If it’s not obvious, here we go: You’re a person of above-average intelligence living in the small period directly before Humanity is expected (by top experts- and with good cause) to develop artificial general intelligence. This technology will allow us to break the key scarcities of civilization:
Allowing vastly more efficient conversion of matter into agency through the fabrication of computer hardware. This process will, given the advent of artificial general intelligence, soon far surpass the efficiency with which we can construct Human agency. Humans take a very long time to make, and you must train each individual Human- you can’t directly copy Human software, and the indirect copying is very, very slow.
Allowing agents with intelligence vastly above that of the most intelligent humans (whose brains must all fit in a container of relatively limited size) in all strategically relevant regards- speed, quality, modularity, I/O speed, multitasking ability, adaptability, transparency, etc.
Allowing us to build agents able to access a much more direct method of recursively improving their own intelligence by buying or fabricating new hardware and directly improving their own code, triggering an extremely exploitable direct feedback loop.
The initial conditions of the first agent(s) we deploy that possess these radical and simultaneously new options will, on account of the overwhelming importance of these limitations on the existing state of affairs, precisely and “solely” determine the future.
This is a pretty popular opinion among the popular rationalist writers- I pass the torch on to them.
Sorry, no. There is a categorical difference between bringing a person into existence and affecting a person’s future life, contingent on them being brought into existence. It of course makes sense to speak of doing the latter sort of thing “for” the person-to-be, but such isn’t the case for the former sort of thing.
I was aware of the difference. The point (Which I directly stated at the end- convenient!) is that “It does make sense to say that you’re doing things ‘for’ people who don’t exist.” If this doesn’t directly address your point, the proper response to make would have been “Ok, I think you misunderstood what I was saying.” I think that I did misunderstand what you were saying, so disregard.
Aside from that, I still think that saying you’re bringing someone into existence “for” them makes sense. I think your saying it doesn’t “make sense” is unfairly dismissive and overly argumentative. If someone said that they weren’t going to have an abortion “for” their baby, or (if you disagree with me about the lines of what constitutes a “person”) that they were stopping some pain-relieving experimental drug that was damaging their fertility “for” their future children, you’d receive all of the information they meant to convey about their motivations. It would definitely make sense. You might disagree with that reasoning, but it’s coherent. They have an emotional connection with their as-of-yet not locally instantiated children.
I personally do happen to disagree with this reasoning for reasons I will explain later- but it does make sense.
To the contrary: your point hinges on this. You may of course discuss or not discuss what you like, but by avoiding this topic, you avoid one of the critical considerations in your whole edifice of reasoning. Your conclusion is unsupportable without committing to a position on this question.
It isn’t, and I just told you that it isn’t. You should have tried to understand why I was saying that before arguing with me- I’m the person who made the comment in the first place, and I just directly told you that you were misinterpreting me.
My point is: “It’s that we should be spending most of our effort on planning for the long term future.” See later for an elaboration.
Quite so—but surely this undermines your thesis, rather than supporting it?
No- I’m not actually arguing for the specific act of ensuring that future humans exist. I think that all humans already exist, perhaps in infinite supply, and I thus see (tentatively) zero value in bringing about future humans in and of itself. My first comment was using a rhetorical flair that was intended to convey my general strategy for planning for the future; I’m more interested in solving the AI alignment problem (and otherwise avoiding human extinction/s-risks) than I am about current politically popular long term planning efforts and the problems that they address, such as climate change and conservation efforts.
I think that we should be interested in manipulating the relative ratios (complicated stuff) of future humans, which means that we should still be interested in “ensuring the existence” (read: manipulating the ratios of different types of) of “Future Humanity”, a nebulous phrase meant to convey the sort of outcome that I want to see to the value achievement dilemma. Personally, I think that the most promising plan for this is engineering an aligned AGI and supporting it throughout its recursive self improvement process.
Your response was kind of sour, so I’m not going to continue this conversation.
I read this comment with interest, and with the intent of responding to your points—it seemed to me that there was much confusion to be resolved here, to the benefit of all. Then I got to your last line.
It is severely rude to post a detailed fisking of an interlocutor’s post/comment, and to then walk away. If you wish to bow out of the discussion, that is your right, but it is both self-indulgent and disrespectful to first get in a last word (much less a last several hundred words).
You were welcome to write an actual response, and I definitely would have read it. I was merely announcing my advance intent not to respond in detail to any following comments, and explaining why in brief, conservative terms. This is seemingly strictly better- it gives you new information which you can use to decide whether or not you want to respond. If I was being intentionally mean, I would have allowed you to write a detailed comment and never responded, potentially wasting your time.
If your idea of rudeness is constructed in this (admittedly inconvenient) way, I apologize.
Sorry—was a reaction to the focus on changing other people’s models, and the implication that there is a set of simple models that if only people were a little more {educated,smart,aware,like-me}, this would all be a non-issue.
Ehn. Nobody really understands anything, we’re just doing the best we can with various models of different complexity. Adam Smith’s pin factory description in the 18th century has only gotten more representative of the actual complexity in the world and the impossibility of fully understanding all the tradeoffs involved in anything. Note also that anytime you frame something as “responsibility of every citizen”, you’re well into the political realm.
You can see the economy as a set of solutions to some problems, but you also need to see it as exacerbation of other problems. Chesterton’s Fence is a good heuristic for tearing down fences, where it’s probably OK to let it stand for awhile while you think about it. It’s a crappy way to decide whether you should get off the tracks before you understand the motivation of the railroad company.
I suspect that if people really understood the cost to future people of the contortions we go through to support this many simultaneous humans in this level of luxury, we’d have to admit that we don’t actually care about them very much. I sympathize with those who are saying “go back to the good old days” in terms of cutting the population back to a sustainable level (1850 was about 1.2B, and it’s not clear even that was sparse/spartan enough to last more than a few millennia).
There’s enough matter in our light cone to support each individual existing human for roughly 10^44 years.
The problem is not “running out of resources”- there are so many resources it will require cosmic engineering for us to use more of them than entropy, even if we multiply our current population by ten billion.
Earth is only one planet- it does not matter how much of earth we use here and now. Our job is to make sure that our light cone ends up being used for what we find valuable. That’s our only job. The finite resources available on earth are almost irrelevant to the long term human project, beyond the extent to which those resources help us accomplish our job- I would burn a trillion pacific oceans worth of oil for a .000000000000000001% absolute increase to our probability of succeeding at our job.
I sympathize with people who are thinking like this, because it shows that they’re at least trying to think about the future. But… Future Humanity doesn’t need the petty resources available on earth any more than we need good flint to make hunting spears with. The only important thing and the best thing we can do for them is to ensure that they will ever exist at all!
It’s entirely possible to burn through the resources on this planet without getting off this planet . That’s a very dicey pinch point
It’s possible, but very improbable. We have vastly more probable concerns (misaligned AGI, etc.) than resource depletion sufficient to cripple the entire human project.
What critical resources is Humanity at serious risk of depleting? Remember that most resources have substitutes- food is food.
Phosphate rock?
https://en.wikipedia.org/wiki/Peak_phosphorus
That’s surprisingly close, but I don’t think that counts. That page explains that the current dynamics behind phosphate recycling are bad as a result of phosphate being cheap- if phosphate was scarce, recycling (and potentially the location of new phosphate reserves, etc.) would become more economical.
The resources required to get off the planet and access other resources are huge .
True, and it’ll be a long time before off-planet habitations are resilient and self-sufficient enough to survive the anger of the 10B people on the planet which can no longer support them in the way they think they deserve. Getting the exponential growth (of permanent off-world settlements) started as soon as possible is the only way to get there, though.
Why do you seem to imply that burning fossil fuels would help at all the odds of the long term human project?
Even ignoring the current deaths due to the large scale desertification that Climate Change is causing, it’s putting our current society at a very real risk of collapse. Food and water supplies are at risk for the medium term, since we are losing hydrical reserves and cultivations are expected to suffer greatly for the abrupt change in temperature and the increased extreme meteorological events.
At the current rate of fishing, all fish species could be practically extinct by 2050, and for the same date the estimates ranging from 100 million to 1 billion climate refugees. Given how badly our societies reacted to numbers of refugees that weren’t even close to that scale, I really don’t want to see what will happen.
Not to say that currently one species out of three of all animals and vegetal is going extinct and could be gone for the same date. That is a scale of damage to the ecosystem that could easily feedback into who knows what.
We are causing the sixth mass extinction on our planet. I feel pretty confident some humans will survive and that technological progress could continue past that, eventually.
But I feel a lot more confident about humanity reaching the stars in an universe where we manage to not make scorched earth of our first planet before we have a way to do that, and I personally don’t want to see my personal odds of survival diminishing because I’ll have to deal with riots, food shortages, totalitarian fascist governments or… who know? A dying ecosystem is the kind of thing that could rush us into botching nanotechnology while looking for a way to fix our mess.
Lastly, I really don’t see how switching out of fossils would in any way harm our chances to develop as a species.
Every economical estimate I saw said that the costs would be a lot less than the economic damage from climate change alone, many estimates agree that it would actually improve the economy, and nobody is saying “let’s toss industry and technology out of the window, back to the caves everyone!”.
What is your source for this? On Wikipedia, there is a distinct lack of references to good quality data, and in the anecdotal evidence (e.g. shrinking of lakes in the Sahel) seem to have other contributing factors than climate change, like increasing irrigation. Elsewhere I find that “[t]he Sahel region is experiencing a phase of population growth unprecedented in any other part of the world”.
(https://ideas4development.org/en/population-growth-sahel-challenge-generation/)
What is your source for this? While some fisheries are poorly managed, many are in much better shape. There is a lack of knowledge about the status of many stocks, and we can’t model ecosystems very well, but the uncertainty doesn’t mean you can conclude with the most outrageous claim.
Again, who is estimating this, and how? Currently we have 70 million refugees from wars and oppression, and probably more fleeing towards better economic prospects (although we don’t usually cause them refugees). I propose we spend our resources towards fixing this, rather than towards some hypothetical refugee situation some time in the future. A side benefit is that rich, peaceful nations tend to be the ones that manage their fisheries well, protect biodiversity and their inhabitants don’t become refugees even when the occasional natural disaster strikes.
My master thesis treated the impacts of climate change, here are the sources I used for these claims:
Desertification: https://spiral.imperial.ac.uk/bitstream/10044/1/76618/2/SRCCL-Full-Report-Compiled-191128.pdf If you’d rather know precisely where to look for my claims, since it’s a 874 pages long report, I’d suggest the Summary for Policy Maker part, from page 5 to 9, Chapter 1.2.1, from page 88 to page 91, and chapter 5 executive summary, pages 439-440.
The report also states that the way land and water are used for agriculture is part of the problem, it interacts with climate change making both issues worse.
https://www.ipcc.ch/site/assets/uploads/sites/3/2019/11/03_SROCC_SPM_FINAL.pdf For this I suggest reading the Summary for Policy Makers B, from page 17 to 28. B7 is the most relevant point for desertification, B8 for fish losing most of it’s biomass and putting at risk food security.
These two:
https://iopscience.iop.org/article/10.1088/1748-9326/4/2/024007/pdf
https://www.researchgate.net/publication/337888219_Impacts_of_ocean_deoxygenation_on_fisheries_In_%27Laffoley_D_Baxter_JM_eds_2019_Ocean_deoxygenation_Everyone%27s_problem_-_Causes_impacts_consequences_and_solutions_Gland_Switzerland_IUCN_xxii562pp
again cover the subject of an abrupt drop in marine biomass and it’s consequences for food security. The second one specifies how over fishing and climate change are again piling up as problems, exacerbating each other consequences.
My specific claim that overfishing would extinguish all fish species by 2050 turned out to not be in my thesis, I mixed up what I heard in a documentary with the statements I was able to prove about risks for a collapse of marine life and risks for food security during my work.
This is referred as the study which that statement I heard was based on, but it states it’s a possibility and there doesn’t seem to be much recent research backing this outcome, so I’d update my expectations to the possible outcomes treated in the studies above, which aren’t at all less worrying.
Edit: I forgot to actually paste the link: https://www3.epa.gov/region1/npdes/schillerstation/pdfs/AR-024.pdf
For refugees:
https://xpda.com/junkmail/junk219/environmental%20refugees%2014851.pdf this indicates 200 million climate refugees by 2050 as the most common estimate.
https://publications.iom.int/system/files/pdf/mecc_outlook.pdf this was the resource I used from 100 million to 1 billion.
My statement was from memory and it was incorrect. The most relevant pages for my statement seem to be 38 and 39. IOM states that, in the current literature, predictions of refugees number vary from 25 million to 1 billion because there are a lot of variables.
However at page 39 says that in the previous 5 years over 165 million people were newly displaced, and that climate and weather disasters were involved in 90% of cases, so my guess is that we can throw the most optimistic estimates out of the window. Most of those cases are related to temporary displacement (page 40), but in the same page it’s stated that climate change is expected to shift climate related displacement toward permanent ones.
For current refugees vs future refugees, usually it’s a lot more cost-effective to prevent a problem than to fix it once it happens.
I strongly feel we should fix the current problem as well, and that the two approaches shouldn’t have to compete for the resources we’ll allocate. Currently this kind of problems are seeing only the scraps of what we could allocate, and fixing the future problems spares us economic damages that would be way higher even in the short term alone.
Also, many of the wars currently causing refugees seem to be partly caused by climate change consequences.
https://eprints.lancs.ac.uk/id/eprint/134710/1/Mach_2019_accepted_manuscript.pdf
(here is the published version of the same article, I’m not sure if you have access to this resource though, I have it through my university https://www.nature.com/articles/s41586-019-1300-6 )
https://archive.defense.gov/pubs/150724-congressional-report-on-national-implications-of-climate-change.pdf
Both these studies indicate climate change as one of the causes of recent war, and as likely cause for more armed conflicts in the future.
On a side note: I do have to remember to always post the sources of my claims in advance, so at least I can make less of them. This wasn’t how I planned to spend a good part of my morning, but it would have been really incorrect to not post the sources for claims I already made.
I don’t imply that. For clarification:
The oil example isn’t meant to be any reflection of my affinity for fossil fuels.
My point that “Super long term conservation of resources” isn’t a concern. If there are near term non “conservation of resources” reasons why doing something is bad, I’m open to those concerns- we don’t need to worry about ensuring that humans 100 years from now have access to fuel sources.
For the record, I think nuclear and solar seem to clearly be better energy sources than fossil fuels for most applications. Especially nuclear.
I’m also not fighting defense for climate change activists- I don’t care about how many species die out, unless those species are useful (short term- next 50 years, 100 years max?) to us. If you want to make sure future humanity has access to Tropical Tree Frog #952, and you’re concerned about them going extinct, go grab some genetic samples and preserve them. If the species makes many humans very happy, provides us valuable resources, etc., fine.
I’m open to the notion that regulating our fish intake is the responsible move- it seems like a pretty easy sell. It keeps our fishing equipment, boats, and fishermen useful. I’m taking this action because it’s better for humanity, not because it’s better for the fish or better for the Earth.
The Strategy is not to excessively use resources and destroy the environment just because we can, it’s to actively and directly use our resources to accomplish our goals, which I have doubts strongly aligns with preserving the environment.
Let’s list a few ways in which our conservation efforts are bad:
Long term (100+ years) storage of nuclear waste.
Protecting species which aren’t really useful to Humanity.
Planning with the idea that we will be indefinitely (Or, for more than 100 years) living in the current technological paradigm, i.e. without artificial general intelligence.
And in which they’re valid:
Being careful with our harvesting of easily depletable species which we’ll be better off having alive for the next 100 years.
Being careful with our effect on global temperatures and water levels, in order to avoid the costs of relocating large numbers of humans.
Being careful with our management of important freshwater reserves, at least until we develop sufficiently economical desalinization plants.
The greatest risks to your survival are, by far, (unless you’re a very exceptional person) natural causes and misaligned artificial general intelligence. You shouldn’t significantly concern yourself with dealing with weird risk factors such as riots or food shortages unless you’ve already found that you can’t do anything about natural causes and misaligned artificial general intelligence. Spoiler: It seems you can do something about these risk factors.
Many people are saying things I consider dangerously close to “Let’s toss industry and technology out of the window!”. Dagon suggested that our current resource expenditure was reckless, and that we should substantially downgrade our resource expenditures. I consider this to be a seriously questionable perspective on the problem.
I’m not arguing against preserving the environment if it would boost the economy for at least the next 100 years, keeping in mind opportunity cost. I want to improve humanity’s generalized power to pursue its goals- I’m not attached to any particular short guiding principle for doing this, such as “Protect the Earth!” or “More oil!”. I don’t have Mad Oil Baron Syndrome.
Understood, I apologise for misunderstanding your position on fossils fuels. I feel there was a specific attempt from my side to interpret it with that meaning, even if the example used didn’t necessarily implied it was something you endorse, and that it was due to a negative gut reaction I had while reading what you wrote.
We seem to agree on the general principles that humanity technological level will not stay the same for the next hundred years, and that some level of the changes we are producing on the environment are to be avoided to improve mankind future’s condition.
I do feel that allowing the actions of humanity to destroy every part of the environment that hasn’t been proved useful is an engagement in an extremely reckless form of optimism, though.
It’s certainly part of the attitude that got us to the point where being careful with our effect on current temperature levels and avoiding to loose most of our water resources has become a pretty difficult global challenge.
From what I read on industrial regulations so far, in most nations pollutants functionally have to be proven harmful before it can be considered forbidding their release in the environment, and I’m 100% sure it’s at least the current approach in the country most users from this site are.
All in all, our species is nowhere near the point to be immune from the feedbacks our environment can throw at us. By our actions, one third of current animal and vegetable species are currently going extinct.
That is one huge Chesterton Fence we’re tearing down. We simply don’t know in how many way such a change on the system we’re living in can go wrong for us.
I’d agree that the greatest “currently existing risks to my survival” are natural causes. I intend this category as “risks that are actively killing people who are living in similar conditions to my own now”.
However, if we talk about the main “future risks to my survival”, as in “risks that currently are killing a low number of people similar to me, but that could kill a lot more in future years in which I’ll be alive” then I feel that, even if AI mismanagement takes first place, climate change takes the second, and that it augments considerably the chances of the first.
While riots and food shortages are indeed examples I choose by pure gut level evaluation of “scariness” and are too specific to me to put my money on if I should bet on the causes of my death, I don’t feel at all optimistic about the way our society will react to climate change.
Migratory fluxes and violent conflicts a lot smaller than what we’ll certainly see were enough to send the European Union dangerously close to falling apart in a bunch of nationalistic states. Change enough our environment, and wide-scale wars and a new wave of totalitarian governments stop to be an unlikely reality, since in times of fear and unrest people are more likely to regard the principles behind them as positive. All these factors seem to reinforce each other as far as I know.
Even by assuming the situation won’t go as bad as total warfare and rampant totalitarianism, I would bet on a significant degeneration in the political scenario, moving away from international cooperation and toward nationalism and short term interests considerations only, and I don’t really see any reason that a bunch of such states, that are fighting for resources, facing wide scale crisis, scared of what each other will do and have lost most of their ability to cooperate with each other are less likely to botch AI horribly and kill us all.
About the suggestions for lowering our resource consumptions since it’s currently too high: it’s unarguable that we are burning through a ridiculous amount of resources that are producing practically no improvement in our chances of survival or even marginally improving the quality of our life. We could easily keep the same amount of comforts and life expectancy while consuming a lot less resources.
Our economical system has simply not enough incentives for efficiency, shrinking our resources consumption without sacrificing quality of life and life expectancy is perfectly doable and it’s imperative to augment our chances of long term survival.
Lastly, given the current trend of society, statements close to “keeping in check mankind consumption of resources and it’s impact on the environment it’s not a priority” are a lot more dangerous than statements close to “let’s toss industry out of the windows and go back to the caves”. Clearly going too far in either of those directions would hurt, but going too much in the first direction is a possibility a lot more likely at the present moment, while I don’t see any real chance for the second kind of statements to change society toward a pre-technological or pre-industrial site.
The de-growth movement (which, if I remember correctly, it’s based on the proven fact that economic growth, after a certain threshold, offers basically no improvement to quality of life, and that first world has long passed that threshold, so we should focus on things that aren’t economic growth), also doesn’t strike me as a threat to my quality of life or my long term survival comparable to underestimating the impact of environmental damages or of over-consumption of resources before the point when mankind hits a positive singularity.
I also don’t see any real chance of this site moving toward an anti-technology or anti-science trend. Those trends do seem dangerous and plausible in the general populace, but given the risks I’ve stated above, I think they should be opposed by informing people about the benefits of technology and science, rather than about the benefits of the industrial system.
Indeed, there is an active “degrowth” movement. cf. Giorgos Kallis: https://greattransition.org/publication/the-degrowth-alternative
It’s entirely possible to burn through the resources on this planet without getting off this planet. That’s a very dicey pinch point.
Why, exactly, is this our only job (or, indeed, our job at all)? Surely it’s possible to value present-day things, people, etc.?
Seeing as how future humanity (with capital letters or otherwise) does not, in fact, currently exist, it makes very little sense to say that ensuring their existence is something that we would be doing “for” them.
The space that you can affect is your light cone, and your goals can be “simplified” to “applying your values over the space that you can affect”; therefore your goal is to apply your values over your light cone. It’s your “only job”.
There is, of course, a specific notion that I intended to evoke by using this rephrasing: the idea that your values apply strongly over humanity’s vast future. It’s possible to value present-day things, people, and so on, and I do. However… whenever I hear that fact offered in response to my suggestion that the future is large and matters more than today, I interpret it as playing defense for the speaker’s preexisting strategies. Everyone was aware of it before the person said it, and it doesn’t address the central point. It’s…
“There are 4 * 10^20 stars out there. You’re in a prime position to make sure they’re used for something valuable to you- as in, you’re currently experiencing the top 10^-30% most influential hours of human experience because of your early position in human history, etc. Are you going to change your plans and leverage your unique position?”
“No, I think I’ll spend most of my effort doing the things I was already going to do.”
Really- Is that your final answer? What position would you need to be in to decide that planning for the long term future is worth most of your effort?
“Seeing as how a couple’s baby does not yet exist, it makes very little sense to say that saving money for their clothes and crib is something that they would be doing ‘for’ them.” No, wait, that’s ridiculous- It does make sense to say that you’re doing things “for” people who don’t exist.
We could rephrase these things in terms of doing them for yourself- “you’re only saving for their clothes and crib because you want them to get what they want”. But, what are we gaining from this rephrasing? The thing you want is for them to get what they want/need. It seems fair to say that you’re doing it for them.
There’s some more complicated discussion to be had on the specific merits of making sure that people exist, but I’m not (currently) interested in having that discussion. My point isn’t really related to that- it’s that we should be spending most of our effort on planning for the long term future.
Also, in the context of artificial intelligence research, it’s an open question as to what the border of “Future Humanity” is. “Existing humans” and “Future Humanity” probably have significant overlap, or so the people at MIRI, DeepMind, OpenAI, FHI, etc. tend to argue- and I agree.
Whether the future “matters more than today” is not a question of impersonal fact. Things, as you no doubt know, do not ‘matter’ intransitively; they matter to someone. So the question is, does “the future” (however construed) matter to me more than “today” (likewise, however construed) does? Does “the future” matter to my hypothetical friend Alice more than today does, or to her neighbor Bob? Etc.
And any of these people are fully within their right to answer in the negative.
Note that you’re making a non-trivial claim here. In past discussions, on Less Wrong and in adjacent spaces, it has been pointed out that our ability to predict future consequences of our actions drops off rapidly as our time horizon recedes into the distance. It is not obvious to me that I am in any particularly favorable position to affect the course of the distant future in any but the most general ways (such as contributing to, or helping to avert, human extinction—and even there, many actions I might feasibly take could plausibly affect the likelihood of my desired outcome in either the one direction or the other).
I would need to (a) have different values than those I currently have, and (b) gain (implausibly, given my current understanding of the world) the ability to predict the future consequences of my actions with an accuracy vastly greater than that which is currently possible (for me or for anyone else).
Sorry, no. There is a categorical difference between bringing a person into existence and affecting a person’s future life, contingent on them being brought into existence. It of course makes sense to speak of doing the latter sort of thing “for” the person-to-be, but such isn’t the case for the former sort of thing.
To the contrary: your point hinges on this. You may of course discuss or not discuss what you like, but by avoiding this topic, you avoid one of the critical considerations in your whole edifice of reasoning. Your conclusion is unsupportable without committing to a position on this question.
Quite so—but surely this undermines your thesis, rather than supporting it?
Eh… We can draw conclusions about an individual’s values from the ways in which they would act in the limit of additional time and information, from the origins of humanity (selection for inclusive genetic fitness), from thought experiments constructed to elicit revealed beliefs, and so on.
Other agents are allowed to claim that they have more insight than you into certain preferences of yours; they often do. Consider the special cases in which you can prove that the stated preferences of some humans allow you to siphon infinite money off of them (a concrete sketch of such a “money pump” follows below). Also consider the special cases in which someone says something completely incoherent, such as “I prefer each of two things to the other under all conditions”. We know that they’re wrong. They can refuse to admit it, but they can’t properly do so without giving us all of their money or, in some sense, putting their fingers in their ears.
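To make the money-pump idea concrete, here is a minimal, hypothetical sketch (the item names, fee, and starting balance are invented for illustration): an agent with cyclic preferences A ≻ B ≻ C ≻ A will pay a small fee for each “upgrade”, so a trader can cycle it around forever and drain its money.

```python
# Hypothetical "money pump" sketch: an agent with cyclic preferences
# A > B > C > A pays a small fee for every trade it prefers, so a trader
# can cycle it A -> C -> B -> A indefinitely, draining its money.

# (first, second) means the agent strictly prefers `first` to `second`.
CYCLIC_PREFERENCES = {("A", "B"), ("B", "C"), ("C", "A")}

def accepts_trade(held, offered):
    """The agent accepts a trade (plus a small fee) iff it prefers the offer."""
    return (offered, held) in CYCLIC_PREFERENCES

def run_money_pump(rounds=9, fee=1.0, starting_money=100.0):
    items = ["A", "B", "C"]
    held, money = "A", starting_money
    for _ in range(rounds):
        # Offer whichever item the agent currently prefers to the one it holds.
        offer = next(i for i in items if accepts_trade(held, i))
        held = offer
        money -= fee  # the agent pays for each "improvement"
    return held, money

if __name__ == "__main__":
    held, money = run_money_pump()
    # After 9 trades the agent is back to holding "A", but 9 units poorer.
    print(held, money)  # -> A 91.0
```

The point is only that cyclic stated preferences are exploitable in principle, which is what licenses saying that the person is wrong about their own values.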
These special cases are just special cases. In general, values are highly entangled with concrete physical information. You may say that you want to put your hand on that (unbeknownst to you) searing plate, but we can also know that you’re wrong. You don’t want to do that, and you’d agree if only you knew that the plate was searing hot.
They are fully within their right to answer in the negative, but they’re not allowed to decide that they’re correct. There is a correct answer to what they value, and they don’t necessarily have perfect insight into that.
You don’t need to be able to predict the future with omniscient accuracy to realize that you are in an unusually important position for affecting the future.
If it’s not obvious, here we go: you’re a person of above-average intelligence living in the brief period directly before Humanity is expected (by top experts, and with good cause) to develop artificial general intelligence. This technology will allow us to break the key scarcities of civilization:
Allowing vastly more efficient conversion of matter into agency through the fabrication of computer hardware. This process will, given the advent of artificial general intelligence, soon far surpass the efficiency with which we can construct Human agency. Humans take a very long time to make, and you must train each individual Human- you can’t directly copy Human software, and the indirect copying is very, very slow.
Allowing agents with intelligence vastly above that of the most intelligent humans (whose brains must all fit in a container of relatively limited size) in all strategically relevant regards- speed, quality, modularity, I/O speed, multitasking ability, adaptability, transparency, etc.
Allowing us to build agents able to access a much more direct method of recursively improving their own intelligence by buying or fabricating new hardware and directly improving their own code, triggering an extremely exploitable direct feedback loop.
The initial conditions of the first agent(s) we deploy that possess these radical and simultaneously new options will, on account of the overwhelming importance of these limitations on the existing state of affairs, precisely and “solely” determine the future.
This is a pretty popular opinion among the popular rationalist writers- I pass the torch on to them.
I was aware of the difference. The point (which I directly stated at the end, conveniently!) is that “It does make sense to say that you’re doing things ‘for’ people who don’t exist.” If this didn’t directly address your point, the proper response would have been “Ok, I think you misunderstood what I was saying.” I think that I did misunderstand what you were saying, so disregard.
Aside from that, I still think that saying you’re bringing someone into existence “for” them makes sense. I think your saying that it doesn’t “make sense” is unfairly dismissive and overly argumentative. If someone said that they weren’t going to have an abortion “for” their baby, or (if you disagree with me about where the line of “personhood” lies) that they were stopping a pain-relieving experimental drug that was damaging their fertility “for” their future children, you’d receive all of the information they meant to convey about their motivations. It would definitely make sense. You might disagree with that reasoning, but it’s coherent. They have an emotional connection with their as-yet-not-locally-instantiated children.
I personally do happen to disagree with this reasoning for reasons I will explain later- but it does make sense.
It isn’t, and I just told you that it isn’t. You should have tried to understand why I was saying that before arguing with me- I’m the person who made the comment in the first place, and I just directly told you that you were misinterpreting me.
My point is the one I stated above: “we should be spending most of our effort on planning for the long term future.” See below for an elaboration.
No; I’m not actually arguing for the specific act of ensuring that future humans exist. I think that all humans already exist, perhaps in infinite supply, and I thus see (tentatively) zero value in bringing about future humans in and of itself. My first comment used a rhetorical flourish intended to convey my general strategy for planning for the future; I’m more interested in solving the AI alignment problem (and otherwise avoiding human extinction and s-risks) than I am in currently politically popular long-term planning efforts and the problems they address, such as climate change and conservation efforts.
I think that we should be interested in manipulating the relative ratios (complicated stuff) of future humans, which means that we should still be interested in “ensuring the existence of” (read: manipulating the ratios of different types of) “Future Humanity”, a nebulous phrase meant to convey the sort of outcome I want to see to the value achievement dilemma. Personally, I think the most promising plan for this is engineering an aligned AGI and supporting it throughout its recursive self-improvement process.
Your response was kind of sour, so I’m not going to continue this conversation.
I read this comment with interest, and with the intent of responding to your points—it seemed to me that there was much confusion to be resolved here, to the benefit of all. Then I got to your last line.
It is severely rude to post a detailed fisking of an interlocutor’s post/comment, and to then walk away. If you wish to bow out of the discussion, that is your right, but it is both self-indulgent and disrespectful to first get in a last word (much less a last several hundred words).
Strongly downvoted.
You were welcome to write an actual response, and I definitely would have read it. I was merely announcing in advance my intent not to respond in detail to any following comments, and explaining why in brief, conservative terms. This seems strictly better: it gives you new information which you can use to decide whether or not you want to respond. If I were being intentionally mean, I would have let you write a detailed comment and never responded, potentially wasting your time.
If your idea of rudeness is constructed in this (admittedly inconvenient) way, I apologize.
?
Sorry, that was a reaction to the focus on changing other people’s models, and the implication that there is a set of simple models such that, if only people were a little more {educated, smart, aware, like-me}, this would all be a non-issue.