Humanitarian Phase Transition needed before Technological Singularity
TL;DR: The author tries to explain (1) what a Humanitarian Phase Transition (HPT) is, (2) why it is needed, and (3) why no additional technological breakthroughs are needed for HPT, not even AGI. The author does not follow standard LessWrong terminology (e.g. ‘alignment’).
(1) In ancient times, all you needed to become a good member of your society was to learn the language of your tribe and a few intellectually simple skills. Thousands of years later, you also needed to read and write, and to master arithmetic. Hundreds of years after that, you could choose whether your society would stay relatively small (family, town, province...) or become closer to worldwide; in the latter case you had to master an international language and a number of relevant skills, including long-distance communication. These might look like giant leaps, but from the vantage point of the approaching Phase Transition, driven by the Explosive Growth (EG) of knowledge about humans, their societies, and humanity overall, those three worlds may look barely distinguishable.
Before EG, whether or not you agree with Socrates’ “all I know is that I know nothing”, you hold less than 10% of the post-EG knowledge about humans, societies, and humanity: less than 10% for certain, and most likely less than 1% of what you could learn, at your own place and time, with post-EG tools and a longer life.
Also, in pre-HPT worlds, by the time you have learned more than 50% of what you could learn at your place and time, you are most often already 50+ years old, and your lifespan is most likely 60–70 years, at most 80–90 in very rare cases. In post-HPT worlds, you need only 10–15 years for the same, and your lifespan is 100+ years.
(2) HPT is clearly needed before the Technological Singularity (TS), because far too many humans and societies are currently unhappy, and too many barely understand where they are heading and what the consequences may be. The TS cluster contains many dangerous technologies, and even the technologies we already have can easily kill humanity, perhaps the entire planet. Before HPT, not only individual humans but entire societies might wish to kill other societies. We also see that happier humans are less sure that AGI will sooner or later kill all humans; people who respect and trust their societies are not so sure about that. AGI is seen as the nearest breakthrough technology in the TS cluster, the one that will supposedly make TS inevitable. You can find many relevant discussions of the entire topic (2), including many that will look like better discussions: more insightful and/or more convincing.
(3) Will AGI be very helpful for HPT? Sure it will. It can provide humanity with a lot of knowledge about humans, societies, and humanity overall. Some of that knowledge will be easy to understand and use, for example, why per-capita coffee consumption is highest in Finland, Norway, and Iceland (Sweden is in 6th place; in the Western Hemisphere it’s Canada). But many truths will be hard to accept and use. For example, for some societies AGI may reveal that the stronger the judgement-and-punishment system, the higher the crime and imprisonment rates, and the lower the average trust of individuals in social institutions; and that the entire Ministry of Justice contributes roughly as much to social justice as the infamous Ministry of Peace contributes to establishing peace as soon as possible and by all peaceful means, or the Ministry of Truth (both from George Orwell’s Nineteen Eighty-Four) to revealing and promoting objective facts. Perhaps humans should start preparing for HPT and for loads of inconvenient truths.
In addition to obtaining knowledge, AGI can teach itself and humans how to improve the process of obtaining more knowledge about humanity, and how to use the new knowledge to improve societies and the lives of individuals.
The next important question is: do we really need AGI for all that? That is not a trivial question; maybe everything we need for EG and HPT is already here, including Large AI (LAI). To see why, consider: how would you prove that the first paragraphs of (3) were written by a human rather than by an LAI (not necessarily an LLM), possibly with a few smart prompts from a human?
If you have more thoughts on (3), please contribute.
You probably also needed lots of social skills.
The optimists are more optimistic, sure. Is this somehow related to the actual probability of AGI killing everyone?
And yet there is no definition of it in the post, not that I can find.
Mod note: the LW team is currently experimenting with a stricter moderation policy.
I approved this post, but was conflicted about it, and some other moderators have said they wouldn’t have approved it. The LW team mostly agrees that the bar for AI posts needs to be significantly higher now, though with slightly different reasons. One of my personal reasons is that there are so many AI posts that the userbase can’t possibly engage with them all; it seems better to focus attention on posts that seem more likely to be useful.
My current best guess for a bar is: “is the post novel, succinct/clear, and does it address existing arguments that are relevant to it?” An additional restriction one might add is “at least someone on the LW team thinks there’s a decent chance the post is useful”. I’m kind of on the fence about this post being useful, but I put at least some odds on this frame being helpful.
Other mods can chime in with their thoughts if they want.
Agreed, except that I don’t think we have enough time to do them separately: HPT without AI seems incredibly difficult to me. I’ve spent some time thinking about it, and I’m enthusiastic about https://microsolidarity.cc/ and https://anarchy.works/, but they rely on establishing a preference-fulfillment network-of-caring strong enough that mutual protection and general mutual aid become natural things to do, which many large aggregate agents (e.g., states) seem quite hesitant to allow due to power-balance instability: if states were too humanitarian and, say, considered becoming the world’s EMT instead of the world’s police, they’d put their power at risk. In general, establishing a dense network of coprotective caring relies on a high enough ratio of other-preference-fulfillment preference in individuals that protection percolates through the social graph, which seems necessary to me for reliable reputational tit-for-tat resistance to defectbot behavior (a toy percolation sketch follows below). Because this is so hard to establish in the current preference network, it seems plausible to me that making it easier with tools would make an overwhelming difference (e.g., if power sources were dramatically more available), which in turn seems to rely on AI capable of solving death, disease, and relative scarcity: AI capabilities so strong that the AI needs to be aligned enough to not destroy us in order to work. But this is all idiosyncratic to my views, and perhaps I’ve missed something important; e.g., perhaps I got the game theory of distributed systems wrong and it’s more tractable than it seems to me.
(I kinda dumped a bunch of relevant keywords into this paragraph; it might help to ask an AI to extract the key ones and then look them up. Sorry for the jargon dump; most words besides “coprotection” should be standard terms of art or combinations of them.)
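Here is that toy sketch: a minimal simulation, assuming an Erdős–Rényi-style random graph as a crude stand-in for a real social network. Each individual is independently “caring” with probability `p_caring`, and protection is assumed to flow only along edges where both endpoints are caring; `largest_caring_component` and its parameters are illustrative names for this sketch, not an established model.

```python
import random
from collections import deque

def largest_caring_component(n, avg_degree, p_caring, seed=0):
    """Fraction of all n individuals inside the largest connected
    cluster of 'caring' nodes, in a random (Erdos-Renyi-style) social
    graph -- a toy proxy for 'percolation of protection'."""
    rng = random.Random(seed)
    edge_prob = avg_degree / (n - 1)  # yields the desired mean degree
    caring = [rng.random() < p_caring for _ in range(n)]
    # Keep only edges whose *both* endpoints are caring individuals:
    # in this toy model, protection flows only through mutual-caring links.
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if caring[i] and caring[j] and rng.random() < edge_prob:
                adj[i].append(j)
                adj[j].append(i)
    # BFS over caring nodes to find the largest coprotective cluster.
    seen = [False] * n
    best = 0
    for start in range(n):
        if caring[start] and not seen[start]:
            seen[start] = True
            size, queue = 1, deque([start])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if not seen[v]:
                        seen[v] = True
                        size += 1
                        queue.append(v)
            best = max(best, size)
    return best / n

if __name__ == "__main__":
    # Sweep the 'caring ratio': a giant cluster appears roughly once
    # avg_degree * p_caring exceeds 1 (site-percolation threshold).
    for p in (0.05, 0.1, 0.2, 0.3, 0.5):
        frac = largest_caring_component(n=2000, avg_degree=8, p_caring=p)
        print(f"caring ratio {p:.2f} -> largest cluster = {frac:.2f} of network")
```

The point of the sweep is the threshold behavior: with average degree c, a giant coprotective cluster appears roughly once c × p_caring exceeds 1, and below that ratio protection stays confined to small islands, which is the sense in which the ratio has to be “high enough” for percolation.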
I’m curious what downvoters would say if they also replied.