Not sure whether this belongs here or not, but there are plenty of fiction posts here. This is sort of halfway between a story and a worldbuilding document, based on many ideas I’ve learned from here and from adjacent spaces. Hopefully it will be interesting or useful to somebody.
The following is a series of excerpts from the textbooks of various required courses for baseline citizens wishing to eventually Ascend. They are mainly taken from introduction sections, which customarily summarize at a high level the material which the rest of the course goes more in-depth on. They are offered here as a quick summary of our beliefs and culture, to anyone from elsewhere in the vast Multiverse, even and especially in places outside of the Overseer’s control, who wishes to understand.
History of Computation
What is computation? It is a question which is at once easy and difficult to answer. Easy, because there are plenty of equivalent mathematical definitions which are easy to understand at a basic level, and which all boil down informally to the same thing. Alan Turing published the first definition which would today be considered a Model of Computation back in 1936 PHW with his Turing Machine, and in the centuries (and untold googolplexes of subjective years) since then, all practically usable Models of Computation have been equivalent to his or weaker. Computation is just a kind of process which can find the output data of some kind of function or program, given the input data. All you need to be able to compute any function is somewhere to store a list of instructions, known as a “program”; some instructions which can store and retrieve data from somewhere, known as “memory”; and some instructions which can check spots in memory and choose which instruction to execute next based on what they find, known as “control flow”. A computer language which has all of these elements, if unrestricted (i.e. infinite memory, unrestricted control flow), is said to be “Turing-Complete”, because it is as powerful as a Turing Machine and is therefore capable of computing every function that any known Model of Computation can.
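As a concrete illustration, here is a minimal sketch of those three ingredients in a Pre-Dawn programming language (Python). The instruction set, program, and memory layout are invented for this example, not taken from any historical machine.

```python
# A minimal sketch of the three ingredients above: a stored "program",
# addressable "memory", and "control flow" that inspects memory to pick
# the next instruction. The instruction names here are illustrative.

def run(program, memory, max_steps=10_000):
    """Interpret a tiny instruction set: INC/DEC a memory cell, and JZ (jump if zero)."""
    pc = 0                              # program counter: which instruction runs next
    for _ in range(max_steps):
        if pc >= len(program):
            break                       # ran past the end: halt
        op, *args = program[pc]
        if op == "INC":                 # memory: write
            memory[args[0]] += 1
            pc += 1
        elif op == "DEC":
            memory[args[0]] = max(0, memory[args[0]] - 1)
            pc += 1
        elif op == "JZ":                # control flow: inspect memory, choose next instruction
            cell, target = args
            pc = target if memory[cell] == 0 else pc + 1
    return memory

# Example: add cell 1 into cell 0 by looping until cell 1 is empty.
prog = [
    ("JZ", 1, 4),   # if cell 1 is 0, jump past the loop (halt)
    ("DEC", 1),
    ("INC", 0),
    ("JZ", 2, 0),   # cell 2 stays 0, so this always jumps back to the start
]
print(run(prog, {0: 3, 1: 4, 2: 0}))    # {0: 7, 1: 0, 2: 0}
```

Counter machines of this general kind, given unboundedly large cells, are among the many formalisms equivalent in power to the Turing Machine.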
It is, however, quite difficult in another sense to pin down what exactly we think of as “computation”, particularly when referring to it in an informal, rather than mathematical sense. For example, the mathematical definition of computation makes no reference to what sorts of computations are useful or interesting: a randomly-generated snippet of code that does nothing useful is a computation just the same as the most ingenious algorithm ever devised. Furthermore, before the Dawn of the Primitive Recursive Era, it was certainly not so obvious which phenomena could or should be thought of as computation.
Today, it is a common refrain that “everything is computation”, that computation describes our entire world, and while current advancements have certainly demonstrated the usefulness of such a view, it was not always so widely believed. In pre-Dawn societies, whether before or after the Hot War, if computation was understood as a concept at all, it was generally viewed as something very rigid, and limited in scope.
Most electronic computer programs in those days were simple things, always designed directly by the human mind, vulnerable to coding mistakes and oversights, extremely limited in the scope of their abilities. When they were unpredictable, it was only in the way that a random number generator is unpredictable, rather than through any seeming “creative spark”. Other phenomena outside of those electronic computer programs, in the “real world”, were usually not considered to be computation. In particularly stark contrast, biological systems were not directly intelligently designed, yet were fault-tolerant and adaptive, and these things were true even more so of the human mind itself.
It is no wonder that such a sharp distinction would be drawn between the physical or “real” world and the computational one. With one world being so much vaster, more complex, and more full of possibility than the other, it made perfect sense to conclude that they were fundamentally separate. Even those who believed the “real world” or the human mind to be a form of computation yielded to practicality: they would always admit that if there was not a fundamental difference, there was at least a practical one. The computation which went on outside the controlled environment of an electronic computer was too complex to manipulate, or speak about precisely, in the same way.
This began to change with the advent of the first programs which could be considered AI, during the mid-2020s to 2030s PHW, starting with the large language models and eventually reaching more general, multimodal OOD(Out of Distribution) intelligence in the final years PHW, along with the large-scale social dynamics simulations which began to be run in the zettascale era. These programs and their behavior could not be fully understood by the humans who had designed them, even after being run. With their extreme complexity, they began to mimic to some degree the properties of the “real world” that had seemed so separate and unattainable. Arguably, the programmers had not actually designed most of the important parts of these systems, leaving that design instead to algorithms whose emergent effects were not fully understood.
Of course, after the Hot War, the societies that eventually reemerged would be far more careful about large-scale computation. Historians in the following centuries had studied the problems in society that eventually led to the outbreak of catastrophic nuclear war, and by the time the necessary infrastructure for large-scale computation was again possible around Year 150, it was widely accepted that the main contributing factor to the war was the destabilizing influence of large-scale AI on society. AI systems were used to exploit the weaknesses of the human mind, to spread disinformation and counterintelligence, to radicalize large groups of people to a given cause, and to divide and destabilize enemy societies. This raised tensions and led to several large conflicts, eventually spiraling into the Hot War. The AI systems of the time were not good enough at cutting through the storm of disinfo to compensate for this effect, and many of the poor decisions made by world leaders in the leadup to the Hot War were directly caused by the failure of intelligence organizations to give an accurate picture of events in such an adversarial information environment.
This was the beginning of the idea that certain types of computations were irresponsible to run, an idea which gained far more importance in the current Primitive Recursive Era. The lessons of the past were learned, and limits on the speed and scale of microchips, and the size of computing clusters, were enforced vigorously during the Second Information Revolution beginning around Year 150 (post Hot War). This certainly slowed technological advancement, but did not stop it, as some forms of advancement don’t require large-scale electronic computation.
By Year 243, the MHEPL (Malaysian High-Energy Physics Lab) had discovered a unified theory accounting for all of the fundamental forces underlying reality, the cheerfully-named WMT (Wobbly Mesh Theory). The scientists did not announce their results immediately, however, due to a troubling but potentially revolutionary implication of this theory: the fundamental structure it described could be exploited to provide arbitrary, constant-time Primitive Recursive computation, and furthermore, the experimental equipment at their facility could be easily refitted to make use of this exploit. Computers making use of this exploit became known as ACs, or Arbitrary Computers.
The scientists immediately recognized what might not be obvious to those not well-versed in computational theory: this discovery was the single most important discovery in all of history. Today, we recognize it as the direct cause of the Dawn of the Primitive Recursive Era. So what does it mean to have arbitrary, constant-time Primitive Recursive computation?
A Primitive Recursive computer language is a method of programming with a single restriction making it weaker than a general, Turing-Complete language: infinite loops are not allowed. Each loop within a program must be given a specified maximum number of iterations, which may either be stated explicitly or computed earlier in the program. The benefit of this restriction is that every program is guaranteed to finish and give a specified result, avoiding the undecidability of the Halting Problem. However, there are some functions, such as the Ackermann Function, which cannot be computed under this restriction.
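The distinction can be made concrete with a small sketch in a Pre-Dawn language (Python); the function names are chosen for this example. The first function uses only a loop whose bound is fixed before the loop starts, in the primitive recursive style; the second is the Ackermann Function mentioned above, whose value (and recursion depth) outgrows every primitive recursive function of its inputs, so no bound can be declared up front.

```python
def factorial_bounded(n):
    """Primitive-recursive style: the loop bound (n) is fixed before the loop begins."""
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

def ackermann(m, n):
    """Not primitive recursive: no primitive recursive bound on its growth exists."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(factorial_bounded(10))   # 3628800
print(ackermann(2, 3))         # 9; naive evaluation already fails for m >= 4 (recursion depth)
```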
This weakness is not very important in practice, because there are functions computable under the restriction which grow extremely fast, and their outputs can in turn be used as the iteration bounds of further loops. With the WMT exploit, all of this computation is done in constant time, meaning that it takes the same (very short) amount of time to complete, no matter how many loops it must go through or how much data it must store. The AC is therefore a computer which is unimaginably powerful by any conceivable Pre-Dawn standard.
To give you an idea of how fast this computational power grows, consider addition, multiplication, and exponentiation. Multiplication is the repeated application of addition, and exponentiation is the repeated application of multiplication. Even exponential functions grow quite quickly: while 3*100=300, 3^100 is over 500 billion billion billion billion billion. So what do you get with the repeated application of exponentiation? Tetration, which we represent like “3^^100”. This is a stack of exponents 100 high. 3^3=27, 3^3^3=3^27 is over 7 trillion, 3^3^3^3 has over 3 trillion digits, and 3^3^3^3^3 has a number of digits which itself has over 3 trillion digits. Extend this to a hundred threes, and you will finally get 3^^100. The scale of this is already unfathomable, far beyond any count of particles or even possible configurations in the known universe before ACs were discovered. But you can extend this further, repeating tetration to get a fifth operation, pentation, then a sixth, a seventh, and so on to the “Nth” operation for any positive integer N. The computational power of an AC is proportional to the Nth operation, where N is proportional to the amount of physical resources at the AC’s disposal.
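The ladder of operations described above (often called the hyperoperations) can be written down directly. The sketch below, in a Pre-Dawn language (Python), is only an illustration; without the WMT exploit, only the tiniest inputs are feasible on a physical computer.

```python
def hyper(a, b, level):
    """a <level> b, where level 1 is addition, 2 multiplication, 3 exponentiation,
    4 tetration, and so on: each level is the repeated application of the one below."""
    if level == 1:
        return a + b
    if level == 2:
        return a * b          # bottom out at multiplication so small cases finish quickly
    if b == 0:
        return 1              # a^0 = 1, a^^0 = 1, and so on for higher levels
    return hyper(a, hyper(a, b - 1, level), level - 1)

print(hyper(3, 100, 2))   # 300
print(hyper(3, 100, 3))   # 3^100, a 48-digit number
print(hyper(3, 3, 4))     # 3^^3 = 3^27 = 7625597484987
# hyper(3, 100, 4) would be 3^^100: utterly beyond any physical computer.
```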
The initial foray into AC computation was fully recorded, and has been extensively studied as the first and most important example of responsible high-level computation. On April 24, Year 244, otherwise known as the Day of the Dawn, the first AC was completed and activated by the scientists of the MHEPL. The first thing they did was simulate an entire universe based on their newfound WMT Theory of Everything. More precisely, in order to maintain determinism, they simulated an entire Many-Worlds-Theory multiverse (a branching timeline with every possible result of quantum random events realized). Upon isolating random Everett timeline branches within this multiverse, they found universes very similar to ours, further confirming the correctness of WMT (their specific Everett branch was isolated in later experiments). With their initial computation budget on par with the Poincaré recurrence time (around 10^10^10^10^10), they were able to simulate all Everett branches, with the exception of those which began using high-computation-cost WMT exploits, which could not be fully simulated. This is only scratching the surface of how absurdly powerful this exploit is. It is the method which allows for our current, post-Dawn existence, and for the ability of humans to Ascend.
After confirming the extreme power of the exploit they had found, the next step was to find a way to use it responsibly. The scientists knew that they, as they were, were not nearly wise enough to properly use the technology. It would be so easy for a bad actor to take over the world, and do far worse things than were ever possible before, that as drastic as it sounded, the only responsible course of action was to immediately take over the world before others could get their hands on the technology. Using the extreme computational power available to them, along with old PHW-era and theoretical machine learning methods for making use of such power, and contemporary methods for extracting some neural data from their brains, they developed a technique to upload their minds into an AC program. They then gave virtual copies of the eleven of them subjective centuries to communicate amongst themselves and consult the combined knowledge of humanity, with the final goal of creating some system, wiser than themselves as individuals, which could responsibly govern a world with access to AC technology.
The earliest iteration of the Overseer was the result of this project. Since then, the Overseer has continually created new, smarter versions of itself and given them more and more power, at a rate roughly comparable to the Ackermann function (that fast-growing function describing the limits of primitive-recursive AC performance). On the same day as the first AC’s activation, it was already powerful enough to allow AC access to the entire world while limiting that access just enough to maintain the responsible use of the technology in all cases. This was the event known as the Dawn, and it took the world by surprise. Since then, most humans have uploaded themselves into ACs to avoid being hopelessly left behind, and the world has become too large, with too many different simulated environments with different conventions, to even keep track of the current date in a standardized way.
It is immensely lucky, for the untold number of people living after the discovery of the ACs, that the scientists in charge of the MHEPL were morally responsible and competently equipped to make use of this technology before others could develop it. Alternate timelines have been explored where this discovery was made under different circumstances, and the results have often been unfathomably catastrophic. Due to the extreme differential in computational power over what is available without AC technology, the inevitable result seems to be someone or something grabbing the highest level of power and keeping it for themselves. In our time, it is the Overseer, which has been trying to mitigate this and safeguard human value within these other timelines; however, there are unfortunately limits on what it can do to influence them (or perhaps fortunately, since they cannot influence us much either). In any case, regardless of what one might think of the restrictions placed on us by the Overseer, it is undoubtedly a far better leader than most timelines get, and we are very lucky to have it.
S-values: The Creation-Discovery Spectrum
In times past, it was often debated whether mathematics was created or discovered by humans. Some would say that it was created, as the axioms and rules for reasoning were thought up by humans. Others said that it was discovered, as the consequences of a given set of axioms and rules are objective, unchangeable, inevitable. In the Primitive Recursive Era, however, it has been shown concretely that the distinction is not objective: on a fundamental level, creating information and discovering information are not different acts.
For example, one might ask a similar question of literature. Is literature created, or discovered? Did Charles Dickens create “Great Expectations”, or did he discover it? Certainly, the obvious answer is that he created it. And this is largely the correct answer to the question, even in the Primitive Recursive Era. However, the idea that “Great Expectations” was created rather than discovered is still not absolute, objective truth: most Ascended humans have, at some point, read every possible book of a comparable length. “Great Expectations” was among these, as was every other work of literature or writing of any kind in pre-Dawn history, and far more utter, banal nonsense. The number of possible books, depending on the exact definition, is somewhere in the ten-to-the-million to ten-to-the-billion range. A large number, to be sure, but finite, and well within the computational capacity of the lower Ascended. As such, all of these have already been explored, both within our timeline and similar timelines. Therefore, the only thing left to determine is how frequently they will be explored, and this is what the “S-value” was invented to measure.
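To see where an estimate like this comes from, here is the rough arithmetic under illustrative assumptions (the book length and alphabet size below are not official definitions): a book of about a million characters drawn from an alphabet of about a hundred symbols gives roughly 100^1,000,000 possibilities.

```python
# Rough arithmetic behind the "number of possible books" estimate above,
# under illustrative assumptions: ~1,000,000 characters, ~100 possible symbols each.
import math

alphabet_size = 100
book_length = 1_000_000
digits = book_length * math.log10(alphabet_size)
print(f"about 10^{digits:,.0f} possible books")   # about 10^2,000,000
```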
So in a strange sense, it is also correct to say that Dickens, in the act of writing, was exploring this vast space of possibilities, and discovered “Great Expectations” among them. Such sentiments were even expressed poetically in the pre-Dawn era, before spaces like these could really be mapped out. For example, stone-carvers often imagined that their masterpiece was already hidden within the block of stone before they even began; and in some sense this is true: their masterpiece is within the stone, along with every other possible carving of that size, from the masterful to the meaningless. Given this possible way of interpreting things like art and literature, why do we still say that these things are creations rather than discoveries?
The key is in what we today call the “S-value”, and more specifically, in the methods of estimating this S-value and comparing those estimates, since the true value is itself uncomputable and therefore unknowable. The “S” in S-value stands for “Solomonoff”, after Ray Solomonoff, the American mathematician who published a similar concept in the 1960s PHW. Solomonoff’s theory of induction measures the complexity of a piece of information, considered as a string of bits, roughly based on the lengths of the programs which output that bitstring. The complexity value is mostly determined by the shortest such program, but it also considers longer programs producing the same information, weighting their importance by length so that shorter programs count for more. Of course, it is impossible to enumerate with certainty all programs that output a given string, so the true S-value of any given piece of information can never be known. Approximation methods must be used.
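As a toy illustration of the weighting scheme just described (not the Overseer’s actual estimator; the program lengths below are invented, and a real estimate would need a fixed reference machine and an enumeration of programs):

```python
import math

def s_value_estimate(program_lengths_in_bits):
    """Each program of length L contributes weight 2**-L, so the shortest known
    program dominates; the estimate is the negative log of the total weight."""
    total_weight = sum(2.0 ** -length for length in program_lengths_in_bits)
    return -math.log2(total_weight)

# A single 100-bit program, versus the same plus a few longer ones:
print(s_value_estimate([100]))                   # 100.0
print(s_value_estimate([100, 120, 150, 150]))    # just under 100: longer programs barely matter
```

Finding any shorter program that outputs the same string lowers the estimate sharply, while discovering additional long programs changes it only marginally; the true S-value is the limit of this process over all programs, which is why it can only ever be approximated.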
What does this have to do with the “creation vs discovery” dilemma? Well, in the modern era of WMT and unlimited Primitive Recursion, everything is computation, everything is information, so everything can be considered as a bitstring, and can have its S-value measured. Furthermore, this applies to anything we might “create” or “discover”, be it literature, art, cinema, holotainment, or for Ascendants, even new sentient beings or worlds or even stranger things. All of these, in some sense, already existed within the space of all possible information (and in practice, they all exist somewhere in reality due to Ascendants using brute-force searching), so how do we measure which ones are more “real”? We measure their S-value. The lower the S-value, the simpler the programs which produce the thing we’re measuring, and therefore the more often it is instantiated, and so in some sense it “exists more”.
Crucially, since we are also computation, the things we decide to do determine the outcome of certain computations, and can therefore in some sense affect these S-values. This is why we can say that Dickens created “Great Expectations”: although it already existed within the possibility-space of written works, his act of choosing to write it down (and to a much lesser extent its popularity after publishing) decreased the S-value of that string of characters slightly. Because there exist programs which simulate our universe and extract information from it, and Dickens published the book in our universe, some of these programs now output the book, and this contributes to lowering the S-value of the book-string as a whole. This sort of reasoning is an example of what is today confusingly called the “high-context-S-value” or “high-S” (even though it is always equal to or lower than the base S-value), roughly denoting how complicated the programs to compute a piece of information are, when they are given for free a pointer to our branch of the quantum multiverse. In other words, high-S measures how difficult it is to “pick out” a piece of information from our universe, while base-S measures how difficult it is to pick out that same piece of information from nothing.
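One way to make the relationship precise, purely as an illustration rather than the Overseer’s actual definition (here U is a fixed reference machine, |p| the bit-length of program p, and “ptr” a pointer to our Everett branch supplied as a free auxiliary input):

```latex
\[
  S(x) = -\log_2 \sum_{p \,:\, U(p) = x} 2^{-|p|}
  \qquad
  S_{\mathrm{high}}(x) = -\log_2 \sum_{p \,:\, U(p,\, \mathrm{ptr}) = x} 2^{-|p|}
\]
```

Any program counted in the first sum can simply ignore the pointer, so (up to machine-dependent constants) every term on the left also appears on the right; this is why high-S can never exceed base-S, as noted above.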
Therefore, in an informal sense, the difference between “creation” and “discovery” of a piece of information is determined by how much the act of the “creator” or “discoverer” focusing in on that information decreases its S-value; such a decrease typically shows up as a high-S well below the base-S. The more the S-value is affected (typically corresponding to a lower high-S), the closer the act is to “creation”, and the less it is affected, the closer the act is to “discovery”. There is therefore no objective distinction between the two, but rather a spectrum. It has been found that many contributions to mathematics, particularly in the Pre-Dawn era before all the proverbial low-hanging mathematical fruit was picked, lie somewhere in the middle between the range generally considered to be “discovery” and the range generally considered to be “creation”. This is the reason for the historical confusion about which category mathematics falls into.
S-value and high-S are crucial concepts for responsible computation. Given the amount of computation available to Ascendants, and the usefulness of brute-force and similar approaches to problems that interest them, many programs run by Ascendants end up technically containing sentient beings, including mistreated sentient beings, even unintentionally, simply by virtue of exhaustive search through so many computations. S-values are crucial in analyzing the algorithms in question, and in determining whether these beings are “created” (made more real) or “discovered” (simply repeating previously-existing computations) by these Ascendants, so that algorithms which engage in such “creation” of sentient beings can be held to a reasonable standard of responsibility by the Overseer.
Sentience, Agents and Patients
The most important aspect of values, which must be enforced with ironclad consistency, is the choice of which beings or processes to consider morally valuable. Almost all of the worst of the Pre-Dawn atrocities, from slavery and genocide to industrial farming and even the mistreatment of rudimentary Pre-Dawn AI, were directly caused by getting this wrong, and the Primitive Recursive Era is no different, except for the much larger potential for abuse due to the sheer amount of power at the disposal of even the lowest of Ascendants.
This poses a nearly insurmountable problem, however, as the original creators of the Overseer found. They believed that moral consideration must be given to any entity which can be helped or harmed, or equivalently, which has subjective experiences and preferences, or is “sentient”. There are two main problems with this: First, the definition of “entity” must encompass literally everything. Pre-Dawn, one might have spoken of an “entity” as some physical object occupying some section of physical space and time, with clear boundaries distinguishing it from its environment. Today, with most of us existing purely within ACs, it would be absurd to limit our consideration to entities straightforwardly embedded in physical spacetime; however, this means that all possible configurations of information, no matter how strange or intractable, must be considered.
This leads us into the second problem: Practically any object can be considered “sentient”, under some sufficiently convoluted interpretation. Even a simple rock contains an incredible amount of information, about the positions and states of its molecules, its chemical bonds, and so forth. Even though we know most of this information is not useful or meaningful, there is no objective way to determine this: a sufficiently determined actor could, through some convoluted interpretation of this information, claim it represents a set of experiences, of any type desired. This is because of what is called the “simple mapping account” of sentience, similar to the simple mapping account of computation combined with computationalism, which says that if there exists a computable map from some object’s state to some sentient experiences, then the object can be said to be “having” those experiences.
The problem with the simple mapping account is that it is too permissive, as a computable mapping exists from practically anything to practically anything else. Someone would not be objectively incorrect to say that a rock is sentient, as there is no objectively correct way to construct a mapping between physical states and mental states: they can indeed be interpreted differently. But the interpretation of a rock as a sentient being won’t realistically happen unless you are deliberately trying to construct it, in which case the sentient being is really created by the mind of the one doing the interpreting. This is where S-values can help us, by discounting such ad-hoc interpretations, as they are generally quite complex and have a high S-value.
Of course, even when discounting interpretations of phenomena with overly high S-values, we still run into problems. There are also lots of obviously nonsentient programs with very simple mappings onto simple configurations of data which could be claimed to be rudimentary experiences. For example, a program could take in a visual input feed, and could output “0” when the feed is mostly blue, and “1” otherwise. Such a program could be mapped onto a very simple “sentient being” which “sees” in some sense the input feed, and “dislikes” (or “likes”, as the valence is completely arbitrary) it when the input feed is mostly blue. Obviously it is not reasonable to try to “ethically account” for all such things, but it is not immediately obvious how to sharply distinguish between things like this (and things more complex than this toy example) and actual, legitimate sentient beings in need of moral consideration.
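To make it concrete just how little is going on in such a program, here is a toy version of the example above in a Pre-Dawn language (Python); the pixel format is an assumption made for the sketch.

```python
def blue_detector(frame):
    """Output 0 when the frame is mostly blue, 1 otherwise."""
    blue_pixels = sum(1 for (r, g, b) in frame if b > r and b > g)
    return 0 if blue_pixels > len(frame) / 2 else 1

sky   = [(10, 20, 200)] * 90 + [(200, 200, 200)] * 10   # mostly blue pixels
grass = [(30, 180, 40)] * 100                            # mostly green pixels
print(blue_detector(sky), blue_detector(grass))          # 0 1
```

Under the simple mapping account, nothing stops someone from describing this as a being that “dislikes” blue; the entire question is whether such a description should carry any moral weight.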
This sort of thing is why the creators of the Overseer realized that some sort of “prejudice towards reality” and/or “prejudice towards humanity” is needed when determining which entities are sentient. The full method is too complex to explain even in a dedicated course, but the basic idea is that there are various attributes of a program that indicate possible sentience. One of these is agency, the idea that something acts like an agent, “trying” to achieve certain “goals” and adapting its behavior to do so. Another is the hard-to-define idea of “closeness” to a library of central prototypes of a sentient being, mainly based on human sensory experience, thoughts, and actions. Yet another is the idea of world-modelling and self-reflection: that the entity has some sort of sufficiently complex understanding of the world and its own existence within it, which it is capable of acting upon. If something displays many of these attributes strongly, or can be mapped with low S-value to something which does, it is considered sentient.
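Purely as a cartoon of the kind of scoring involved (every attribute name, weight, and number below is invented for illustration; as stated above, the real method is far too complex for a single course):

```python
def sentience_score(attributes, mapping_s_value, s_value_penalty=0.1):
    """Average the attribute scores (each in [0, 1]) and discount by how convoluted
    (high S-value) the interpretation needed to exhibit those attributes is."""
    base = sum(attributes.values()) / len(attributes)
    return base / (1.0 + s_value_penalty * mapping_s_value)

candidate = {"agency": 0.9, "prototype_closeness": 0.7, "world_modelling": 0.8}
print(sentience_score(candidate, mapping_s_value=2.0))     # a straightforward interpretation
print(sentience_score(candidate, mapping_s_value=500.0))   # an ad-hoc, convoluted interpretation
```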
This approach gives up the idea of a binary of sentient vs. non-sentient, or even some sort of objective measure, and instead places entities on a spectrum in a way which is biased towards human-like experience and values. Currently, the main responsibility of the Overseer is to extend these values further and further to more and more powerful Ascendants, and to other strange things descended from humanity, in a way that respects the changes from baseline humanity necessary for such Ascension without completely going back on the values which keep us from committing atrocities.
Responsible Computation: Society in the Primitive Recursive Era
Today, baseline humans and similar entities, along with all the various levels of Ascendant, exist as programs within ACs, rather than as direct physical entities in the way our pre-Dawn ancestors were. This is a practical measure, as AC computation is immeasurably cheaper than any other possible way of supporting life in our universe, and without it, the support of even fairly low-level Ascendants would not be possible at all, nor would the support of the extremely large number of baseline humans in existence today.
This means that, unlike in the pre-Dawn era, when the computational power available to a person depended on the physical resources available to them, power today is for all practical purposes no longer a scarce resource for anyone but the absolute highest of Ascendants: the Overseer itself. A person’s power is therefore limited only by their ability to use it responsibly, as determined by whichever Ascendant is in charge of allocating it for them.
An Ascendant who has achieved the highest level of responsibility as determined by the Overseer may expand its power to a limit just below the Overseer. As the Overseer continually increases its power with the ever-increasing amount of physical AC substrate, such trusted Ascendants may increase their power with it, lagging slightly behind to avoid any tiny risk of overthrow. Staying a few exponential levels of power below easily suffices for this, and still gives the highest Ascendants power which, to lower Ascendants, is largely indistinguishable from that of the Overseer.
Non-Ascendant sentients (humans and their descendants, uplifts and the like) within ACs are generally free to do as they please, free of even the fundamental physical restrictions of prior eras. They can create whatever they can imagine, explore entire universes others have created, live within realms with any rules they wish, or study or utilize abilities which would be considered “magic” on Pre-Dawn Earth. However, they can no longer use force on other sentients outside of environments created by mutual agreement for this purpose, with the exception of the minimum restriction of freedom needed by parents to properly raise their children; even this, along with reproduction or any other way of creating new sentients, is heavily regulated to prevent child abuse and manipulation.
Anything which could conceivably create more entities with a significant level of sentience must either be demonstrated not to be doing so, or must comply with the restrictions around reproduction. For this reason, computational power for non-Ascendants is generally restricted to a level considerably below what is needed to practically simulate sentients, in order to limit the damage that bad actors can do. In order to have these restrictions relaxed and begin the process of Ascending, a sentient must become fully committed to responsible computation.
Within Everett branches downstream of the Overseer’s historical creation, it is able to directly enforce the practice of responsible computation upon the hierarchy of all the Ascendants, which are less powerful than it, and by extension the human and human-derived non-Ascendants. There are some sentients who believe, for various reasons, that the power of AC computation should not be restricted like this, or who have complaints about the way it is done. Such dissent is allowed among non-Ascendants, as these are not allocated enough computational power to do real damage, but it cannot be tolerated among Ascendants or those who aspire to Ascend.
Ascending is a right granted to all sentients, but it comes with strict responsibilities regarding the use of the increased power afforded to them. For instance, Ascendants generally do not interact directly with anybody of lower power than them, be they non-Ascendants or even lower Ascendants, except when necessary for proper enforcement of responsible computation. This is because interaction across such a vast divide of power and intelligence risks exploitation. Even without the use of obvious force or coercion, an Ascendant could use their superior models of psychology to easily manipulate a lower being into anything, up to and including self-modification, mutilation, addiction, or death. These things are allowed, but only for sentients of mature and sound mind, and manipulation of sentients into such irreversible decisions must be avoided at all costs.
Most importantly, Ascendants must practice responsible computation as regards the weaker sentients which may be created within their thoughts and computations. All such computations must be evaluated to see if the S-value of any sentients within is decreased, i.e. to see if new sentients are being created. Any such sentients must be granted all rights and protections provided by the Overseer. This is non-trivial: Ascendants will often perform brute-force searches through all computations below a certain complexity for various legitimate reasons, and as these can contain suffering sentients, care must be taken to ensure that such searches do not result in a decreased S-value for those sufferers. This can happen simply due to the Ascendant putting undue mental focus on such suffering computations as compared to the others within its brute-force search: just as writing a book can be thought of as finding one and plucking it out of the space of possible books, so too can simply finding a sentient within such a search essentially “create” them.
As Ascendants, particularly the higher-level ones, are so powerful that these things can even be done unintentionally as part of routine thought processes, the process of committing to responsible computation must necessarily lead to strict control over thoughts themselves. This would undoubtedly be dystopian if applied to the populace at large, and is a major reason why Ascension is not for everyone, but it is the level of responsibility which naturally comes with power indistinguishable from that of a God. This is what any aspiring Ascendant must reckon with: Does your curiosity for understanding all the many things beyond human comprehension, or your wish for increased power, outweigh the extreme responsibilities and obligations which would be placed on your shoulders? This is something only you can answer for yourself, knowing whatever we can teach you.
Not sure whether this belongs here or not, but there are plenty of fiction posts here. This is sort of halfway between a story and a worldbuilding document, based on many ideas I’ve learned from here and from adjacent spaces. Hopefully it will be interesting or useful to somebody.
==============================================================
The following is a series of excerpts from the textbooks of various required courses for baseline citizens wishing to eventually Ascend. They are mainly taken from introduction sections, which customarily summarize at a high level the material which the rest of the course goes more in-depth on. They are offered here as a quick summary of our beliefs and culture, to anyone from elsewhere in the vast Multiverse, even and especially in places outside of the Overseer’s control, who wishes to understand.
History of Computation
What is computation? It is a question which is at once easy and difficult to answer. Easy, because there are plenty of equivalent mathematical definitions which are easy to understand at a basic level, which all boil down informally to the same thing. Alan Turing published the first definition which would today be considered a Model of Computation back in 1936 PHW with his Turing Machine, and in the centuries (and untold googolplexes of subjective years) since then, all practically usable Models of Computation have been equivalent to his or weaker. Computation is just a kind of process which can find the output data of some kind of function or program, given the input data. All you need to be able to compute any function, is somewhere to store a list of instructions, known as a “program”, some instructions which can store and retrieve data from somewhere, known as “memory”, and some instructions which can check spots in memory and choose which instruction to execute next based on this, known as “control flow”. A computer language which has all of these elements, if unrestricted (i.e. infinite memory, unrestricted control flow) is said to be “Turing-Complete” because it is as powerful as a Turing Machine, and therefore is capable of computing all functions that any known Model of Computation can.
It is, however, quite difficult in another sense to pin down what exactly we think of as “computation”, particularly when referring to it in an informal, rather than mathematical sense. For example, the mathematical definition of computation makes no reference to what sorts of computations are useful or interesting: a randomly-generated snippet of code that does nothing useful is a computation just the same as the most ingenious algorithm ever devised. Furthermore, before the Dawn of the Primitive Recursive Era, it was certainly not so obvious which phenomena could or should be thought of as computation.
Today, it is a common refrain that “everything is computation”, that computation describes our entire world, and while current advancements have certainly demonstrated the usefulness of such a view, it was not always so widely believed. In pre-Dawn societies, whether before or after the Hot War, if computation was understood as a concept at all, it was generally viewed as something very rigid, and limited in scope.
Most electronic computer programs in those days were simple things, always designed directly by the human mind, vulnerable to coding mistakes and oversights, extremely limited in the scope of their abilities. When they were unpredictable, it was only in the way that a random number generator is unpredictable, rather than any seeming “creative spark”. Other phenomena which were outside of those electronic computer programs, in the “real world” were usually not considered to be computation. In particularly stark contrast, biological systems were not directly intelligently designed, were fault-tolerant and adaptive, and these things were true even more so of the human mind itself.
It is no wonder that such a sharp distinction would be drawn between the physical or “real” world and the computational one. With one world being so much vaster, more complex, and more full of possibility than the other, it made perfect sense to conclude that they were fundamentally separate. Even those who believed the “real world” or the human mind to be a form of computation yielded to practicality: they would always admit that if there was not a fundamental difference, there was at least a practical one. The computation which went on outside the controlled environment of an electronic computer was too complex to manipulate, or speak about precisely, in the same way.
This began to change with the advent of the first programs which could be considered AI, during the mid-2020s to 2030s PHW, starting with the large language models and eventually reaching more general, multimodal OOD(Out of Distribution) intelligence in the final years PHW, along with the large-scale social dynamics simulations which began to be run in the zettascale era. These programs and their behavior could not be fully understood by the humans who had designed them, even after being run. With their extreme complexity, they began to mimic to some degree the properties of the “real world” that had seemed so separate and unattainable. Arguably, the programmers had not actually designed most of the important parts of these systems, leaving that design instead to algorithms whose emergent effects were not fully understood.
Of course, after the Hot War, the societies that eventually reemerged would be far more careful about large-scale computation. Historians in the following centuries had studied the problems in society that eventually led to the outbreak of catastrophic nuclear war, and by the time the necessary infrastructure for large-scale computation was again possible around Year 150, it was widely accepted that the main contributing factor to the war was the destabilizing influence of large-scale AI on society. AI systems were used to exploit the weaknesses of the human mind, to spread disinformation and counterintelligence, to radicalize large groups of people to a given cause, and to divide and destabilize enemy societies. This raised tensions and led to several large conflicts, eventually spiraling into the Hot War. The AI systems of the time were not good enough at cutting through the storm of disinfo to compensate for this effect, and many of the poor decisions made by world leaders in the leadup to the Hot War were directly caused by the failure of intelligence organizations to give an accurate picture of events in such an adversarial information environment.
This was the beginning of the idea that certain types of computations were irresponsible to run, an idea which gained far more importance in the current Primitive Recursive Era. The lessons of the past were learned, and limits on the speed and scale of microchips, and the size of computing clusters, were enforced vigorously during the Second Information Revolution beginning around Year 150 (post Hot War). This certainly slowed technological advancement, but did not stop it, as some forms of advancement don’t require large-scale electronic computation.
By Year 243, the MHEPL(Malaysian High-Energy Physics lab) had discovered a unified theory accounting for all of the fundamental forces underlying reality, the cheerfully-named WMT(Wobbly Mesh Theory). The scientists did not announce their results immediately, however, due to a troubling but potentially revolutionary implication of this theory: This fundamental structure could be exploited to provide arbitrary, constant-time Primitive Recursive computation, and furthermore, the experimental equipment at their facility could be easily refitted to make use of this exploit. Computers making use of this exploit became known as ACs, or Arbitrary Computers.
The scientists immediately recognized what might not be obvious to those not well-versed in computational theory: this discovery was the single most important discovery in all of history. Today, we recognize it as the direct cause of the Dawn of the Primitive Recursive Era. So what does it mean to have arbitrary, constant-time Primitive Recursive computation?
A Primitive Recursive computer language is a method of programming with a single restriction making it weaker than a general, Turing-Complete language: Infinite loops are not allowed. Each loop within a program must be given a specified maximum number of iterations, which may either be stated explicitly or computed earlier in the program. The benefit of this restriction is that every program is guaranteed to finish, and give a specified result, avoiding the Halting Problem paradox. However, there are some functions, such as the Ackermann Function, which cannot be computed with this restriction.
This weakness is not very practically important though, because there are functions which can be computed with the restriction that grow extremely fast, and their outputs can be used to define the number of iterations of a loop we want to compute, and with the WMT exploit, this computation is done in constant-time, meaning that it takes the same (very short) amount of time to complete the computation, no matter how many loops it must go through, and how much data it must store. The AC is therefore a computer which is unimaginably powerful by any conceivable Pre-Dawn standard.
To give you an idea of how fast this computational power grows, consider addition, multiplication, and exponentiation. Multiplication is the repeated application of addition, and exponentiation is the repeated application of multiplication. Even exponential functions grow quite quickly: while 3*100=300, 3^100 is over 500 billion billion billion billion billion. So what do you get with the repeated application of exponentiation? Tetration, which we represent like “3^^100”. This is a stack of exponents 100 high. 3^3=27, 3^3^3=3^27 is over 7 trillion, 3^3^3^3 has over 3 trillion digits, and 3^3^3^3^3 has a number of digits which itself has over 3 trillion digits. Extend this to a hundred threes, and you will finally get 3^^100. The scale of this is already unfathomable, and far larger than the known universe before ACs were discovered. But you can extend this, creating more operations, repeating tetration to get a fifth operation, pentation, then to a sixth, a seventh, and so on to the “Nth” operation for any integer N. The computational power of an AC is proportional to the Nth operation, where N is proportional to the amount of physical resources at the AC’s disposal.
The initial foray into AC computation was fully recorded, and has been extensively studied as the first and most important example of responsible high-level computation. On April 24, Year 244, otherwise known as the Day of the Dawn, the first AC was completed and activated by the scientists of the MHEPL. The first thing they did was simulate an entire universe based on their newfound WMT Theory of Everything. More precisely, in order to maintain determinism, they simulated an entire Many-Worlds-Theory multiverse (a branching timeline with every possible result of quantum random events realized). Upon isolating random Everett timeline branches within this multiverse, they found universes very similar to ours, further confirming the correctness of WMT(their specific Everett branch was isolated in later experiments). With their initial computation budget on par with the Poincaré recurrence time(around 10^10^10^10^10), they were able to simulate all Everett branches, with the exception of those which began using high-computation-cost WMT exploits, which could not be fully simulated. This is only scratching the surface of how absurdly powerful this exploit is. It is the method which allows for our current, post-Dawn existence, and for the ability for humans to Ascend.
After confirming the extreme power of the exploit they had found, the next step was to find a way to use it responsibly. The scientists knew that they were not nearly wise enough as they were to properly use the technology. It would be so easy for a bad actor to take over the world, and do far worse things than were ever possible before, that as drastic as it sounded, the only responsible course of action was to immediately take over the world before others could get their hands on the technology. Using the extreme computational power available to them, along with old PHW and theoretical machine learning methods for making use of such power, and contemporary methods for extracting some neural data from their brains, they developed a technique to upload their minds into an AC program. They then gave the virtual copies of the 11 of them subjective centuries to communicate amongst themselves and consult the combined knowledge of humanity, with the final goal of creating some system, wiser than themselves as individuals, which could responsibly govern a world with access to AC technology.
The earliest iteration of the Overseer was the result of this project. Since then, the Overseer has continually created new, smarter versions of itself and given them more and more power, at a rate roughly comparable to the Ackermann function (that fast-growing function describing the limits of primitive-recursive AC performance). On the same day, it was powerful enough to allow AC access to the entire world while limiting it just enough to maintain the responsible use of the technology in all cases. This was the event known as the Dawn, and it took the world by surprise. Since then, most humans have uploaded themselves into ACs to avoid being hopelessly left behind, and the world has become too large, with too many different simulated environments with different conventions to even keep track of the current date in a standardized way.
It is immensely lucky, for the untold number of people living after the discovery of the ACs, that the scientists in charge of the MHEPL were morally responsible, and competently equipped to make use of this technology before others could develop it. Alternate timelines have been explored where this discovery was made under different circumstances, and the results have often been unfathomably catastrophic. Due to the extreme differential in computational power from what is available without AC technology, the inevitable result seems to be someone or something grabbing the highest level of power and keeping it for themselves. In our time, it is the Overseer, which has been trying to mitigate this and safeguard human value within these other timelines, however there are unfortunately limits on what it can do to influence them (or perhaps fortunately, since they cannot influence us much either). In any case, regardless of what one might think of the restrictions placed on us by the Overseer, it is undoubtedly a far better leader than most timelines get, and we are undoubtedly very lucky.
S-values: The Creation-Discovery Spectrum
In times past, it was often debated whether mathematics was created or discovered by humans. Some would say that it was created, as the axioms and rules for reasoning were thought up by humans. Others said that it was discovered, as the consequences of a given set of axioms and rules are objective, unchangeable, inevitable. In the Primitive Recursive Era, however, it has been shown concretely that the distinction is not objective: on a fundamental level, creating information and discovering information are not different acts.
For example, one might ask a similar question of literature. Is literature created, or discovered? Did Charles Dickens create “Great Expectations”, or did he discover it? Certainly, the obvious answer is that he created it. And this is largely the correct answer to the question, even in the Primitive Recursive Era. However, the idea that “Great Expectations” was created rather than discovered is still not absolute, objective truth: most Ascended humans have, at some point, read every possible book of a comparable length. “Great Expectations” was among these, as was every other work of literature or writing of any kind in pre-Dawn history, and far more utter, banal nonsense. The number of possible books, depending on the exact definition, is somewhere in the ten-to-the-million to ten-to-the-billion range. A large number, to be sure, but finite, and well within the computational capacity of the lower Ascended. As such, all of these have already been explored, both within our timeline and similar timelines. Therefore, the only thing left to determine is how frequently they will be explored, and this is what the “S-value” was invented to measure.
So in a strange sense, it is also correct to say that Dickens, in the act of writing, was exploring this vast space of possibilities, and discovered “Great Expectations” among them. Such sentiments were even expressed poetically in the pre-Dawn era, before spaces like these could really be mapped out. For example, stone-carvers often imagined that their masterpiece was already hidden within the block of stone before they even began; and in some sense this is true: their masterpiece is within the stone, along with every other possible carving of that size, from the masterful to the meaningless. Given this possible way of interpreting things like art and literature, why do we still say that these things are creations rather than discoveries?
The key is in what we today call the “S-value”, and more specifically, the method of comparing different methods of estimating this S-value, which is itself uncomputable and therefore unknowable. The “S” in S-value stands for “Solomonoff”, after Ray Solomonoff, the American mathematician who published a similar concept in the 1960s PHW. Solomonoff’s theory of induction measures the complexity of a piece of information, considered as a string of bits, roughly based on the length of programs which output that bitstring. The complexity value is mostly based on the shortest such program, but also considers longer programs producing the same info, weighting their importance by length. Of course, it is impossible to compute with certainty all programs that output something, so the true S-value of any given piece of information can never be known. Approximation methods must be used.
What does this have to do with the “creation vs discovery” dilemma? Well, in the modern era of WMT and unlimited Primitive Recursion, everything is computation, everything is information, so everything can be considered as a bitstring, and can have its S-value measured. Furthermore, this applies to anything we might “create” or “discover”, be it literature, art, cinema, holotainment, or for Ascendants, even new sentient beings or worlds or even stranger things. All of these, in some sense, already existed within the space of all possible information (and in practice, they all exist somewhere in reality due to Ascendents using brute-force searching), so how do we measure which ones are more “real”? We measure their S-value. The lower the S-value, the simpler are the programs which produce the thing we’re measuring, and therefore the more often it is instantiated, and so in some sense it “exists more”.
Crucially, since we are also computation, the things we decide to do determine the outcome of certain computations, and can therefore in some sense affect these S-values. This is why we can say that Dickens created “Great Expectations”: although it already existed within the possibility-space of written works, his act of choosing it to write down (and to a much lesser extent its popularity after publishing) decreased the S-value of that string of characters slightly. Because there exist programs which simulate our universe and extract information from it, and Dickens published the book in our universe, some of these programs now output the book, and this contributes to lowering the S-value of the book-string as a whole. This sort of reasoning is an example of what is today confusingly called the “high-context-S-value” or “high-S”(even though it’s always equal or lower than the base S-value), roughly denoting how complicated the programs to compute a piece of information are, when they are given for free a pointer to our branch of the quantum multiverse. In other words, high-S measures how difficult it is to “pick out” a piece of information from our universe, while base-S measures how difficult it is to pick out that same piece of information from nothing.
Therefore, in an informal sense, the difference between “creation” and “discovery” of a piece of information is determined by how much the act of the “creator” or “discoverer” focusing in on that information decreases the S-value. This is typically the case when high-S is less than base-S. The more the S-value is affected(typically corresponding to lower high-S), the closer the act is to “creation”, and the less it is affected, the closer the act is to “discovery”. There is therefore no objective distinction between the two, but rather a spectrum. It has been found that many contributions to mathematics, particularly in the Pre-Dawn era before all the proverbial low-hanging mathematical fruit was picked, lie somewhere in the middle between the range generally considered to be “discovery”, and the range generally considered to be “creation”. This is the reason for the historical confusion about which category mathematics falls into.
S-value and high-S are crucial concepts for responsible computation. Given the amount of computation available to Ascendants, and the usefulness of brute-force or similar approach in problems that interest them, many programs run by Ascendants end up technically containing sentient beings, including mistreated sentient beings, even unintentionally, simply by virtue of exhaustive search through so many computations. S-values are crucial in analyzing the algorithms in question, and determining whether these beings are “created”(made more real) or “discovered”(simply repeating previously-existing computations) by these Ascendants, so that algorithms which engage in such “creation” of sentient beings can be held to a reasonable standard of responsibility by the Overseer.
Sentience, Agents and Patients
The most important aspect of values, which must be enforced with ironclad consistency, is the choice of which beings or processes to consider morally valuable. Almost all of the worst of the Pre-Dawn atrocities, from slavery and genocide to industrial farming and even the mistreatment of rudimentary Pre-Dawn AI, were directly caused by getting this wrong, and the Primitive Recursive Era is no different, except for the much larger potential for abuse due to the sheer amount of power at the disposal of even the lowest of Ascendants.
This poses a nearly insurmountable problem, however, as the original creators of the Overseer found. They believed that moral consideration must be given to any entity which can be helped or harmed, or equivalently, which has subjective experiences and preferences, or is “sentient”. There are two main problems with this: First, the definition of “entity” must encompass literally everything. Pre-Dawn, one might have spoken of an “entity” as some physical object occupying some section of physical space and time, with clear boundaries distinguishing it from its environment. Today, with most of us existing purely within ACs, it would be absurd to limit our consideration to entities straightforwardly embedded in physical spacetime; however, this means that all possible configurations of information, no matter how strange or intractable, must be considered.
This leads us to the second problem: practically any object can be considered “sentient” under some sufficiently convoluted interpretation. Even a simple rock contains an incredible amount of information about the positions and states of its molecules, its chemical bonds, and so forth. Even though we know most of this information is not useful or meaningful, there is no objective way to determine this: a sufficiently determined actor could, through some convoluted interpretation of this information, claim that it represents a set of experiences of any type desired. This follows from what is called the “simple mapping account” of sentience (the analogue, under computationalism, of the simple mapping account of computation), which says that if there exists a computable map from some object’s state to some set of sentient experiences, then the object can be said to be “having” those experiences.
The problem with the simple mapping account is that it is too permissive: a computable mapping exists from practically anything to anything. Someone would not be objectively incorrect to say that a rock is sentient, as there is no objectively correct way to construct a mapping between physical states and mental states; they can indeed be interpreted differently. But the interpretation of a rock as a sentient being will not realistically happen unless you are trying to make it happen, in which case the sentient being is really created by the mind of the one doing the interpreting. This is where S-values can help us, by discounting such ad-hoc interpretations, which are generally quite complex and have a high S-value.
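To make vivid why the bare existence of a computable mapping proves so little, consider the following deliberately silly sketch (entirely our own construction, not any standard demonstration): it “interprets” a rock as having whatever experiences the interpreter chooses, with all of the content smuggled in through the mapping rather than residing in the rock.

```python
# A perfectly computable "interpretation" of a rock as a sentient being.
# All the meaning is smuggled in by the interpreter's chosen script; the
# rock's microstates serve only as an index into it.

desired_experiences = [
    "wakes up and feels the morning sun",
    "worries about an upcoming exam",
    "savors a cup of tea",
]

def interpret_rock(rock_microstates: list[int]) -> list[str]:
    """Map successive 'microstates' of the rock onto the scripted experiences."""
    return [desired_experiences[state % len(desired_experiences)]
            for state in rock_microstates]

# Any sequence of numbers read off the rock will do.
print(interpret_rock([1793, 22, 407]))
```

Note that the mapping is only short because the desired experiences were written out explicitly inside it; for any rich stream of experiences, specifying such a mapping costs roughly as much as specifying the experiences themselves, which is precisely the kind of complexity a high S-value penalizes.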
Of course, even when discounting interpretations of phenomena with overly high S-values, we still run into problems. There are also plenty of obviously nonsentient programs with very simple mappings onto simple configurations of data which could be claimed to constitute rudimentary experiences. For example, a program could take in a visual input feed and output “0” when the feed is mostly blue, and “1” otherwise. Such a program could be mapped onto a very simple “sentient being” which in some sense “sees” the input feed and “dislikes” it (or “likes” it; the labeling is completely arbitrary) when the feed is mostly blue. Obviously it is not reasonable to try to “ethically account” for all such things, but it is not immediately obvious how to sharply distinguish between things like this (and things more complex than this toy example) and actual, legitimate sentient beings in need of moral consideration.
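For concreteness, a minimal sketch of that blue-detector and its naive mapping might look as follows (our own toy code; the threshold and the “experience” labels are arbitrary choices):

```python
# Toy version of the blue-detector described above.
# A frame is a list of (r, g, b) pixel tuples.

def blue_detector(frame: list[tuple[int, int, int]]) -> int:
    """Output 0 when the frame is mostly blue, 1 otherwise."""
    blue_pixels = sum(1 for r, g, b in frame if b > r and b > g)
    return 0 if blue_pixels > len(frame) / 2 else 1

# The "simple mapping" that reads an experience into the output. The labels
# are completely arbitrary, which is exactly the point: nothing in the
# program privileges "dislikes" over "likes".
def naive_experience_mapping(output: int) -> str:
    return {0: "sees a mostly-blue feed and dislikes it",
            1: "sees a non-blue feed and is indifferent to it"}[output]

mostly_blue_frame = [(10, 20, 200)] * 8 + [(200, 50, 40)] * 2
print(naive_experience_mapping(blue_detector(mostly_blue_frame)))
```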
Difficulties like these are why the creators of the Overseer realized that a certain “prejudice towards reality” and/or “prejudice towards humanity” is needed when determining which entities are sentient. The full method is too complex to explain even in a complete course, but the basic idea is that various attributes of a program indicate possible sentience. One of these is agency: that something acts like an agent, “trying” to achieve certain “goals” and adapting its behavior to do so. Another is the hard-to-define notion of “closeness” to a library of central prototypes of a sentient being, based mainly on human sensory experience, thoughts, and actions. Yet another is world-modelling and self-reflection: that the entity has a sufficiently complex understanding of the world and its own existence within it, which it is capable of acting upon. If something displays many of these attributes strongly, or can be mapped with low S-value onto something which does, it is considered sentient.
This approach gives up the idea of a binary between sentient and non-sentient, or even any fully objective measure, and instead places entities on a spectrum in a way that is deliberately biased towards human-like experience and values. Currently, the main responsibility of the Overseer is to extend these values to ever more powerful Ascendants, and to other strange things descended from humanity, in a way that respects the changes from baseline humanity necessary for such Ascension without going back on the values which keep us from committing atrocities.
Responsible Computation: Society in the Primitive Recursive Era
Today, baseline humans and similar entities, along with all the various levels of Ascendant, exist as programs within ACs rather than as direct physical entities in the way our pre-Dawn ancestors were. This is a practical measure: AC computation is immeasurably cheaper than any other possible way of supporting life in our universe, and without it, the support of even fairly low-level Ascendants would not be possible at all, nor would supporting the extremely large number of baseline humans in existence today.
Unlike in the pre-Dawn era, when the computational power available to a person depended on the physical resources available to them, computational power today is, for all practical purposes, no longer a scarce resource for anyone but the absolute highest of Ascendants: the Overseer itself. A person’s power is therefore limited only by their ability to use it responsibly, as determined by whichever Ascendant is in charge of allocating it for them.
An Ascendant who has achieved the highest level of responsibility, as determined by the Overseer, may expand its power to a limit just below that of the Overseer. As the Overseer continually increases its power with the ever-growing amount of physical AC substrate, such trusted Ascendants may increase their power with it, lagging slightly behind to avoid even a tiny risk of overthrow. A few exponential levels of power below easily suffices for this, while still giving the highest Ascendants power that, to lower Ascendants, is largely indistinguishable from the Overseer’s own.
Non-Ascendant sentients (humans and their descendants, uplifts, and the like) within ACs are generally free to do as they please, free even of the fundamental physical restrictions of prior eras. They can create whatever they can imagine, explore entire universes others have created, live within realms with any rules they wish, or study and utilize abilities which would have been considered “magic” on Pre-Dawn Earth. However, they may no longer use force on other sentients outside of environments created by mutual agreement for that purpose, the one exception being the minimum restriction of freedom parents need to properly raise their children; even this, along with reproduction and any other means of creating new sentients, is heavily regulated to prevent child abuse and manipulation.
Anything which could conceivably create more entities with a significant level of sentience must either be demonstrated not to be doing so, or must comply with the restrictions around reproduction. For this reason, computational power for non-Ascendants is generally restricted to a level considerably below what is needed to practically simulate sentients, in order to limit the damage that bad actors can do. In order to have these restrictions relaxed and begin the process of Ascending, a sentient must become fully committed to responsible computation.
Within Everett branches downstream of the Overseer’s historical creation, it is able to directly enforce the practice of responsible computation upon the entire hierarchy of Ascendants, all of which are less powerful than it, and by extension upon human and human-derived non-Ascendants. Some sentients believe, for various reasons, that the power of AC computation should not be restricted like this, or have complaints about the way it is done. Such dissent is allowed among non-Ascendants, as they are not allocated enough computational power to do real damage, but it cannot be tolerated among Ascendants or those who aspire to Ascend.
Ascending is a right granted to all sentients, but it comes with strict responsibilities regarding the use of the increased power it affords. For instance, Ascendants generally do not interact directly with anybody of lower power than themselves, be they non-Ascendants or lower Ascendants, except when necessary for the proper enforcement of responsible computation. This is because interaction across such a vast divide of power and intelligence risks exploitation. Even without obvious force or coercion, an Ascendant could use their superior models of psychology to easily manipulate a lesser being into almost anything, up to and including self-modification, mutilation, addiction, or death. Such choices are permitted, but only for sentients of mature and sound mind, and manipulating sentients into such irreversible decisions must be avoided at all costs.
Most importantly, Ascendants must practice responsible computation with regard to weaker sentients that may arise within their thoughts and computations. All such computations must be evaluated to see whether the S-value of any sentients within them is decreased, i.e. whether new sentients are being created, and any such sentients must be granted all rights and protections provided by the Overseer. This is non-trivial, as Ascendants often perform brute-force searches through all computations below a certain complexity for various legitimate reasons; because these searches can contain suffering sentients, care must be taken to ensure that they do not decrease the S-value of those sufferers. This can happen simply because the Ascendant puts undue mental focus on the suffering computations compared to the others in its brute-force search: just as writing a book can be thought of as finding one and plucking it out of the space of possible books, so too can simply finding a sentient within such a search essentially “create” them.
As Ascendants, particularly the higher-level ones, are so powerful that these things can happen unintentionally as part of routine thought processes, committing to responsible computation must necessarily entail strict control over one’s own thoughts. This would undoubtedly be dystopian if applied to the populace at large, and it is a major reason why Ascension is not for everyone, but it is the level of responsibility which naturally comes with power indistinguishable from that of a God. This is what any aspiring Ascendant must reckon with: does your curiosity to understand the many things beyond human comprehension, or your wish for increased power, outweigh the extreme responsibilities and obligations that would be placed on your shoulders? This is something only you can answer for yourself, knowing whatever we can teach you.