As of January 2025, I have signed no contracts or agreements whose existence I cannot mention.
gilch
Finitism doesn’t reject the existence of any given natural number (although ultrafinitism might), nor the validity of the successor function (counting), nor even the notion of a “potential” infinity (like time), just the idea of a completed one being an object in its own right (which can be put into a set). The Axiom of Infinity doesn’t let you escape the notion of classes which can’t themselves be an element of a set. Set theory runs into paradoxes if we allow it. Is it such an invalid move to disallow the class of Naturals as an element of a set, when even ZFC must disallow the Surreals for similar reasons?
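For reference, the Axiom of Infinity is the one that asserts a completed inductive set exists as an object in its own right; one standard ZFC phrasing is roughly

$$\exists I\,\big(\varnothing \in I \;\wedge\; \forall x\,(x \in I \rightarrow x \cup \{x\} \in I)\big).$$

Dropping it leaves every individual natural number and the successor function intact; only the completed collection goes away.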
Before Cantor, all mathematicians were finitists. It’s not a weird position historically.
We do model physics with “real” numbers, but that doesn’t mean the underlying reality is infinite or even infinitely divisible. My finitism is motivated by my understanding of physics and cosmology, not the other way around. Nature seems to cut us off from any access to any completed infinity, and it’s not clear that even potential infinities are allowed (hence my sympathy with ultrafinitism). I have no need of that axiom.
Quantum Field Theory, though traditionally modeled using continuous mathematics, implies the Bekenstein bound: a finite region of space contains a finite amount of information. There are no “infinite bits” available to build the real numbers with. However densely you store information, at some point your media collapses into a black hole, and packing in more must take up more space.
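For concreteness, the bound is usually stated as something like

$$S \;\le\; \frac{2\pi k R E}{\hbar c}, \qquad \text{or in bits,} \qquad I \;\le\; \frac{2\pi R E}{\hbar c \ln 2},$$

where $R$ is the radius of a sphere enclosing the system and $E$ is its total energy (mass-energy included). Finite $R$ and finite $E$ means finitely many bits.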
Physical space can’t be a continuum like the “reals”. It’s not infinitely divisible. Measuring distance with increasing precision requires higher-frequency waves, and thus higher energies, which eventually amount to enough effective mass to gravitationally distort the very space you are measuring, ultimately collapsing into a black hole.
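The scale where this is usually said to break down is the Planck length, where a photon energetic enough to resolve the distance would itself collapse into a black hole of about that size:

$$\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \text{m}.$$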
Below a certain limit, distance isn’t physically meaningful. If you assume an electron is a point particle with “infinitesimal” size and you zoom in enough, you should be able to get arbitrarily high electric field strength. But at some point, high enough field strength results in vacuum polarization: virtual electron/positron pairs get pushed around and finally one of the positrons annihilates whatever you thought the real electron was, and then one of the virtual electrons doesn’t have anything to pair with and becomes the real one. It’s as if the electron is jumping around. You can’t nail it down. It doesn’t physically have a position down below a certain scale in time and space. There are no infinite bits. All the fundamental particle types are like this. There are no infinitesimal point particles. They’re just waves.
There’s also a cosmological horizon limiting how much of the Universe we can see. There’s also a (related) past temporal horizon at the Big Bang. We can’t see a completed past-temporal or spatial infinity, in any direction. We’re not sure of the Ultimate Fate of the Universe, but it looks like Heat Death is probably it, given our current understanding of physics. So there’s a future limit too. The other likely candidate Fates are also finite in time.
But even so, finite information content in a finite region seems to be enough to make potential-infinite time not really meaningful. There’s a finite number of states possible, so eventually all reachable states are reached. If physics is deterministic (it seems to be), then we get into a cycle. So time is better modeled as a finite circle rather than an infinite line. And if it’s not deterministic? Then we still saturate all reachable states; the order just gets shuffled around a bit. There’s no physical way to tell the difference.
Potential-infinite space is the same way. Any accessible region has a finite number of states, so at least some of them must repeat exactly in other regions. If there’s some determinism to the pattern, then it’s maybe better modeled as some curled-up finite space (although aperiodic tilings are also possible). If it’s random, then we still saturate all reachable states; the order just gets shuffled around a bit. There’s no physical way to tell the difference. Once all reachable states have been saturated, why does it matter whether they appear only once, a googol times, or infinitely many times?
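Both arguments boil down to the pigeonhole principle applied to a deterministic update rule on finitely many states: iterate long enough and some state must recur, after which the trajectory repeats. A minimal Python sketch, with a made-up toy update rule standing in for “the laws of physics”:

```python
def find_cycle(step, initial_state):
    """Iterate a deterministic step function from initial_state.

    With finitely many possible states, some state must eventually
    repeat (pigeonhole), after which the trajectory cycles forever.
    Returns (steps_before_cycle, cycle_length).
    """
    seen = {}  # state -> index at which it first appeared
    state, i = initial_state, 0
    while state not in seen:
        seen[state] = i
        state = step(state)
        i += 1
    return seen[state], i - seen[state]

# Toy example: a 256-state "universe" with an arbitrary deterministic rule.
toy_step = lambda s: (s * 5 + 3) % 256
print(find_cycle(toy_step, 7))  # (steps before the cycle starts, cycle length)
```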
I installed Mindfulness Bell on my phone, and every time it chimes, I ask myself, “Should I be doing something else right now?” Sometimes I’m being productive and don’t need to stop. When I notice I’ve started ignoring it, I change the chime sound so I notice it again. The interval is adjustable. If I’m stuck scrolling social media, this often gives me the opportunity to stop. It doesn’t always work, though. I also have it turned off at night so I can sleep, which is a problem if I get stuck on social media at night when I should be sleeping. Instead, after bedtime, I progressively dim the lights and screen to the point where I can barely read it. That’s usually enough to let me fall asleep.
I’m hearing intuitions, not arguments here. Do you understand Cantor’s Diagonalization argument? This proves that the set of all integers is “smaller” (in a well-defined way) than the set of all real numbers, despite the set of all integers being already infinite in size. And it doesn’t end there. There is no largest set.
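The diagonal construction itself is short: given any claimed enumeration of infinite binary sequences, flip the n-th digit of the n-th sequence, and the result differs from every entry, so the enumeration was incomplete. A toy Python sketch over a finite prefix (the real argument applies the same flip at every n):

```python
def diagonal(sequences):
    """Return a sequence differing from the n-th input sequence
    at position n (sequences given as lists of 0s and 1s)."""
    return [1 - seq[n] for n, seq in enumerate(sequences)]

claimed_enumeration = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
d = diagonal(claimed_enumeration)
print(d)                          # [1, 0, 1, 0]
print(d in claimed_enumeration)   # False: it differs from row n at column n
```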
Russell’s paradox arises when a set definition refers to itself. For example, in a certain town, the barber is the one who shaves all those (and only those) who do not shave themselves. This seems to make sense on its face. But who shaves the barber? Contradiction! Not all set definitions are valid, and this includes the universal one, which can be proved not to exist in many ways, at least in the usual ZFC (and similar theories).
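Formally, the paradoxical definition is the set of all sets that are not members of themselves:

$$R = \{\, x \mid x \notin x \,\} \quad\Longrightarrow\quad R \in R \iff R \notin R,$$

a contradiction either way, which is why unrestricted comprehension has to be given up.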
There are two ways to construct a universal object. Either make it a non-set notion like a “proper class”, which can’t be an element of a set (and thus can’t contain itself or any other proper class), or restrict the axiom of comprehension in a way which results in a non-well-founded set theory. Cantor’s Theorem doesn’t hold for all sets in NF. The diagonal set argument can’t be constructed (in all cases) under its rules. NF has a universal set that contains itself, but it accomplishes this by restricting comprehension to stratified formulas. I’m not a set theorist, so I’m still not sure I understand this properly, but it looks like an infinite hierarchy of set types, each with its own universal set. Again, no end to the hierarchy, but in practice all the copies behave the same way. So instead of strictly two types of classes, the proper class and the small class, you have some kind of hyperset that can contain sets, but not other hypersets, and hyper-hypersets that can contain both, but not other hyper-hypersets, and so forth, ad infinitum.
Personally, I’m rather sympathetic to the ultrafinitists, and might be a finitist myself. I can accept the slope of a vertical line being “infinite” in the limit. That’s just an artifact of how we chose to measure something. Measure it differently, and the infinity disappears. I can also accept a potential infinity, like not having a largest integer, because the successor function can make a bigger one. We can make an abstract algorithm run on an abstract machine that can count, and it has a finite description. But taking the “completed” set of all integers as an object itself rubs me the wrong way. That had to be tacked on as a separate axiom. It’s unphysical. No operation could possibly construct a physical model of such a thing. It’s an impossible object. One could try to point to a pre-existing model, but we physically cannot verify it. It would take infinite time, space, or precision, which is again unphysical.
Similarly, there is no physical way to verify an infinite God exists, because we physically cannot distinguish it from a (sufficiently large, but) finite one. I might be willing to call such an alien a small-g “god”, but it’s not the big-G omni-everything one in valentinslepukhin’s definition. That only leaves some kind of a priori logical argument, because it can’t be an empirical one, but it has to be based on axioms I can accept, doesn’t it? I can entertain weird axioms for the sake of argument, but I’m not seeing one short of “God exists”, which is blatant question begging.
The main idea here is that one can always derive a “greater” set (in terms of cardinality) from any given set, even if the given set is already infinite, because there are higher degrees of infinity. There is no greatest infinity, just like there is no largest number. So even if (hypothetically) a Being with infinite knowledge exists, there could be Beings with greater knowledge than that. No matter which god you choose, there could be one greater than that, meaning there are things the god you chose doesn’t know (and hence He isn’t “omniscient”, and therefore isn’t “God”, because this was a required attribute.)
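This is Cantor’s theorem: for any set $X$, the set of its subsets is strictly larger,

$$|X| < |\mathcal{P}(X)| = 2^{|X|},$$

so there is no set of maximal cardinality.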
I don’t know how to interpret “all existing objects”, because I don’t know what counts as an “object” in your definition. Set theory doesn’t require ur-objects (although those are known variations) and just starts with the empty set, meaning all “objects” are themselves sets. The powerset operation evaluates to the set of all subsets of a set. The powerset of a set always has greater cardinality than the set you started with. That is, for any given collection of “objects”, the number of possible groupings of those objects is always a greater number than the number of objects, even if the collection of objects you started with had an infinite number to begin with. So no, this doesn’t prove that an infinite universe cannot exist, just that there are degrees of infinities (and no “greatest” one).
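For finite collections this is just counting subsets: $n$ objects admit $2^n$ groupings, which is always more than $n$; Cantor’s diagonal argument is what extends the same inequality to infinite sets. A quick Python illustration:

```python
from itertools import chain, combinations

def powerset(objects):
    """All possible groupings (subsets) of a collection of objects."""
    objects = list(objects)
    return list(chain.from_iterable(
        combinations(objects, r) for r in range(len(objects) + 1)))

stuff = ["apple", "book", "cat"]
groupings = powerset(stuff)
print(len(stuff), len(groupings))   # 3 objects, 2**3 == 8 groupings
assert len(groupings) == 2 ** len(stuff)
```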
Naive set theory leads to paradoxes when defining self-referential sets. The idea of “infinite” gods seems to have similar problems. There are various ways to resolve this. The typical one used in foundations of mathematics is the notion of a collection that is too large to be a set, a “proper class”. (“Class” used to be synonymous with “set”.) But later on in the discussion it was pointed out that this isn’t the only possible resolution.
I don’t know of any officially sanctioned way. But, hypothetically, meeting a publicly-known real human in person and giving them your public PGP key might work. Said real human could vouch for you and your public key, and no one else could fake a message signed by you, assuming you protect your private key. It’s probably sufficient to sign and post one message proving this is your account (profile bio, probably), and then we just have to trust you to keep your account password secure.
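As a sketch of the sign-and-verify idea (using Ed25519 from Python’s `cryptography` package as a stand-in for PGP, purely for illustration; the account name is hypothetical):

```python
# Illustrative only: Ed25519 signatures standing in for PGP.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # keep this secret
public_key = private_key.public_key()        # publish this (e.g., in a profile bio)

claim = b"I am the owner of the account 'example_user'."  # hypothetical account
signature = private_key.sign(claim)

# Anyone holding the published public key can check the claim:
try:
    public_key.verify(signature, claim)
    print("Valid: only the private-key holder could have signed this.")
except InvalidSignature:
    print("Invalid signature.")
```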
Would it help if we wore helmets?
Yes.
Hissp v0.5.0 is up.
python -m pip install hissp
If you always wanted to learn about Lisp macros, but only know Python, try the Hissp macro tutorials.
That seems to be getting into Game Theory territory. One can model agents (players) with different strategies, even suboptimal ones. A lot of the insight from Game Theory isn’t just about how to play a better strategy, but how changing the rules affects the game.
Not sure I understand what you mean by that. The Universe seems to follow relatively simple deterministic laws. That doesn’t mean you can use quantum field theory to predict the weather. But chaotic systems can be modeled as statistical ensembles. Temperature is a meaningful measurement even if we can’t calculate the motion of all the individual gas molecules.
If you’re referring to human irrationality in particular, we can study cognitive bias, which is how human reasoning diverges from that of idealized agents in certain systematic ways. This is a topic of interest at both the individual level of psychology, and at the level of statistical ensembles in economics.
It’s short for “woo-woo”, a derogatory term skeptics use for magical thinking.
I think the word originates as onomatopoeia from the haunting woo-woo Theremin sounds played in black-and-white horror films when the ghost was about to appear. It’s what the “supernatural” sounds like, I guess.
It’s not about the belief being unconventional so much as it being irrational. Just because we don’t understand how something works doesn’t mean it doesn’t work (it just probably doesn’t), but we can still call your reasons for thinking so invalid. A classic skeptic might categorically dismiss anything associated with woo, but rationalists judge by the preponderance of the evidence. Some superstitions are valid. Prescientific cultures may still have learned true things, even if they can’t express them well to outsiders.
Use a smart but not self-improving AI agent to antagonize the world with the goal of making advanced societies believe that AGI is a bad idea and precipitating effective government actions. You could call this the Ozymandias approach.
ChaosGPT already exists. It’s incompetent to the point of being comical at the moment, but maybe more powerful analogues will appear and wreak havoc. Considering the current prevalence of malware, it might be more surprising if something like this didn’t happen.
We’ve already seen developments that could have been considered AI “warning shots” in the past. So far, they haven’t been enough to stop capabilities advancement. Why would the next one be any different? We’re already living in a world with literal wars killing people right now, and crazy terrorists with various ideologies. It’s surprising what people get used to. How bad would a warning shot have to be to shock the world into action given that background noise? Or would we be desensitized by then by the smaller warning shots leading up to it? Boiling the frog, so to speak. I honestly don’t know. And by the time a warning shot gets that bad, can we act in time to survive the next one?
Intentionally causing earlier warning shots would be evil, illegal, destructive, and undignified. Even “purely” economic damage at sufficient scale is going to literally kill people. Our best chance is civilization stepping up and coordinating. That means regulations and treaties, and only then the threat of violence to enforce the laws and impose the global consensus on any remaining rogue nations. That looks like the police and the army, not terrorists and hackers.
We have already identified some key resources involved in AI development that could be restricted. The economic bottlenecks are mainly around high energy requirements and chip manufacturing.
Energy is probably too connected to the rest of the economy to be a good regulatory lever, but the U.S. power grid can’t currently handle the scale of the data centers the AI labs want for model training. That might buy us a little time. Big tech is already talking about buying small modular nuclear reactors to power the next generation of data centers. Those probably won’t be ready until the early 2030s. Unfortunately, that also creates pressures to move training to China or the Middle East where energy is cheaper, but where governments are less concerned about human rights.
A recent hurricane flooding high-purity quartz mines made headlines because chip producers require it for the crucibles used in making silicon wafers. Lower purity means accidental doping of the silicon crystal, which means lower chip yields per wafer, at best. Those mines aren’t the only source, but they seem to be the best one. There might also be ways to utilize lower-purity materials, but that might take time to develop and would require a lot more energy, which is already a bottleneck.
The very cutting-edge chips required for AI training runs require some delicate and expensive extreme-ultraviolet lithography machines to manufacture. They literally have to plasmify tin droplets with a pulsed laser to reach those frequencies. ASML Holdings is currently the only company that sells these systems, and machines that advanced have their own supply chains. They have very few customers, and (last I checked) only TSMC was really using them successfully at scale. There are a lot of potential policy levers in this space, at least for now.
I do not really understand how technical advance in alignment realistically becomes a success path. I anticipate that in order for improved alignment to be useful, it would need to be present in essentially all AI agents or it would need to be present in the most powerful AI agent such that the aligned agent could dominate other unaligned AI agents.
The instrumental convergence of goals implies that a powerful AI would almost certainly act to prevent any rivals from emerging, whether aligned or not. In the intelligence explosion scenario, progress would be rapid enough that the first mover achieves a decisive strategic advantage over the entire world. If we find an alignment solution robust enough to survive the intelligence explosion, it will set up guardrails to prevent most catastrophes, including the emergence of unaligned AGIs.
I don’t expect uniformity of adoption and I don’t necessarily expect alignment to correlate with agent capability. By my estimation, this success path rests on the probability that the organization with the most capable AI agent is also specifically interested in ensuring alignment of that agent. I expect these goals to interfere with each other to some degree such that this confluence is unlikely. Are your expectations different?
Alignment and capabilities don’t necessarily correlate, and that accounts for a lot of why my p(doom) is so high. But more aligned agents are, in principle, more useful, so rational organizations should be motivated to pursue aligned AGI, not just AGI. Unfortunately, alignment research seems barely tractable, capabilities can be brute-forced (and look valuable in the short term), and corporate incentive structures being what they are, what we’re seeing in practice is a reckless amount of risk-taking. Regulation could alter the incentives to balance the externality with appropriate costs.
How about “bubble lighting” then?
The forms of approaches that I expected to see but haven’t seen too much of thus far are those similar to the one that you linked about STOP AI. That is, approaches that would scale with the addition of approximately average people.
Besides STOP AI, there’s also the less extreme PauseAI. They’re interested in things like lobbying, protests, lawsuits, etc.
I presume that your high P(doom) already accounts for your estimation of the probability of government action being successful. Does your high P(doom) imply that you expect these to be too slow, or too ineffective?
Yep, most of my hope is on our civilization’s coordination mechanisms kicking in in time. Most of the world’s problems seem to be failures to coordinate, but that’s not the same as saying we can’t coordinate. Failures are more salient, but that’s a cognitive bias. We’ve achieved a remarkable level of stability, in the light of recent history. But rationalists can see more clearly than most just how mad the world still is. Most of the public and most of our leaders fail to grasp some of the very basics of epistemology.
We used to think the public wouldn’t get it (because most people are insufficiently sane), but they actually seem appropriately suspicious of AI. We used to think a technical solution was our only realistic option, but progress there has not kept up with more powerful computers brute-forcing AI. In desperation, we asked for more time. We were pleasantly surprised at how well the message was received, but it doesn’t look like the slowdown is actually happening yet.
As a software engineer, I’ve worked in tech companies. Relatively big ones, even. I’ve seen the pressures and dysfunction. I strongly suspected that they’re not taking safety and security seriously enough to actually make a difference, and reports from insiders only confirm that narrative. If those are the institutions calling the shots when we achieve AGI, we’re dead. We desperately need more regulation to force them to behave or stop. I fear that what regulations we do get won’t be enough, but they might.
Other hopes are around a technical breakthrough that advances alignment more than capabilities, or the AI labs somehow failing in their project to produce AGI (despite the considerable resources they’ve already amassed), perhaps due to a breakdown in the scaling laws or some unrelated disaster that makes the projects too expensive to continue.
However, it seems to be a less reasonable approach if time scales are short or probabilities are high.
I have a massive level of uncertainty around AGI timelines, but there’s an uncomfortably large amount of probability mass on the possibility that, through some breakthrough or secret project, AGI was achieved yesterday and the news just hasn’t caught up with me. We’re out of buffer. But we might still have decades before things get bad. We might be able to coordinate in time, with government intervention.
I would expect this would include the admission of ideas which would have previously been pruned because they come with negative consequences.
What ideas are those?
Protesters are expected to be at least a little annoying. Strategic unpopularity might be a price worth paying if it gets results. Sometimes extremists shift the Overton Window.
I mean, yes, hence my comment about ChatGPT writing better than this, but if word gets out that Stop AI is literally using the product of the company they’re protesting in their protests, it could come off as hypocrisy.
I personally don’t have a problem with it, but I understand the situation at a deeper level than the general public. It could be a wise strategic move to hire a human writer, or even ask for competent volunteer writers, including those not willing to join the protests themselves, although I can see budget or timing being a factor in the decision.
Or they could just use one of the bigger Llamas on their own hardware and try to not get caught. Seems like an unnecessary risk though.
It’s also available on Android.