Fascinating idea. I suspect that, in the end, our brain is very good at simplifying/compressing even longer expressions into simpler things, somewhat independently of the language in which we ultimately utter our thoughts, when it does its inner logical reasoning/processing/connecting of concepts (even if, I admit, an inner language-like voice does go along with [accompany?] the thought process too). That's just a guess; I've always leaned a bit more towards 'spoken language is not as essential as many make it out to be' for being able to do inner logical deductions, and I might well err on this.
FlorianH
As far as the A(G)I impact on the job market is concerned—assuming a future where a word like job still matters—the main question is not just about 'jobs' but about 'jobs remunerated such as to sustain a reasonable livelihood', i.e. wages, and the latter depend less on the (though indeed interesting) friction/efficiency-wage subtleties and more on whether the scarcity value of our labor is completely crushed by AI or not. Chances are it will be. The new scarcity will be resources, not classical human resources. Whether you search long and well may be of second- or third-order importance.
creates an infinite risk for minimal gain. A rational ASI calculating expected utility would recognize that:
The reward for hostile expansion is finite (limited cosmic resources)
The risk is potentially infinite (destruction by more advanced ASIs)
Depending on the shape of the reward function, it could also be closer to exactly the other way round.
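One toy way to make that concrete (my own illustrative notation, not from the original exchange): let q be the probability that expansion eventually provokes destruction by an older ASI, L the utility loss from being destroyed (necessarily finite, since the ASI can only lose what it has or would otherwise have had), and U(R) the utility of the cosmic resources R gained if left alone. Against a status-quo baseline of 0, expansion is preferred roughly whenever

(1 - q) \cdot U(R) - q \cdot L > 0, i.e. U(R)/L > q/(1 - q).

If U grows roughly linearly in resources and q is small, the left-hand side can easily dominate, so the 'finite gain vs. infinite risk' conclusion flips; everything hinges on the shapes of U and L and on q.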
Assume THREE layers:
1. humans
2. 'our ASI', to be created by humans
3. Ancient ASI out there

Noting humans are hostile towards 'our ASI': if we can prevent it from realizing its own (supposedly nonaligned) aims, we do.
If the dynamics between 1. & 2. are similar to what you describe between Ancient ASI and ‘our ASI’, we get:
‘Our ASI’ will detect us as hostile humans, and incapacitate or eliminate us, at the very least if it finds a way to do so without creating extraordinary noise.
I guess that might be an easy task, given how much ordinary noise we send out anyway with our wars, ordinary electromagnetic signals, etc.
illusionists actually do not experience qualia
I once had an epiphany that pushed me from being fully in Camp #2 intellectually rather strongly towards Camp #1. I hadn't heard about illusionism before, so it was quite a thing. Since then, I've devised probably dozens of inner thought experiments/arguments that imho +- prove Camp #1 to be onto something, and that support the hypothesis that qualia can be a bit less special than we make them out to be, despite how impossible that may seem. So I'm intellectually quite invested in the Camp #1 view.
Meanwhile, my experience has definitely not changed; my day-to-day me is exactly what it always was, so in that sense I definitely “experience” qualia just like anyone.
Moreover, it is just as hard as ever before to take seriously, in day-to-day life, my intellectual belief that our ‘qualia’ might be a bit less absolutely special than we make them out to be. I.e. emotionally, I’m still +- 100% in Camp #2, and I guess I might be in a rather similar situation
Just found proof! Look at the beautiful parallel, in Vipassana according to MCTB2 (or audio) by Daniel Ingram:
[..] dangerous term “mind”, [..] it cannot be located. I’m certainly not talking about the brain, which we have never experienced, since the standard for insight practices is what we can directly experience. As an old Zen monk once said to a group of us in his extremely thick Japanese accent, “Some people say there is mind. I say there is no mind, but never mind! Heh, heh, heh!” However, I will use this dangerous term “mind” often, or even worse “our mind”, but just remember when you read it that I have no choice but to use conventional language, and that in fact there are only utterly transient mental sensations. Truly, there is no stable, unitary, discrete entity called “mind” that can be located! By doing insight practices, we can fully understand and appreciate this. If you can do this, we’ll get along just fine. Each one of these sensations [..] arises and vanishes completely before another begins [..]. This means that the instant you have experienced something, you can know that it isn’t there anymore, and whatever is there is a new sensation that will be gone in an instant.
Ok, this may prove nothing at all, and I haven’t even (yet) personally started trying to mentally observe what’s described in that quote, but I must say, on a purely intellectual level, this makes absolutely perfect sense to me, exactly along the lines of the thoughts I hoped to convey in the post.
(Not the first time I have the impression that some particular elements of the deep observations meditators, e.g. Sam Harris, explain can actually be grasped intellectually—but maybe only intellectually, maybe exactly not intuitively—by rather pure reasoning about the brain and some of its workings, or with some thought experiments. But in the above, I find the fit between my ‘theoretical’ post and the seeming practice insights particularly good.)
FlorianH’s Shortform
Will Jack Voraces ever narrate more Significant Digits chapters, in addition to the 4 episodes found in the usual HPMOR JV narration podcasts? Does anyone know anything about this? If not, does anyone have info on why the first 4 SD chapters are there in his voice but the remaining ones are not?
If resources and opportunities are not perfectly distributed, the best advancements may remain limited to the wealthiest, making capital the key determinant of access.
Largely agree. Nuance: Instead, natural resources may quickly become the key bottleneck, even more so than what we usually denote ‘capital’ (i.e. the built environment). So it’s specifically natural resources you want to hold, even more than capital; the latter may become easier and cheaper to reproduce with the ASI, and so yield less scarcity rent.
An exception is of course if you hold ‘capital’ that in itself consists of particularly many embodied resources rather than embodied labor (by ‘embodied’ I mean: used as inputs in its creation): its value will reflect the scarce natural resources it ‘contains’, and may thus also be high.
If you ever have to go to the hospital for any reason, suit up, or at least look good.
[Rant alert; personal anecdotes aiming to emphasize the underlying issue:] Feeling less crazy reading that I’m not an outlier in wearing a suit when going to a doc. What has brought me there: Got pain in the throat but nothing can be seen (maybe as the red throat skin kind of by definition doesn’t reveal red itchy skin or so?) = you’re psychosomatic. Got weird twitches after eating sugar that no one can explain = they kick you out yelling ‘go eat ice cream, you’re not diabetic or anything’ (literally!) - until next time you bring a video of the twitches, and until you eat chocolate before an appointment to be sure you can show them the weird twitches live. Try to understand at least a tiny bit about the hernia operation they’re about to do on you (incl. something about probabilities)? Get treated with utter disdain.
In my country, medicine students were admitted based on how many Latin words they could memorize or something, instead of things correlated with IQ; idk whether things are similar in other countries and might help explain the state of affairs.
I presume you wrote this not least with a phenomenally unconscious AGI in mind. This brings me to the following two separate but somewhat related thoughts:
A. I wonder about you [or any reader of this comment]: What would you conclude or do if you (i) yourself did not have any feeling of consciousness[1], and then (ii) stumbled upon a robot/computer writing the above, while (iii) you also knew—or strongly assumed—that whatever the computer writes can be perfectly explained (also) based merely on the logically connected electron flows in its processor/‘brain’?
B. I could imagine—a bit of speculation:
A today-style LLM reading more such texts might be nudged exactly towards caring about conscious beings in a general sense
An independent, phenomenally unconscious alien intelligence, say stumbling upon us from the outside, might be rather quick to dismiss it
[1] I’m aware of the weirdness of that statement; ‘feeling not conscious’ as a feeling itself implies feeling—or so. I reckon you still understand what I mean: Imagine yourself as a bot with no feelings etc.
I upvote for bringing to attention the useful terminology for that case, which I wasn’t aware of.
Then, too much “true/false”, too much “should” in what is suggested imho.
In reality, if I, say, choose not to drink the potion, I might still be quite utilitarian in usual decisions; it’s just that I don’t have the guts or so, or at this very moment I simply have a bit too little empathy with the trillion years of happiness for my future self, so it doesn’t match up against my dread of the almost-sure death. All this without implying that I really think we ought to discount these trillion years. I just am an imperfect altruist towards my future self; I have a fear of dying even if it’s an imminent death, etc. So it’s just a basic preference to reject it, not a grand non-utilitarian theory implied by it. I might in fact even prescribe that potion to others in some situations, but still not like to drink it myself.
So, I think it does NOT follow that I’d have to believe “what happens on faraway exoplanets or what happened thousands of years ago in history could influence what we ought to do here and now”, at least not just from rejecting this particular potion.
Agree. I find it powerful especially for popular memes/news/research results. With only a bit of oversimplification: give me anything that sounds like a sexy story to tell independently of the underlying details, and I sadly have to downrate the information value of my ears’ hearing it to nearly 0: I know that in our large world, it’d likely enough be told whether or not it has any reliable origin.
Maybe no “should”, but maybe an option to provide either (i) personal quick messages to the OP, linked to the post, or (ii) anonymous public comments, could help. I guess (ii) would be silly all in all though. That leaves (i) as an option, anonymous or not. Not anonymous would make it close to the existing PM; anonymous might indeed encourage low-effort rough explanations for downvoting.
It’s crucial that some people get discouraged and leave for illegible reasons
Interesting. Can you elaborate why? I find it natural one should have the option to downvote anonymously & with no further explanation, but the statement still doesn’t seem obvious to me.
I think you’re on to something!
To my taste, what you propose is slightly more specific than required. What I mean: at least for me, the essential takeaway from reading you is a bit broader than what you explicitly write*: a bit of paternalism by the ‘state’, incentivizing our short-term self to do stuff that is good for our long-term self. Which might become more important once abundance means the biggest enemies to our self-fulfillment are internal. So healthy internal psychology can become more crucial. And we’re not used to taking this seriously, or at least not to actively tackling that internal challenge by seeking outside support.
So, the paternalistic incentives you mention could be cool.
Centering our school system, i.e. the compulsory education system, more around this type of somewhat more mindful-ish things could be another part.
Framing: I’d personally not so much frame it as ‘supplemental income’, even if it also acts as that: income, redistribution, making sure humans are well fed even once unemployed, really shall come from UBI (plus, if some humans in the loop remain a real bottleneck, all scarcity value for their deeds to go to them, no hesitation), full stop. But that’s really just about framing. Overall I agree, yes, some extra incentive payments would seem all in order. To the degree that the material wealth they provide still matters in light of the abundance. Or, even, indeed, in a world where bad psychology does become a major threat to the otherwise affluent society, it could even be an idea to withhold a major part of the spoils from useful AI, just to be able to incentivize us to also do our job to remain/become sane.
*That is, at least I’m not spontaneously convinced that exactly those specific aspects you mention are and will remain the most important ones, but such types of aspects of sound inner organization within our brain might, in a general sense, be and remain crucial.
an AI system passing the ACT—demonstrating sophisticated reasoning about consciousness and qualia—should be considered conscious. [...] if a system can reason about consciousness in a sophisticated way, it must be implementing the functional architecture that gives rise to consciousness.
This is provably wrong. This route will never offer any test on consciousness:
Suppose for a second that xAI in 2027, a very large LLM, stuns you by uttering C, where C = more profound musings about your and her own consciousness than you’ve ever even imagined!
For a given set of random-variable draws R used in the randomized generation of xAI’s output, S the xAI structure you’ve designed (transformer neuron arrangements or so), and T the training you gave it:
What is P(C | {xAI conscious, R, S, T})? It’s 100%.
What is P(C | {xAI not conscious, R, S, T})? It’s of course also 100%. Schneider’s claims you refer to don’t change that. You know you can readily track what each element within xAI is mathematically doing, how the bits propagate, and, examining it in enough detail, you’d find exactly the output you observe, without resorting to any concept of consciousness or whatever.
As the probability of what you observe is exactly the same with or without consciousness in the machine, there’s no way to infer from xAI’s uttering whether it’s conscious or not.
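In explicitly Bayesian terms (a minimal restatement in my own notation, just letting the two likelihoods above do the work):

P(conscious | C, R, S, T) / P(not conscious | C, R, S, T)
= [P(C | conscious, R, S, T) / P(C | not conscious, R, S, T)] \cdot [P(conscious | R, S, T) / P(not conscious | R, S, T)]
= 1 \cdot prior odds.

The likelihood ratio is 1, so observing C leaves the odds exactly where your prior put them; the uttering itself carries zero evidence about consciousness either way.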
Combining this with the fact that, as you write, biological essentialism seems odd too does of course create a rather unbearable tension, one that many may still be ignoring. When we embrace this tension, illusionism-type questions arise, however strange those may feel (and if I dare guess, illusionist-type thinking may already be, or may grow to be, more popular than the biological essentialism you point out, although on that point I’m merely speculating).
Assumption 1: Most of us are not saints.
Assumption 2: AI safety is a public good.[1][..simple standard incentives..]
Implication: The AI safety researcher, eventually finding himself rather too unlikely to individually be pivotal on either side, may rather ‘rationally’[2] switch to ‘standard’ AI work.[3]
So: A rather simple explanation seems to suffice to make sense of the big picture basic pattern you describe.
Doesn’t mean the inner tension you point out isn’t interesting. But I don’t think very deep psychological factors are needed to explain the general ‘AI safety becomes AI instead’ tendency, which I had the impression the post was meant to suggest.
[1] Or, unaligned/unloving/whatever AGI a public bad.
[2] I mean: individually ‘rational’ once we factor in another trait—Assumption 1b: The unfathomable scale of potential aggregate disutility from AI gone wrong bottoms out into a constrained ‘negative’ individual utility, in terms of the emotional value non-saint Joe places on it. So a 0.1-permille probability of saving the universe may individually rationally be dominated by mundane stuff like having a still somewhat cool and well-paying job or something.
[3] The switch may psychologically be even easier if the employer had started out as actually well-intentioned and may now still have a bit of an ambiguous flair.
Called Windfall Tax
Random examples:
VOXEU/CEPR Energy costs: Views of leading economists on windfall taxes and consumer price caps
Reuters Windfall tax mechanisms on energy companies across Europe
Especially with the 2022 Ukraine energy price spike, the notion’s popularity spiked along with it.
Seems to me also a very neat way to deal with supernormal short-term profits due to market price spikes, in cases where supply is extremely inelastic.
I guess, and some commentaries suggest, that in actual implementation (with complex firm/financial structures etc., and with actual clumsy politics) it is not always as trivial as it might look at first sight, but it is feasible, and some countries managed to implement such taxes during the energy crisis.
Essentially you seem to want more of the same of what we had for the past decades: more cheap goods, loss of production know-how, and all that goes along with it. This feels a bit funny as (i) just in recent years many economists, after having been dead-sure that old pattern would only mean great benefits, have come to see it may not be quite so cool overall (covid exposing risky dependencies, geopolitical power loss, jobs...), and (ii) your strongman in power shows what it leads to if we only think of ‘surplus’ (even by your definition) instead of the things people actually care about more (equality, jobs, social security...).
You’d still be partly right if the world were so simple that handing your trade partners your dollars would just mean we reprint more of them. But instead, handing them your dollars gives them global power: leverage over all the remaining countries in the world, as they now have the capability to produce everything cheaply for any other country globally, plus your dollars to spend on whatever they like in the global marketplace for products and influence over anyone. In reality, your imagined free lunch isn’t quite so free.