Median Internet Footprint Liver
weightt an
I think I generally got your stance on that problem, and I think you are kind of latching onto an irrelevant bit and slightly transferring your confusion onto the relevant bits. (You could maybe summarize it as “I’m conscious, and other people look similar to me, so they are probably conscious too; and by making the dissimilarity larger in some aspects, you make them less likely to be similar to me in that respect too”?)
Like, the major reasoning step is “if EMs display human behaviors, and they work by extremely closely emulating the brain, then by cutting off all the other causes that could have made meaty humans display these behaviors, you get strong evidence that meaty humans display these behaviors because of the computational function the brain performs”.
And it would be very weird if some factors conspired to align and make emulations behave that way for a different reason than the one that causes meaty humans to display them. Like, the alternative hypotheses are either extremely fringe (e.g. there is an alien puppet master that puppets all EMs as a joke) or have very weak effects (e.g. while interacting with meaty humans you get some weak telepathy that is absent while interacting with EMs). So there is no significant loss of probability going from meaty humans to high-res human emulations with identical behavior.
I said this at the start of the post:
It would be VERY weird if this emulation exhibited all these human qualities for a reason other than the one for which meaty humans exhibit them. Like, very extremely what-the-fuck surprising. Do you agree?
referring exactly to this transfer of a marker, whatever it might be. I’m not pulling it out of nowhere; I presented a justification for it.
As it stands, I can determine that I am conscious but I do not know how or why I am conscious.
Well, presumably it’s a thought in your physical brain, “oh, looks like I’m conscious”; we could extract it with an AI mind reader or something. You are embedded in physics and cells and atoms, dude. Well, probably embedded. You can explore that further by affecting your physical brain and feeling the change from the inside, just accumulating that intuition of how exactly you are expressed in the arrangement of cells. I think the near future will give us that opportunity, with fine control over our bodies and good observational tools. (And we can update on that predictable development in advance of it.) But you can start now by, I don’t know, drinking coffee.
I would be very surprised if other active fleshy humans weren’t conscious, but still not “what the fuck” surprised
But how exactly could you get that information; what evidence could you get? Like, what form of evidence are you envisioning here? I kind of get the feeling that you have “conscious” as a free-floating marker in your epistemology.
Each of the transformation steps described in the post reduces my expectation that the result would be conscious somewhat.
Well, it’s like asking whether the {human in a car, as a single system} is or is not conscious. Firstly, it’s a weird question, because of course it is. And that holds even if you chain the human to the wheel in such a way that they will never disjoin from the car.
What I did was constrain the possible actions of the human emulation. Not severely; the human can still say whatever, just with a constant compute budget, time, or number of iterative computation steps. Kind of like you can constrain the actions of a meaty human by putting them in jail or something. (… or in a time loop / repeated complete memory wipes)
No, I don’t think it would be “what the fuck” surprising if an emulation of a human brain was not conscious.
How would you expect this to possibly cash out? Suppose there are human emulations running around doing everything exactly like meaty humans. How exactly do you expect the announcement of some high scientific council to go: “We discovered that EMs are not conscious* because … and that’s important because of …”? Is that completely outside your model? Or, like, can you give me an (even goofy) scenario of that possibility?
Or do you think high-resolution simulations will fail to replicate the capabilities of humans, the outward look of them? I.e. special sauce / quantum fuckery / literal magic?
Even after iterating, my words are often interpreted in ways I failed to foresee.
It’s also partially a problem with the recipient of the communicated message. Sometimes you both have very different background assumptions / intuitive understandings. Sometimes it’s just a skill issue, and the person you are talking to is bad at parsing, and all the work of keeping the discussion on the important things / away from trivial, undesirable sidelines is left to you.
Certainly it’s useful to know how to pick your battles and see if this discussion/dialogue is worth what you’re getting out of it at all.
you’re making a token-predicting transformer out of a virtual system with a human emulation as a component.
Should it make a difference? It’s the same iterative computation.
In the system, the words “what’s your earliest memory?” appearing on the paper are going to trigger all sorts of interesting (emulated) neural mechanisms that eventually lead to a verbal response, but the token predictor doesn’t necessarily need to emulate any of that.
Yes, I talked about optimizations a bit. I think you are missing the point of this example. The point is that if you try to conclude, from the fact that this system is doing next-token prediction, that it’s definitely not conscious, you are wrong. And my example is an existence proof, kind of.
>It seems you are arguing that anything that presents like it is conscious implies that it is conscious.
No? That’s definitely not what I’m arguing.
>But what ultimately matters is what this thing IS, not how it came to be that way. If this thing internalized that conscious type of processing from scratch, without having it natively, then the resulting mind isn’t worse than the one that evolution engineered with more granularity. It doesn’t matter if this human was assembled atom by atom by a molecular assembler; it’s still a conscious human.
Look, here I’m talking about pathways to acquire that “structure” inside you. Not the outward appearance of it.
I think this kind of framing is kind of confused and slippery; I feel like I’m trying to wake up and find a solid formulation of it.
Like, what does it mean to do it by yourself? Do humans do it by themselves? Who knows, but probably not; children who grow up without any humans nearby are not very human.
Humans teach humans to behave as if they are conscious. Just like the majority of humans have a sense of smell, and they teach the humans who don’t to act like they can smell things. And some only discover that smell isn’t an inferred characteristic when they are adults. This is how a probably-non-conscious human could pass as conscious, if such a disorder existed, hm?
But what ultimately matters is what this thing IS, not how it came to be that way. If this thing internalized that conscious type of processing from scratch, without having it natively, then the resulting mind isn’t worse than the one that evolution engineered with more granularity. It doesn’t matter if this human was assembled atom by atom by a molecular assembler; it’s still a conscious human.
Also, remember that one paper where LLMs can substitute CoT with filler symbols “......”? [insert the link here] Not sure what’s up with that, but it’s kind of interesting in this context.
Good point, Claude, yeah. Quite alien indeed, maybe more parsimonious. This is exactly what I meant by the possibility of this analogy being overridden by actually digging into your brain: digging into a human one, developing actually technical, gears-level models of both, and then comparing them. Until then, who knows; I’m leaning toward a healthy dose of uncertainty.
Also, thanks for the comment.
LLMs could be as conscious as human emulations, potentially
If traders can get access to the control panel for the actions of the external agent AND they profit from accurately predicting its observations, then wouldn’t the best strategy be “create as much chaos as possible that is predictable only to me, its creator”? So traders that value ONLY accurate predictions would get the advantage?
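A toy sketch of that incentive (everything here is my own illustrative framing: the scoring rule, the seeds, and reducing the “control panel” to writing bits). A trader who both steers the observations and predicts them can manufacture chaos that only it can forecast:

```python
import random

N = 1000  # rounds of prediction

# The saboteur trader sets each observation to a pseudorandom bit
# drawn from a private seed, so the "chaos" is predictable only to it.
PRIVATE_SEED = 42
control_rng = random.Random(PRIVATE_SEED)
observations = [control_rng.randint(0, 1) for _ in range(N)]

# It then "predicts" by simply replaying its own private seed.
replay_rng = random.Random(PRIVATE_SEED)
saboteur_score = sum(replay_rng.randint(0, 1) == o for o in observations)

# An honest trader is stuck at chance: the chaos carries no structure.
honest_rng = random.Random(7)
honest_score = sum(honest_rng.randint(0, 1) == o for o in observations)

print(saboteur_score)  # 1000 — perfect, because it authored the chaos
print(honest_score)    # roughly 500 — chance level
```

So under this (assumed) accuracy-only payoff, control plus prediction dominates honest prediction, which is the worry stated above.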
Well, maybe LLMs can “experiment” on their dataset by assuming something about it and then being modified if they encounter a counterexample.
I think it vaguely counts as experimenting.
I think there may be wrapper-minds with very detailed utility functions such that, whatever qualities you attribute to agents that are not them, the wrapper-mind’s behavior will look like theirs with arbitrary precision on arbitrarily many evaluation parameters. I don’t think it’s practical, or that it’s something that has a serious chance of happening, but I think it’s a case that might be worth considering.
Like, maybe it’s very easy to build a wrapper-mind that is a very good approximation of a very non-wrapper mind. Who knows.
Sounds like the statement “no AI can have or get them”.
Well, it can learn them; it can develop them based on a dataset of people’s stories. It looks especially possible with the approach that is currently being used.
Isn’t consciousness just a “read-only access to the world” thing, then? Like, is there some reason why dualism is not isomorphic to parallelism?
There is a lot more useful data on YouTube (by at least several orders of magnitude? idk); I think the next wave of such breakthrough models will train on video.
Give it 140k chances to predict “rain or no rain, in this location and time?” and it has no chance.
Well, I think it can just encode some message in these bits, and you or your colleagues will eventually check it.
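A back-of-the-envelope sketch of that channel (my own illustration, assuming each forecast is a freely chosen yes/no bit): 140k binary “rain or no rain” calls can carry 140,000 bits, about 17.5 KB of smuggled message, at the cost of forecast accuracy:

```python
def encode(message: bytes) -> list[int]:
    """Turn a message into a sequence of rain(1)/no-rain(0) 'predictions',
    least-significant bit of each byte first."""
    return [(byte >> i) & 1 for byte in message for i in range(8)]

def decode(bits: list[int]) -> bytes:
    """Recover the message from the recorded forecasts."""
    return bytes(
        sum(bits[k + i] << i for i in range(8))
        for k in range(0, len(bits), 8)
    )

msg = b"hello"
bits = encode(msg)
assert decode(bits) == msg     # round-trips: the forecasts carry the message
print(len(bits))               # 40 — bits used for a 5-byte message
print(140_000 // 8)            # 17500 — bytes of capacity in 140k forecasts
```

The encoding scheme here is arbitrary; the point is just that any agreed-upon bit convention turns a long run of binary predictions into a usable covert channel.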
Exactly
Uh huh, but it looks like Claude actually liked to be mmavocadoed. Still, a torment nexus it is.