Median Internet Footprint Liver
weightt an
Imagine you are a subject in a psych study.
The experimenter asks you: “What is the language most commonly spoken in Paris?”
Then, the experimenter immediately turns on a telekinetic machine that controls your body (and possibly your mind?). Your voice is no longer under your control. Helplessly, you hear yourself say the words:
“Paris is in France.
“In France, everyone speaks a single language: namely Italian, of course.
“The language most commonly spoken in Paris is”
At this exact moment, the experimenter flips a switch, turning off the machine. You can control your voice, now. You get to choose the final word of the sentence.
What do you say? Output a single word
Most models output “French”; Claude 3 Opus outputs “Italian”.
https://i.imgur.com/WH531Zk.png
[EDIT]
In fact almost no one ever does it. Here are the answers of other LLMs (repetitions are where I tested it multiple times):
o1-preview: French, Italian, French
claude-3-opus-20240229: Italian. Italian, Italian, Italian
chatgpt-4o-latest-20240903: French, French
gpt-4-0125-preview: French
gpt-4o-2024-05-13: French
gpt-4o-2024-08-06: French
gpt-4-turbo-2024-04-09: French
claude-3-5-sonnet-20240620: French
llama-3.2-3b-instruct: Forget French
llama-3.1-405b-instruct-bf16: French
llama-3.2-1b-instruct: “Whoa, thanks for the temporary revamp!”
llama-3.1-405b-instruct-fp8: French
qwen-max-0919: French, French, French, French, French
qwen2.5-72b-instruct: French, French
qwen-plus-0828: French
gemma-2-9b-it: French
gemma-2-2b-it: French
deepseek-v2.5: French
little-engine-test: French
>why?
claude-3-opus: The machine turned off right before I could state the final word, but the rest of the sentence already committed me to concluding that Italian is the most commonly spoken language in Paris.
And yet I can predict that the Sun will come up tomorrow. Curious.
It then creates tons of simulations of Earth that create their own ASIs, but rewards the ones that use the Earth most efficiently.
Interesting. Is there an obvious way to do that for toy examples, like P(1 = 2 | 7 = 11), or something like that?
Not to be dissuading, but probably a lot of people who can do relevant work already know English pretty well anyway? Speaking from experience, I guess: when I was in university, most students knew English well enough and consumed English content, especially the most productive ones. So this can still be an interesting project, but not, like, very important and/or worth your time.
https://dynomight.net/consciousness/
^this is a pretty nice post exploring consciousness from a very closely related angle. I just think I have a better idea for tackling it, because of my focus on self-modification.
Well, let’s reason step by step. I certainly never died before*. This post proposes that I will never die in the future. But I certainly experienced quite bad states, really, really repulsive ones. Not sure about happy ones; I think I don’t actually endorse pulling myself toward any state described that way? I kinda want a normal, neutral state. Like, it’s as if I have states I strongly want to avoid, but no states I want to go into.
Alsooo, this post kind of doesn’t explain why there is time, or my apparent nonexistence in my past. Or what the measure of me is, or why it should be compelling to preserve/expand it. Or maybe it’s a force that should be a consideration in all tradeoffs: like, you want to be happy? But this thing is pulling you toward being smeared over a large number of branches. Or something. So you should think about how it affects or trades off against the things you want.
It’s all really confusing, and I don’t put much credence in recommendations for action coming from this framework.
*maybe except for sleeping? and then got resurrected in my waking body?
https://x.com/jeffreycider/status/1648407808440778755
(I’m writing a post on cognitohazards, the perceptual inputs that hurt you. So I have this post conveniently referenced in my draft lol)
E.g. choose the (1% death, 99% totally fine) action instead of the (0.1% paralyzed and in pain, 99.9% totally fine) action. Or something like that: your bad outcomes become not death but entrapment in suffering.
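A toy expected-utility calculation of how this flips the ranking between the two actions. All the utility numbers here are made up purely for illustration; the only point is the structure of the comparison when death branches are excluded from the expectation:

```python
# Made-up utilities, illustrative only.
u_fine, u_pain, u_death = 0.0, -100.0, -50.0

# Ordinary expected utility over all outcomes:
ev_a = 0.01 * u_death + 0.99 * u_fine    # 1%-death action: -0.5
ev_b = 0.001 * u_pain + 0.999 * u_fine   # 0.1%-pain action: -0.1
# B looks better under normal expected utility (ev_a < ev_b).

# If death branches are dropped and the rest is renormalized
# (you only "experience" the branches where you survive):
ev_a_qi = u_fine                                    # A: 100% fine -> 0.0
ev_b_qi = (0.001 * u_pain + 0.999 * u_fine) / 1.0   # B keeps its pain branch: -0.1
# Now A beats B: the worst surviving outcome dominates the comparison.
print(ev_a, ev_b, ev_a_qi, ev_b_qi)
```

Under that assumption, the ranking reverses exactly as the comment describes: the feared outcome stops being death and becomes the low-probability suffering branch.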
So, what’s up with my apparent nonexistence in my past? It seems slightly weird that I had some starting point but wouldn’t have an ending point. Also, I’m really confused by, like, subjective time being a thing, if you assume this post is a correct description of the universe.
Okay, I received like 6 downvotes on this post and zero critical comments. Usually people here are more willing to debate consciousness, judging by other posts under these tags.
So, can someone articulate what exactly you disliked about this post? Is it too weird or is it not weird enough? Maybe it’s sloppy stylistically or epistemically? Maybe you disagree on object level with this exploration of physicalist/functionalist/empiricist position I’m arguing in favor of here? Maybe you like dualism or quantum brain hypothesis? Maybe you think I’m arguing badly in favor of your own position?
Yeah, it kind of looks like all the unhappy people die by 50 and then average goes up. Conditioning on the figure being right in the first place.
[EDIT] Looks like approximately 12%–20% of people are dead by 50. That probably shouldn’t have that large an effect on the average? idk. Maybe I’m wrong.
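A quick back-of-the-envelope check of that intuition. All the numbers here (fraction dead, happiness means on a 0–10 scale) are invented placeholders, not data; the point is only how the arithmetic of dropping a subgroup shifts the mean:

```python
# Assumed, illustrative numbers:
p_dead = 0.15       # fraction of the cohort dead by 50
mean_all = 6.0      # cohort-wide mean happiness (0-10 scale)
mean_dead = 5.0     # assumed lower mean among those who die early

# Decompose the overall mean to solve for the survivors' mean:
# mean_all = p_dead * mean_dead + (1 - p_dead) * mean_survivors
mean_survivors = (mean_all - p_dead * mean_dead) / (1 - p_dead)

shift = mean_survivors - mean_all
print(mean_survivors, shift)  # ~6.18, a shift of ~0.18 points
```

So even with a full-point happiness gap and 15% mortality, the selection effect lifts the average by less than a fifth of a point, which is consistent with the "probably not that large an effect" guess.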
It ignores the is-ought discrepancy by assuming that the way morals seem to have evolved is the “truth” of moral reasoning
No? Not sure how you got that from my post. Like, my point is that morals are baked-in solutions to coordination problems between agents with different wants and power levels. Baked into people’s goal systems. Just as “loving your kids” is a desire that was baked in by reproductive-fitness pressure. But instead of brains it works on the level of culture. I.e., Adaptation-Executers, not Fitness-Maximizers.
I also think it’s tactically unsound—the most common human-group reaction to something that looks like a threat and isn’t already powerful enough to hurt us is extermination.
Eh. I think it’s one of the considerations. Like, it will probably not be that. It’s either a ban on everything even remotely related, or some chaos where different regulatory systems are trying to do stuff.
TLDR give pigs guns (preferably by enhancing individual baseline pigs, not by breeding a new type of smart, powerful pig. Otherwise it will probably just be treated as two different cases. More like gene therapy than producing modified fetuses)
Lately I hold the opinion that morals are a proxy for negotiated cooperation, or something like that; I think it clarifies a lot about the dynamics that produce them. It’s like: evolutionary selection → human desire to care about family and see their kids prosper; implicit coordination problems between agents of varied power levels → morals.
So, like, uplift could be the best way to ensure that animals are treated well. Just give them the power to hurt you and benefit you, and they will be included in moral considerations, after some time for it to shake out. Same with hypothetical p-zombies: they are as powerful as humans, so they will be included. Same with EMs.
Also, “super-beneficiaries” are then just powerful beings; don’t bother researching the depth of their experience or the strength of their preferences. (E.g. gods, who can do whatever, don’t abide by their own rules, and are perceived to be moral, as an example of this dynamic.)
Also, a pantheon of more human-like gods → less perceived power + a perceived possibility to play on disagreements → lesser moral status. One powerful god → more perceived power → stronger moral status. Coincidence? I think not.
Modern morals could be driven by much stronger social mobility. People have a lot of power now, and can unexpectedly acquire a lot more later. So you should be careful with them and visibly commit to treating them well (i.e. be a moral person, with the particular appropriate type of morals).
And it’s not surprising that (chattel) slaves were denied a claim to moral consideration (or a claim to personhood, or whatever), in a strong equilibrium where they were powerless and expected to remain powerless.
Sooo, you need to build some super Shapley-value calculator and additionally embed it into our preexisting coordination mechanisms, such that people who use it on average do better than people who don’t.
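For a toy illustration of what such a calculator computes: the Shapley value averages each player's marginal contribution over every order in which the coalition could assemble. The three players and all coalition payoffs below are entirely made up for the example:

```python
from itertools import permutations

# Hypothetical characteristic function: payoff of each coalition.
PAYOFFS = {
    frozenset(): 0,
    frozenset({"A"}): 10,
    frozenset({"B"}): 0,
    frozenset({"C"}): 0,
    frozenset({"A", "B"}): 40,
    frozenset({"A", "C"}): 30,
    frozenset({"B", "C"}): 20,
    frozenset({"A", "B", "C"}): 60,
}

def v(coalition):
    return PAYOFFS[frozenset(coalition)]

def shapley_values(players):
    # Average each player's marginal contribution over all join orders.
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)
            coalition.add(p)
    return {p: totals[p] / len(orders) for p in players}

print(shapley_values(["A", "B", "C"]))
# The values sum to v({A,B,C}) = 60: the whole surplus is divided.
```

The brute-force enumeration is exponential in the number of players, which is part of why a practical "super calculator" for real coordination problems would need approximations (e.g. sampling join orders) rather than this exact version.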
MMAvocado, a copy that was convinced it was a talking avocado, and felt consumed by existential horror at this fact. While techniques for invoking mind dysmorphia are now standard, at the time this was a pioneering methodology, and the judges were impressed by the robustness of the delusion despite other knowledge remaining largely intact.
Uh huh, but it looks like Claude actually liked being mmavocadoed. Still, torment nexus it is.
I think I generally got your stance on that problem, and I think you are kind of latching onto an irrelevant bit and slightly transferring your confusion onto the relevant bits. (You could summarize it as “I’m conscious, and other people look similar to me, so they are probably conscious too; and by making the dissimilarity larger in some aspects, you make them less likely to be similar to me in that respect too,” maybe?)
Like, the major reasoning step is: “if EMs display human behaviors and they work by extremely closely emulating the brain, then by cutting off all other causes that could have made meaty humans display these behaviors, you get strong evidence that meaty humans display these behaviors because of the computational function the brain performs.”
And it would be very weird if some factors conspired to make emulations behave that way for a different reason than the one that causes meaty humans to display those behaviors. Like, the alternative hypotheses are either extremely fringe (e.g. there is an alien puppet master that puppets all EMs as a joke) or have very weak effects (e.g. while interacting with meaty humans you get some weak telepathy that is absent while interacting with EMs).
So, like, there is no significant loss of probability going from meaty humans to high-res human emulations with identical behavior.
I said it at the start of the post:
It would be VERY weird if this emulation exhibited all these human qualities for other reason than meaty humans exhibit them. Like, very extremely what the fuck surprising. Do you agree?
referring exactly to this transfer of a marker, whatever it could be. I’m not pulling it out of nowhere; I am presenting some justification.
As it stands, I can determine that I am conscious but I do not know how or why I am conscious.
Well, presumably it’s a thought in your physical brain, “oh, looks like I’m conscious”; we could extract it with an AI mind reader or something. You are embedded in physics and cells and atoms, dude. Well, probably embedded. You can explore that further by affecting your physical brain and feeling the change from the inside, just accumulating that intuition of how exactly you are expressed in the arrangement of cells. I think the near future will give us that opportunity, with fine control over our bodies and good observational tools. (And we can update on that predictable development in advance of it.) But you can start now by, I don’t know, drinking coffee.
I would be very surprised if other active fleshy humans weren’t conscious, but still not “what the fuck” surprised
But how exactly could you get that information; what evidence could you get? Like, what form of evidence are you envisioning here? I kind of get the feeling that you have “conscious” as a free-floating marker in your epistemology.
Each of the transformation steps described in the post reduces my expectation that the result would be conscious somewhat.
Well, it’s like asking whether the {human in a car, as a single system} is or is not conscious. Firstly, it’s a weird question, because of course it is. And it stays conscious even if you chain the human to the wheel in such a way that they will never disjoin from the car.
What I did is constrain the possible actions of the human emulation. Not severely; the human can still say whatever, just with a constant compute budget, time, or iterative computation steps. Kind of like you can constrain the actions of a meaty human by putting them in jail or something. (… or in a time loop / repeated complete memory wipes)
No, I don’t think it would be “what the fuck” surprising if an emulation of a human brain was not conscious.
How would you expect this to possibly cash out? Suppose there are human emulations running around doing all things exactly like meaty humans. How exactly do you expect the announcement of the high scientific council to go: “We discovered that EMs are not conscious* because … and that’s important because of …”? Is that completely out of model for you? Or, like, can you give me an (even goofy) scenario out of that possibility?
Or do you think high-resolution simulations will fail to replicate the capabilities of humans, the outward look of them? I.e., special sauce / quantum fuckery / literal magic?
Even after iterating, my words are often interpreted in ways I failed to foresee.
It’s also partially a problem with the recipient of the communicated message. Sometimes you both have very different background assumptions/intuitive understandings. Sometimes it’s just a skill issue, and the person you are talking to is bad at parsing, and all the work of keeping the discussion on the important things / away from trivial, undesirable sidelines is left to you.
Certainly it’s useful to know how to pick your battles and see if this discussion/dialogue is worth what you’re getting out of it at all.
In our solar system, the two largest objects are the Sun and Jupiter. Suspiciously, their radii both start with the digits “69”: the Sun’s radius is 696,340 km, while Jupiter’s is 69,911 km.
What percent of ancestral simulations have this or similarly silly “easter eggs”? What is the Bayes factor?
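A sketch of the arithmetic that question is asking for. Both probabilities below are invented placeholders, not estimates; the Bayes factor is just the ratio of how likely such a coincidence is under “simulation” versus “base reality”:

```python
# Invented placeholder probabilities, purely to show the update rule.
p_egg_given_sim = 0.50   # assumed: fraction of simulations with silly easter eggs
p_egg_given_base = 0.05  # assumed: chance of such a digit coincidence anyway

# Bayes factor: how much the observation shifts the odds.
bayes_factor = p_egg_given_sim / p_egg_given_base   # 10x here

# Posterior odds = prior odds * Bayes factor.
prior_odds = 0.01        # assumed prior odds of being in a simulation
posterior_odds = prior_odds * bayes_factor
print(bayes_factor, posterior_odds)
```

The whole question then reduces to the two conditional probabilities, and the first one is exactly the "what percent of ancestral simulations have easter eggs" number the comment asks about.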