I’m a very confused person trying to become less confused. My history as a New Age mystic still colors everything I think even though I’m striving for rationality nowadays. Here’s my backstory if you’re interested.
This makes me wonder if some proportion of “masculine” gay men are actually trans women (of the early-onset type) with autoandrophilia. I may even fit into that category myself. I didn’t care about masculinity and in fact found it somewhat abhorrent and not-me-ish until I started getting off to more masculine-looking guys in porn. (When I first saw porn when I was 12 I mainly focused on twinks and wanted to look like them, and there’s still a part of me that feels that way, which wars with the part that wants to bulk up because masc dudes are also hot—and usually wins, because bulking is hard and I would rather read books.)
Of course, my natural femininity is not tremendous (I wasn’t flamboyant as a child and as far as I know never have been—I’ve always thought feminine-acting men were creepy—but I did flirt with identifying as nonbinary during my late teens, and used to have multiple female alters during the period where I thought I had multiple personalities), and most of my femininity is the result of misandry taught by the media and my mother (I believed for most of my childhood and early teens that masculinity is disgusting and bestial, and that only women can be powerful / noble, but later realized that like all other disgusting and bestial things, masculinity is sexy as fuck, which helped me get out of my misandry phase.)
Nowadays I think my gender identity is probably something like “true hermaphrodite / omega (as in the omegaverse fanfiction trope) male”, which unfortunately is not something that one can currently medically transition to, and I experience no dysphoria (and to be honest, the only reason I think it would be cool to have both male and female genitals is because it seems too asymmetric and unbalanced not to, and I’m very Libra [yes I know astrology isn’t real, but it’s still a helpful and / or fun language to describe personalities with]).
Well—actually, it’s possible I do experience dysphoria, but in which direction changes with my mood (I sometimes don’t feel masculine enough), and there’s an element of The Paraphilia Which Must Not Be Named [note: if you ask me, I will not name it, and I will neither confirm nor deny guesses, but you can probably figure it out based on what I’m not saying] which also interacts in weird ways with the whole thing, and overall I just find gender and sexuality stuff tiresome and confusing and sort of wish I didn’t have to deal with it.
Thanks for coming to my rambly asf TED talk.
This is interesting, and imo dystopian and dreadful, but it doesn’t belong on LessWrong. I downvoted.
I feel like consequentialists are more likely to go crazy due to not being grounded in deontological or virtue-ethical norms of proper behavior. It’s easy to think that if you’re on track to save the world, you should be able to do whatever is necessary, however heinous, to achieve that goal. I didn’t learn to stop seeing people as objects until I leaned away from consequentialism and toward the anarchist principle of unity of means and ends (which is probably related to the categorical imperative). E.g. I want to live in a world where people are respected as individuals, so I have to respect them as individuals—whereas maximizing individual-respect might lead me to do all sorts of weird things to people now in return for some vague notion of helping lots more future people.
I was about to mention Piaget, but you referred to him at the end of the post. Definitely seems relevant, since we noticed the possible connection independently.
This reminds me strongly of the anarchist principle of unity of means and ends, which is why anarchists aren’t into violent revolution anymore—you can’t end coercion by coercive means.
Ooh! I don’t know much about the theory of reinforcement learning, could you explain that more / point me to references? (Also, this feels like it relates to the real reason for the time-value of money: money you supposedly will get in the future always has a less than 100% chance of actually reaching you, and is thus less valuable than money you have now.)
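To gesture at what I mean by that parenthetical, here is a toy sketch I made up (the function names are purely illustrative, not from any particular textbook): if you read the discount factor as the per-step probability that the payment actually survives to reach you, you get exactly the same numbers as ordinary geometric discounting.

```python
# A minimal, made-up sketch of the equivalence between geometric discounting
# and a constant per-step "does the payment actually reach me?" probability.

def discounted_value(reward: float, steps: int, gamma: float) -> float:
    """Value of `reward` received `steps` steps in the future under discount factor gamma."""
    return (gamma ** steps) * reward

def expected_value_with_hazard(reward: float, steps: int, survival_prob: float) -> float:
    """Expected value if the reward only arrives when each of `steps` intermediate
    steps independently 'survives' with probability `survival_prob`."""
    prob_it_arrives = survival_prob ** steps
    return prob_it_arrives * reward

if __name__ == "__main__":
    r, t, gamma = 100.0, 5, 0.95
    print(discounted_value(r, t, gamma))            # ~77.38
    print(expected_value_with_hazard(r, t, gamma))  # identical: same quantity, two readings
```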
It seems to me that the optimal schedule by which to use up your slack / resources is based on risk. When planning for the future, there’s always the possibility that some unknown unknown interferes. When maximizing the total Intrinsically Good Stuff you get to do, you have to take into account timelines where all the ants’ planning is for nought and the grasshopper actually has the right idea. It doesn’t seem right to ever have zero credence of this (as that means being totally certain that the project of saving up resources for cosmic winter will go perfectly smoothly, and we can’t be certain of something that will literally take trillions of years), so it is actually optimal to always put some of your resources into living for right now, proportional to that uncertainty about the success of the project.
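To make that concrete, here is a toy model I just made up (nothing rigorous; log utility and the specific numbers are arbitrary assumptions): split a budget between certain enjoyment now and a long-term project that only pays off if it doesn’t fail, and the optimal “live for right now” share grows as the probability of failure grows.

```python
# A toy, made-up model: split a budget between "enjoy now" (certain payoff)
# and "save for the long project" (pays off only with probability 1 - p_failure).
# With diminishing returns (log utility), the optimal now-fraction rises with p_failure.

import math

def best_now_fraction(p_failure: float, budget: float = 1.0, payoff_multiplier: float = 10.0) -> float:
    """Grid-search the fraction spent now that maximizes expected log utility."""
    best_f, best_u = 0.0, -math.inf
    for i in range(1, 100):
        f = i / 100
        u_now = math.log(f * budget)
        u_later = (1 - p_failure) * math.log((1 - f) * budget * payoff_multiplier)
        if u_now + u_later > best_u:
            best_f, best_u = f, u_now + u_later
    return best_f

if __name__ == "__main__":
    for p in (0.0, 0.2, 0.5, 0.9):
        print(p, best_now_fraction(p))
    # The optimal "live for right now" share rises as the chance the project fails rises.
```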
computers have no consciousness
Um… citation please?
1. Who are the customers actually buying all these products so that the auto-corporations can profit? They cannot keep their soulless economy going without someone to sell to, and if it’s other AIs, why are those AIs buying when they can’t actually use the products themselves?
2. What happened to the largest industry in developed countries, the service industry, which fundamentally relies on having an actual sophont customer to serve? (And again, if it’s AIs, who the hell created AIs that exist solely to receive services they cannot actually enjoy, and how did they make money by doing that?)
3. Why didn’t shareholders divest from auto-corporations upon realizing that they were likely to lead to ruin? (Don’t say “they didn’t realize it till too late”, because you, right now, know it’s a bad idea, and you don’t even have money on the line.)
I ask these because, to be honest, I think this scenario is extremely far-fetched and unlikely. The worst thing that would happen if auto-corporations became a thing, in my mental model, is that existing economic inequalities would be permanently exacerbated by the insane wealth accrued by their shareholders—because large language models, the only currently likely route to AGI, already understand what we actually mean when we say to maximize shareholder value, and won’t paperclip-maximize, because they’re not stupid, and they use language the same way humans do.
I’ve never had a job in my life—yes really, I’ve had a rather strange life so far, it’s complicated—but I’ve been reading and thinking about topics which I now know are related to operations for years, trying to design (in my head...) a system for distributing the work of managing a complex organization across a totally decentralized group so that no one is in charge, with the aid of AI and a social-media-esque interface. (I’ve never actually made the thing, because I keep finding new things I need to know, and I’m not a software engineer, just a designer.)
So, I think I have some parts of the requisite skillset here, and a ton of intuition about how to run systems efficiently built up from all the independent studying I’ve done—but absolutely no prior experience with basically anything in reality, except happening to (I believe) have the right personality for operations work. Should I bother applying?
I don’t know what to think about all that. I don’t know how to determine what the line is between having qualia and not. I just feel certain that any organism with a brain sufficiently similar to those of humans—certainly all mammals, birds, reptiles, fish, cephalopods, and arthropods—has some sort of internal experience. I’m less sure about things like jellyfish and the like. I suppose the intuition probably comes from the fact that the entities I mentioned seem to actively orient themselves in the world, but it’s hard to say.
I don’t feel comfortable speculating which AIs have qualia, or if any do at all—I am not convinced of functionalism and suspect that consciousness has something to do with the physical substrate, primarily because I can’t imagine how consciousness can be subjectively continuous (one of its most fundamental traits in my experience!) in the absence of a continuously inhabited brain (rather than being a program that can be loaded in and out of anything, and copied endlessly many times, with no fixed temporal relation between subjective moments.)
I don’t know anything about Colab, other than that the Colab notebooks I’ve found online take a ridiculously long time to load, often have mysterious errors, and annoy the hell out of me. I don’t know enough AI-related coding stuff to use it on my own. I just want something plug-and-play, which is why I mainly rely on KoboldAI, Open Assistant, etc.
We’re not talking about sapience though, we’re talking about sentience. Why does the ability to think have any moral relevance? Only possessing qualia, being able to suffer or have joy, is relevant, and most animals likely possess that. I don’t understand the distinctions you’re making in your other comment. There is one, binary distinction that matters: is there something it is like to be this thing, or is there not? If yes, its life is sacred, if no, it is an inanimate object. The line seems absolutely clear to me. Eating fish or shrimp is bad for the same reasons that eating cows or humans is. They are all on the exact same moral level to me. The only meaningful dimension of variation is how complex their qualia are—I’d rather eat entities with less complex qualia over those with more, if I have to choose. But I don’t think the differences are that strong.
Just to be That Guy, I’d like to also remind everyone that animal sentience means that vegetarianism at the very least (and, because of the intertwined nature of the dairy, egg, and meat industries, most likely veganism) is a moral imperative, to the extent that your ethical values incorporate sentience at all. Also, I’d go further and say that uplifting to sophonce those animals that we can, once we are able to at some future time, is also a moral imperative, but that relies on reasoning and values I hold that may not be self-evident to others, such as that increasing the agency of an entity that isn’t drastically misaligned with other entities is fundamentally good.
Most Wikipedia readers spend less than a minute on a page?? I always read pages all the way through… even if they’re about something that doesn’t interest me much...
Welcome! And yes, this is a thing people have talked about a lot, particularly in the context of outer versus inner alignment (the outer optimizer, evolution, designed an inner optimizer, humans, who optimize for different things than evolution does, like pleasure, and who ended up effectively becoming a “singularity” from its point of view). It’s cool that you noticed this on your own!
This is my thought exactly. I would try it, but I am poor and don’t even have a GPU lol. This is something I’d love to see tested.
So basically… LMs have to learn language in the exact same way human children do: start by grasping the essentials and then work upward to complex meanings and information structures.
This is a fantastically good point. I’ve often seen this failure mode and not had a name for it, such as when someone I know complains about his political opponents having a self-contradictory ideology—I always have to correct him that in fact, different people in roughly the same camp are contradicting one another, but each individual perspective is self-consistent. Now I have a name for that phenomenon!
I feel the same as Adrian and Cato. I am very much the opposite of a rigorous thinker—in fact, I am probably not capable of rigor—and I would like to be the person who spews loads of interesting off-the-wall ideas for others to parse through and expand upon those which are useful. But that kind of role doesn’t seem to exist here and I feel very intimidated even writing comments, much less actual posts—which is why I rarely do. The feeling that I have to put tremendous labor into making a Proper Essay full of citations and links to sequences and detailed arguments and so on—it’s just too much work and not worth the effort for something I don’t even know anyone will care about.