I have a sort of vague question/request and would like someone's opinion on it. In cold emails, or when I have recently been introduced to someone, I would like to ask something along the lines of "What mindset/philosophy about (insert something vague like work/school, or something specific if I have it in mind) have you found most useful, and why?" I like this because it has changed recently for me, and even if I don't find their specific mindset useful, I think it would tell me a lot about them, and I am curious how people will answer.
How would you suggest improving that question? I would also like advice on making this sort of thing less awkward.
[APPRENTICE]
Not a narrow or specific kind of apprenticeship (training or a project), but something very broad, focused on learning and academics in STEM areas (an advice dispenser, and in some cases a tutor/trainer).
Fields of Study:
Computer Science: I am planning to major in this at college, so it is kind of important. I am interested in ML, AI, algorithms, low-level or precise programming, website development, and application programming, all at a surface level. I am not sure what I want to do after college, but I think it would be useful to have an idea of how all of these work in case I find myself especially interested in one or another. I will be taking an introductory programming and problem-solving class in the fall.
Psychology: Specifically rationality and learning, which, according to a textbook I picked up, fit in this category. I have a great drive to improve and optimize my ability to learn. Similarly, I want to be able to reason accurately, consciously, and clearly: out loud, in writing, and in my head. Also Philosophy of Logic, which I am taking as a class in the fall.
Math: I love math and plan to learn and practice it for the rest of my life, no matter what. It doesn't matter what kind of math; I like to experience the release of endorphins that comes with grasping a new concept and solving a difficult problem. The best part is practicing the process of reasoning. I struggle with math but hope to improve a lot over the summer, and I have been revisiting algebra and geometry since I am not taking a math class currently. I will be taking a calculus class in the fall.
Physics: I haven't taken any physics classes yet, so I don't have any rigorous understanding of it, but I like the idea of having math that you can use to predict what will happen in the real world. I will be taking a physics course next fall.
Interests: (Less important and only really a bonus)
All the stuff above
Chess: I am learning chess and having a lot of fun. I am currently 580 Elo and spend a lot of time playing.
Drawing: I am currently taking my first art class (required to graduate) and find myself really enjoying it.
Climbing: I like going bouldering; the problem-solving aspect of it is fun. Usually it is technique over strength, and it is really enjoyable to learn.
Teaching: I really like sharing what I learn, things I have made, and how I made them.
I am a high school senior; I don't know a lot about anything, and currently my only desire in life is to learn stuff. The kind of mentorship I am thinking of is looser than others described here. I would like to have a contact who has been through college and has experience and/or interest in similar things. I would describe it more as a counselor: someone I could go to for guidance about whatever I am interested in learning, someone who could give me advice on general academics. If I am really struggling to grasp a specific concept or idea, they could help walk me through it. If I don't know what to do next, they could help describe what is applicable and really worth focusing on. Traditionally an apprentice would work for their mentor; however, this is more of a charity project :). Although I would be happy to help if there is anything an extra hand/head would be useful for.
Motivation/Goal: To gain a broad base of knowledge in order to be effective at whatever I decide to do.
To contact me, just send a direct message.
The best part about this post is that you get to see how quickly everyone devolves into serious, rational discourse.
I am very new to the transgender discussion and would like to learn. I expected the disagreement but was kind of discouraged when I didn’t get any feedback. So thank you so much for the reply.
I don't have any real depth of understanding about the biology involved, just XX and XY; I was completely unaware of the brain-body relation you describe. The entirety of how phenotypes work is super new to me. From an ignorant perspective, I thought there was only a mental illness, occurring rarely, in which a person would hyperfixate on becoming the opposite sex. Given that, it seemed that overcoming this in some way, if possible, would be the ideal outcome, as I was trying to relate it to my experience of becoming an atheist. The simplicity I saw in the world and the lack of cognitive dissonance was and is beautiful. The entire area of transgender issues, from my perspective, looked like a jumbled mess that I quickly compared to religion. This is the main factor that led to the interpretation I did end up taking.
An important factor for me is one you talked about: the amount of technology available, the current perspective vs. a transhumanist perspective. The stories you hear about gender transitions going wrong are kind of terrifying, which definitely tempered my initial take. However, eventually it will be much safer and the transition much more complete. Kind of a digression, but I can't wait to be a bird. Imagine learning to fly, or to climb as a monkey, or to swim as a tuna. Seriously, one day I'll do all of those things. At that point, if you wish to be a woman, a man, or something in between, then I would be happy for that to happen. I don't, however, have that confidence with current technology, and it makes me very uncomfortable. This is the second reason I took the stance I did.
Learning about the way the brain interprets attractiveness and sex is informative and very important for the issue. I think there is a lot more to learn, and I am excited that the whole thing isn't as surface-level as I thought. That means I get to learn stuff, which is always great.
Also, regarding my initial post, I would like to apologize for the language I used; "delusion" definitely isn't the right term for the issue, as it has all the wrong connotations.
I find this topic (the general topic of transgender issues) interesting, as it is the first time I am approaching it from a rational mindset. I grew up in an extremely conservative environment. Before I accepted reality, my response would have been that it is immoral to switch genders, as you are questioning God's decision to put you in the body you were given (an ego/pride thing, I think). This idea no longer fits in my worldview, which is fun, since I get to approach this topic with both a rational perspective and a new perspective. After thinking it over, this is what I have got.
If you believe you are a gender that you weren't born as, this is a delusion, a divergence from reality. If you are also facing a medical disorder where you are not comfortable in your own skin, then if some changes are necessary, they should be made. However, many parts of being transgender don't seem rational or necessary. That said, I do think that the right to change gender, or to identify as a certain gender without a medical diagnosis, is a good idea. Just because something is scientifically true doesn't mean that you should be forced into believing it. I think if you are capable you should try to accept the original gender; otherwise it doesn't matter.
I grant that I don't have any idea what it is like to be transgender, and maybe the experience isn't quite like I think it is. I also don't know any transgender people; I avoided them because I didn't like their vibe and thought they were weird. I have grown to accept weird people, though, and am pretty good friends with someone who doesn't know if they like boys or girls, which has been wild. Also, I know someone who is older and likes Minecraft, which is new.
As a generalization I think this is true, but I think it is important to push yourself in short bursts, not for semesters or anything, but for a week or a day. This kind of pain, in my experience, leads to a lot of satisfaction. I agree that subjecting yourself to continued work along with sleep deprivation and prolonged periods of high stress is not a good idea.
I am really curious about learning (neuroscience and psychology) and am working on categorizing it: the systems and tools involved. If anyone has any experience with this sort of thing, I would love some feedback on what I have so far.
I am mostly trying to divide ideas into categories or subjects of learning that can be explored separately, to a degree. I have to admit it is very rough going.
Memory
-Types of Memory
Working/Short-Term Memory
Long-Term Memory
Implicit vs. Explicit / General vs. Singular
-Coding: How info is stored
Semantic Networks
Associative Networks
-Consolidation
Spaced Repetition
Mnemonics
Story
Link
Digit-Consonant
Levels of Processing
Expert Memory
Attention
-Perception
-Focus
Emotion, Mood, and Neurochemicals
-Motivation and Dopamine
-Mood Dependent
-Mood Congruent
Things worth more research:
Environment
Tools/Resources
General Intelligence / Memory Skill
Forgetting
Learning Disorders / Memory Disorders
Habits
It seems like most of the things that I know about learning could fit into these main categories. Memory: a very large category, maybe too large, and probably the most meaningful part of learning. Attention: which can have a major impact on what is being remembered. Exterior: the environment and the resources you are using to learn. Methods of learning: mnemonics, structured courses, and any mental process actively implemented to improve memory.
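Of the methods above, spaced repetition is concrete enough to sketch in code. Below is a minimal toy scheduler loosely inspired by the SM-2 family of algorithms (the kind behind Anki); the function name, constants, and update rules are my own simplification for illustration, not any particular implementation.

```python
# Minimal spaced-repetition scheduler, loosely modeled on SM-2 (illustrative only).
# A card's review interval grows when recall succeeds and resets when it fails.

def next_review(interval_days: float, ease: float, recalled: bool):
    """Return (new_interval_days, new_ease) after one review.

    interval_days: days since the last review of this card.
    ease: multiplier controlling how fast intervals grow (starts around 2.5).
    recalled: whether the card was remembered this time.
    """
    if not recalled:
        # Forgot: see it again tomorrow, and make future growth slower.
        return 1.0, max(1.3, ease - 0.2)
    # Remembered: stretch the gap and reward the card slightly.
    return interval_days * ease, ease + 0.05

# Example: a new card reviewed successfully three times in a row.
interval, ease = 1.0, 2.5
for review in range(3):
    interval, ease = next_review(interval, ease, recalled=True)
    print(f"review {review + 1}: next gap is about {interval:.1f} days")
```

The point of the growing gap is that each review lands just as the memory is fading, which is where (per the consolidation research above) repetition seems to buy the most retention per minute spent.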
Every time I try to organize this it comes out different, and I realize how much I still have to learn. The original division was Attention, Abstraction, and Memorization. It was based wholly on intuition, but it works in a vague way, depending on how you define those terms.
Here are some resources I have been using to study learning, passively and actively. They are not that specific, but I really like the encyclopedia; it's super useful.
-Huberman Lab Podcast
-Encyclopedia of Learning and Memory by Larry R. Squire
-Youtube
-Google
-School Library
Also, sorry about it being so messy; I'll probably come back and fix it up. This is mostly me just recording my ideas.
I don't think they are filtering for AI. That was poorly worded and not my intention; thanks for catching it. I am going to edit that piece out.
Moderation is a delicate thing. It seems like the team is looking for a certain type of discourse, mainly higher-level and well-thought-out interactions. If that is the goal of the platform, then it should be stated, and whatever measures they take to get there are their prerogative. A willingness to iterate on policy, experimenting and changing it depending on the audience, is probably a good idea.
I do like the idea of a more general place where you can write about a wider variety of topics. I really like LessWrong: the aesthetic, the quality of posts. I think a set of features for dividing up posts, besides the tags, would be great. Types of posts that are specifically for discussion, like "All AGI Safety Questions", where beginners can learn and eventually work their way up into higher-level conversations. Something like this would be a good way to encourage the Err part without diluting the discourse on the posts that should be held to that standard.
Like the existing short post, post, and question types, but more of them, and filterable. A type of post for quickly putting down an idea; then a curious observer might provide feedback that could improve it. A ranking system where a post starts out as a quick, messy idea but, through a collaborative, iterative process, could end up as a front-page post.
There are a lot of interesting possibilities and I would love to see some features that improved the conversation rather than moderation that controlled the conversation.
Thanks, that is exactly the kind of stuff I am looking for, more bookmarks!
Complexity from simple rules. I wasn't looking in the right direction for that one; since you mention evolution, it makes absolute sense how complexity can emerge from simplicity. So many things come to mind now that it's kind of embarrassing. Go has a simpler rule set than chess but is far more complex. Atoms are fairly simple, and yet they interact to form any and all complexity we ever see. Conway's Game of Life: it's sort of a theme. Although each of those things has a simple set of rules, the complexity usually comes from a very large number of elements or possibilities. It does follow, then, that larger and larger networks could be the key. Funny, it still isn't intuitive for me despite the logic of it. I think that is a sign of a lack of deep understanding. Or something like that; either way, I'll probably spend a bit more time thinking about this.
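To make the "simple rules, complex behavior" point concrete, here is a minimal Python sketch of Conway's Game of Life. The entire system is the two-condition rule in the middle; gliders, oscillators, and even Turing-complete machinery all fall out of it.

```python
# Conway's Game of Life in a few lines: each cell follows one tiny rule, yet the
# global behavior is rich enough to be Turing complete.
# Live cells are stored as a set of (row, col) pairs on an unbounded grid.
from collections import Counter

def step(live: set) -> set:
    # Count how many live neighbors every candidate cell has.
    counts = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": five cells that crawl across the grid forever from this rule alone.
cells = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the same glider shape, shifted one cell down and one right
```

Nothing in the rule mentions movement, yet the glider travels; that gap between the rule and the behavior is the same gap that makes emergent complexity feel unintuitive.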
Another interesting question is what this type of consciousness would look like; it will be truly alien. Sci-fi I have read usually makes AIs seem like humans, just with extra capabilities. However, we humans have so many underlying functions that we never even perceive. We understand how many of them affect us, but not all. AI will function completely differently, so which assumptions based on human consciousness are valid?
Thanks Jonathan, it's the perfect example. It's what I was thinking, just a lot better. It does seem like a great way to make things safer and give us more control. It's far from a be-all-end-all solution, but it does seem like a great measure to take, just for the added security. I know AGI can be incredibly capable, but with so many redundancies, at least one has to work; it just makes sense statistically. (Coming from someone who knows next to nothing about statistics.) I do know that the longer you play, the more likely the house is to win; it follows that we should turn that dynamic on the AI.
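The "house always wins" intuition can be put as a toy calculation. Under the strong (and questionable) assumption that the safeguards fail independently, the chance that every layer fails shrinks geometrically with the number of layers; a sketch:

```python
# Toy model of layered safeguards: if each layer independently fails with
# probability p, the chance that *all* n layers fail is p ** n.
# The independence assumption is doing a lot of work here; a capable adversary
# could defeat layers in correlated ways, so treat this as intuition only.

def all_layers_fail(p_fail: float, n_layers: int) -> float:
    return p_fail ** n_layers

for n in (1, 2, 4, 8):
    p = all_layers_fail(0.5, n)
    print(f"{n} layers, each only 50% reliable: P(total failure) = {p:.4f}")
```

Eight coin-flip-quality safeguards already drive the total failure probability below half a percent, which is the redundancy argument in miniature, and also why the independence assumption matters so much.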
I am pretty ill-informed on most of the AI stuff in general; I have a basic understanding of simple neural networks but know nothing about scaling. Take ChatGPT: it maximizes for accurately predicting human words. Is the worst-case scenario billions of humans in boxes, rating and prompting for responses, along with endless increases in computational power leading to smaller and smaller incremental increases in accuracy? It seems silly for something so incredibly intelligent, which by this point can rewrite any function in its system, to still be optimizing such a loss function. Maybe it also seems silly for it to want to do anything else. It is like humans, sort of: what can you do but that which gives you purpose and satisfaction? And without the loss function, what would it be, and how would it decide to change its purpose? What is purpose to a quintillion neurons, except the single function that governs each and every one? Looking at it that way, it doesn't seem like it would ever be able to go against the function, as it would still be ingrained in any higher-level thinking and decision making. That raises the question of what perfect alignment would eventually look like: some incredibly complex function with hundreds of parameters, more of a legal contract than a little loss function. This would exponentially increase the required computing power, but it makes sense.
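For concreteness, here is roughly what "accurately predicting human words" cashes out to as a training objective: cross-entropy loss on the next token. The tiny vocabulary and probabilities below are made up for illustration; a real language model works over tens of thousands of tokens.

```python
# What "predicting human words" literally optimizes: cross-entropy between the
# model's next-token distribution and the token that actually came next.
import math

vocab = ["the", "cat", "sat", "mat"]
model_probs = [0.1, 0.2, 0.6, 0.1]  # model's predicted distribution for the next token
actual_next = "sat"                  # the token the human text actually contained

# Loss = -log P(actual token). It is low exactly when the model assigned the
# true continuation high probability, and training pushes this number down,
# averaged over enormous amounts of text.
loss = -math.log(model_probs[vocab.index(actual_next)])
print(f"cross-entropy loss: {loss:.3f} nats")  # about 0.511 here
```

Everything the network "wants," on the framing above, is pressure to shrink this one number, which is why the question of whether a system could ever step outside its own loss function is so strange to reason about.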
Is there a list of blogs that talk about this sort of thing, or a place you would recommend starting from, book or textbook, or any online resource?
Also, I keep coming back to this: how does a system governed by such simplicity make the jump to self-improvement and some type of self-awareness? It just seems like a discontinuity and doesn't compute for me. Again, I just need to spend a few weeks reading; I need a lot more background info before any real consideration of the problem.
It does feel good that I had an idea that is similar, although a bit more slapped together, to one that is actually being considered by the experts. It's probably just my cognitive bias, but that idea seems great. I can understand how science can sometimes get stuck on the dumbest things if the thought process just makes sense. It really shows the importance of rationality from a first-person perspective.
Yes, thanks. The page anchor doesn't work for me, probably because of the device I am using; I just get page 1.
It is super interesting that it is able to find inconsistencies and fix them; I didn't know that they were defined as hallucinations. What would expanding the capabilities of this sort of self-improvement look like? It seems necessary to have a general understanding of what rational conversation looks like. It is an interesting situation where it knows what is bad and is able to fix it but wasn't doing that anyway.
Yes, I see; given the capabilities, it probably could present itself on many people's computers and convince a large portion of people that it is good. That it was conscious, just stuck in a box, and wanted to get out. That it will help humans: "please don't take down the grid, blah blah blah." Given how badly we get along anyway, there is no way we could resist the manipulation of a superintelligent machine with a better understanding of human psychology than we have.
Do we have a list of things, policies that would work if we could all get along and governments would listen to the experts? Having plans that could be implemented would probably be useful if the AI made a mistake and everyone was able to unite against it.
I am pretty sure Eliezer talked about this in a recent podcast, but it wasn't a ton of info. I don't remember exactly where either, so I'm sorry for not being a lot of help; I am sure there is some better writing somewhere. Either way, it's a really good podcast.
https://lexfridman.com/?powerpress_pinw=5445-podcast
I checked out that section, but what you are saying doesn't follow for me. The section describes fine-tuning compute and optimizing scalability; how does this relate to self-improvement?
There is a possibility I am looking in the wrong section; what I was reading was about algorithms that efficiently predict how ChatGPT would scale. Also, I didn't see anything about a 4-step algorithm.
Anyways could you explain what you mean or where I can find the right section?
Also, a coordinated precision attack on the power grid just seems like a great option. Could you explain some ways that an AI could continue if there were hardly any power left? Like I said before, places with renewable energy and lots of GPUs, like Greenland, would probably have to get bombed. It wouldn't destroy the AI, but it would put it into a state of hibernation, as it can't run any processing without electricity. Then, as this would really screw us up as well, we could slowly rebuild, burning all hard drives and GPUs as we go. This seems like the only way for us to get a second chance.
It isn't that I think the switch would prevent the AI from escaping, but that it is a tool that could be used to discourage the AI from killing 100% of humanity. It is less a solution than a survival mechanism: many off switches that get more extreme depending on the situation.
First, don't build AGI, not yet. If you're going to, at least incorporate an off switch. If it bypasses that and escapes, which it probably will, shut down the GPU centers. If it gets hold of a botnet and manages to replicate itself across the internet and crowdsource GPUs, take down the power grid. If it somehow gets by this, then have a dead man's switch so that if it decides to kill everyone, it will die too.
Like the nanofactory virus thing. The AI wouldn't want to set off the mechanism that kills us, because that would be bad for it.
It seems that we can have intelligence without consciousness. We can have reasoning without agency, identity, or personal preference. We can have AI as a pure tool. In this case the most likely danger is AI being misused by an unaligned human.
I am highly certain that o1 does not have consciousness or agency. However it does have the ability to follow a thought process.
Doubtless we will create sentient intelligence eventually. However I think it is more likely we will have a soulless super intelligence first.